Sample records for false positive probabilities

  1. Cumulative risk of false positive test in relation to breast symptoms in mammography screening: a historical prospective cohort study.

    PubMed

    Singh, Deependra; Pitkäniemi, Janne; Malila, Nea; Anttila, Ahti

    2016-09-01

    Mammography has been found effective as the primary screening test for breast cancer. We estimated the cumulative probability of false positive screening test results with respect to symptom history reported at screen. A historical prospective cohort study was done using individual screening data from 413,611 women aged 50-69 years with 2,627,256 invitations for mammography screening between 1992 and 2012 in Finland. Symptoms (lump, retraction, and secretion) were reported at 56,805 visits, and 48,873 visits resulted in a false positive mammography result. Generalized linear models were used to estimate the probability of at least one false positive test and true positive at screening visits. The estimates were compared among women with and without symptom history. The estimated cumulative probabilities were 18 and 6 % for false positive and true positive results, respectively. In women with a history of a lump, the cumulative probabilities of false positive test and true positive were 45 and 16 %, respectively, compared to 17 and 5 % with no reported lump. In women with a history of any given symptom, the cumulative probabilities of false positive test and true positive were 38 and 13 %, respectively. Likewise, women with a history of a 'lump and retraction' had a cumulative false positive probability of 56 %. The study showed higher cumulative risk of false positive tests and more cancers detected in women who reported symptoms compared to women who did not report symptoms at screen. The risk varies substantially, depending on symptom types and characteristics. Information on breast symptoms influences the balance of absolute benefits and harms of screening.
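
    As a rough illustration of the cumulative-risk arithmetic behind figures like the 18 % above, a minimal sketch assuming independent screening rounds and a per-round false positive rate chosen for illustration (the paper itself estimates these quantities with generalized linear models):

    ```python
    # Illustrative arithmetic only: the independence assumption and the per-round
    # false positive rate below are ours, not the paper's fitted model.
    def cumulative_risk(per_round_fp: float, n_rounds: int) -> float:
        """P(at least one false positive in n independent screening rounds)."""
        return 1.0 - (1.0 - per_round_fp) ** n_rounds

    # e.g. an assumed 2% per-screen false positive rate over 10 biennial rounds
    print(f"{cumulative_risk(0.02, 10):.3f}")  # ~0.183, near the 18% reported above
    ```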

  2. Statistics provide guidance for indigenous organic carbon detection on Mars missions.

    PubMed

    Sephton, Mark A; Carter, Jonathan N

    2014-08-01

    Data from the Viking and Mars Science Laboratory missions indicate the presence of organic compounds that are not definitively martian in origin. Both contamination and confounding mineralogies have been suggested as alternatives to indigenous organic carbon. Intuitive thought suggests that we are repeatedly obtaining data that confirms the same level of uncertainty. Bayesian statistics may suggest otherwise. If an organic detection method has a true positive to false positive ratio greater than one, then repeated organic matter detection progressively increases the probability of indigeneity. Bayesian statistics also reveal that methods with higher ratios of true positives to false positives give higher overall probabilities and that detection of organic matter in a sample with a higher prior probability of indigenous organic carbon produces greater confidence. Bayesian statistics, therefore, provide guidance for the planning and operation of organic carbon detection activities on Mars. Suggestions for future organic carbon detection missions and instruments are as follows: (i) On Earth, instruments should be tested with analog samples of known organic content to determine their true positive to false positive ratios. (ii) On the mission, for an instrument with a true positive to false positive ratio above one, it should be recognized that each positive detection of organic carbon will result in a progressive increase in the probability of indigenous organic carbon being present; repeated measurements, therefore, can overcome some of the deficiencies of a less-than-definitive test. (iii) For a fixed number of analyses, the highest true positive to false positive ratio method or instrument will provide the greatest probability that indigenous organic carbon is present. (iv) On Mars, analyses should concentrate on samples with highest prior probability of indigenous organic carbon; intuitive desires to contrast samples of high prior probability and low prior probability of indigenous organic carbon should be resisted.
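
    The updating argument can be made concrete with a small sketch. The prior and the true-positive to false-positive likelihood ratio below are illustrative assumptions, not values from the paper:

    ```python
    # Sketch of the repeated Bayesian updating argument above (hypothetical numbers).
    def update_posterior(prior: float, likelihood_ratio: float) -> float:
        """One Bayesian update: convert the prior to odds, multiply by the
        true-positive : false-positive likelihood ratio, convert back."""
        prior_odds = prior / (1.0 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    prior = 0.1   # assumed prior probability of indigenous organic carbon
    lr = 3.0      # assumed true-positive : false-positive ratio (> 1)
    p = prior
    for detection in range(1, 6):
        p = update_posterior(p, lr)
        print(f"after detection {detection}: P(indigenous) = {p:.3f}")
    # each positive detection raises the posterior, as the abstract argues
    ```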

  3. True detection limits in an experimental linearly heteroscedastic system. Part 1

    NASA Astrophysics Data System (ADS)

    Voigtman, Edward; Abraham, Kevin T.

    2011-11-01

    Using a lab-constructed laser-excited filter fluorimeter deliberately designed to exhibit linearly heteroscedastic, additive Gaussian noise, it has been shown that accurate estimates may be made of the true theoretical Currie decision levels (YC and XC) and true Currie detection limits (YD and XD) for the detection of rhodamine 6G tetrafluoroborate in ethanol. The obtained experimental values, for 5% probability of false positives and 5% probability of false negatives, were YC = 56.1 mV, YD = 125 mV, XC = 0.132 μg/mL and XD = 0.294 μg/mL. For 5% probability of false positives and 1% probability of false negatives, the obtained detection limits were YD = 158 mV and XD = 0.372 μg/mL. These decision levels and corresponding detection limits were shown to pass the ultimate test: they resulted in observed probabilities of false positives and false negatives that were statistically equivalent to the a priori specified values.
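
    For readers unfamiliar with Currie's scheme, a minimal sketch of the homoscedastic special case follows; the paper itself treats linearly heteroscedastic noise, and the blank mean, noise SD and calibration slope below are invented for illustration:

    ```python
    # Minimal homoscedastic sketch of Currie's decision level and detection limit;
    # not the paper's heteroscedastic treatment, and all numbers are assumed.
    from scipy.stats import norm

    alpha, beta = 0.05, 0.05      # specified false positive / false negative rates
    mu_blank, sigma = 10.0, 2.0   # assumed blank mean and noise SD (mV)
    slope = 425.0                 # assumed calibration slope (mV per ug/mL)

    y_C = mu_blank + norm.ppf(1 - alpha) * sigma  # decision level, response domain
    y_D = y_C + norm.ppf(1 - beta) * sigma        # detection limit, response domain
    x_C = (y_C - mu_blank) / slope                # decision level, content domain
    x_D = (y_D - mu_blank) / slope                # detection limit, content domain
    print(f"yC = {y_C:.2f} mV, yD = {y_D:.2f} mV, "
          f"xC = {x_C:.4f} ug/mL, xD = {x_D:.4f} ug/mL")
    ```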

  4. False Positive Probabilities for all Kepler Objects of Interest: 1284 Newly Validated Planets and 428 Likely False Positives

    NASA Astrophysics Data System (ADS)

    Morton, Timothy D.; Bryson, Stephen T.; Coughlin, Jeffrey L.; Rowe, Jason F.; Ravichandran, Ganesh; Petigura, Erik A.; Haas, Michael R.; Batalha, Natalie M.

    2016-05-01

    We present astrophysical false positive probability calculations for every Kepler Object of Interest (KOI)—the first large-scale demonstration of a fully automated transiting planet validation procedure. Out of 7056 KOIs, we determine that 1935 have probabilities <1% of being astrophysical false positives, and thus may be considered validated planets. Of these, 1284 have not yet been validated or confirmed by other methods. In addition, we identify 428 KOIs that are likely to be false positives, but have not yet been identified as such, though some of these may be a result of unidentified transit timing variations. A side product of these calculations is full stellar property posterior samplings for every host star, modeled as single, binary, and triple systems. These calculations use vespa, a publicly available Python package that can easily be applied to any transiting exoplanet candidate.

  5. True detection limits in an experimental linearly heteroscedastic system. Part 2

    NASA Astrophysics Data System (ADS)

    Voigtman, Edward; Abraham, Kevin T.

    2011-11-01

    Despite much different processing of the experimental fluorescence detection data presented in Part 1, essentially the same estimates were obtained for the true theoretical Currie decision levels (YC and XC) and true Currie detection limits (YD and XD). The obtained experimental values, for 5% probability of false positives and 5% probability of false negatives, were YC = 56.0 mV, YD = 125 mV, XC = 0.132 μg/mL and XD = 0.293 μg/mL. For 5% probability of false positives and 1% probability of false negatives, the obtained detection limits were YD = 158 mV and XD = 0.371 μg/mL. Furthermore, by using bootstrapping methodology on the experimental data for the standards and the analytical blank, it was possible to validate previously published experimental domain expressions for the decision levels (yC and xC) and detection limits (yD and xD). This was demonstrated by testing the generated decision levels and detection limits for their performance in regard to false positives and false negatives. In every case, the obtained numbers of false negatives and false positives were as specified a priori.

  6. Unmodeled observation error induces bias when inferring patterns and dynamics of species occurrence via aural detections

    USGS Publications Warehouse

    McClintock, Brett T.; Bailey, Larissa L.; Pollock, Kenneth H.; Simons, Theodore R.

    2010-01-01

    The recent surge in the development and application of species occurrence models has been associated with an acknowledgment among ecologists that species are detected imperfectly due to observation error. Standard models now allow unbiased estimation of occupancy probability when false negative detections occur, but this is conditional on no false positive detections and sufficient incorporation of explanatory variables for the false negative detection process. These assumptions are likely reasonable in many circumstances, but there is mounting evidence that false positive errors and detection probability heterogeneity may be much more prevalent in studies relying on auditory cues for species detection (e.g., songbird or calling amphibian surveys). We used field survey data from a simulated calling anuran system of known occupancy state to investigate the biases induced by these errors in dynamic models of species occurrence. Despite the participation of expert observers in simplified field conditions, both false positive errors and site detection probability heterogeneity were extensive for most species in the survey. We found that even low levels of false positive errors, constituting as little as 1% of all detections, can cause severe overestimation of site occupancy, colonization, and local extinction probabilities. Further, unmodeled detection probability heterogeneity induced substantial underestimation of occupancy and overestimation of colonization and local extinction probabilities. Completely spurious relationships between species occurrence and explanatory variables were also found. Such misleading inferences would likely have deleterious implications for conservation and management programs. We contend that all forms of observation error, including false positive errors and heterogeneous detection probabilities, must be incorporated into the estimation framework to facilitate reliable inferences about occupancy and its associated vital rate parameters.

  7. The probability of false positives in zero-dimensional analyses of one-dimensional kinematic, force and EMG trajectories.

    PubMed

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2016-06-14

    A false positive is the mistake of inferring an effect when none exists, and although α controls the false positive (Type I error) rate in classical hypothesis testing, a given α value is accurate only if the underlying model of randomness appropriately reflects experimentally observed variance. Hypotheses pertaining to one-dimensional (1D) (e.g. time-varying) biomechanical trajectories are most often tested using a traditional zero-dimensional (0D) Gaussian model of randomness, but variance in these datasets is clearly 1D. The purpose of this study was to determine the likelihood that analyzing smooth 1D data with a 0D model of variance will produce false positives. We first used random field theory (RFT) to predict the probability of false positives in 0D analyses. We then validated RFT predictions via numerical simulations of smooth Gaussian 1D trajectories. Results showed that, across a range of public kinematic, force/moment and EMG datasets, the median false positive rate was 0.382 and not the assumed α=0.05, even for a simple two-sample t test involving N=10 trajectories per group. The median false positive rate for experiments involving three-component vector trajectories was p=0.764. This rate increased to p=0.945 for two three-component vector trajectories, and to p=0.999 for six three-component vectors. This implies that experiments involving vector trajectories have a high probability of yielding 0D statistical significance when there is, in fact, no 1D effect. Either (a) explicit a priori identification of 0D variables or (b) adoption of 1D methods can more tightly control α. Copyright © 2016 Elsevier Ltd. All rights reserved.
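
    The inflation the authors describe is easy to reproduce numerically. A hedged sketch, with smoothing bandwidth, sample sizes and simulation count chosen arbitrarily rather than taken from the paper:

    ```python
    # Monte Carlo sketch: testing smooth 1D null trajectories pointwise against a
    # 0D critical threshold inflates the family-wise false positive rate well
    # above alpha. All simulation settings here are illustrative assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.stats import t as t_dist

    rng = np.random.default_rng(0)
    n_per_group, n_nodes, alpha, n_sim = 10, 101, 0.05, 2000
    t_crit = t_dist.ppf(1 - alpha / 2, df=2 * n_per_group - 2)  # 0D two-sample threshold

    n_false_positive = 0
    for _ in range(n_sim):
        # two groups of smooth Gaussian 1D trajectories with no true difference
        a = gaussian_filter1d(rng.standard_normal((n_per_group, n_nodes)), sigma=5, axis=1)
        b = gaussian_filter1d(rng.standard_normal((n_per_group, n_nodes)), sigma=5, axis=1)
        se = np.sqrt((a.var(axis=0, ddof=1) + b.var(axis=0, ddof=1)) / n_per_group)
        t_stat = (a.mean(axis=0) - b.mean(axis=0)) / se       # pooled two-sample t
        n_false_positive += np.any(np.abs(t_stat) > t_crit)   # any node crossing

    print(f"family-wise false positive rate ~ {n_false_positive / n_sim:.3f} "
          f"(nominal alpha = {alpha})")
    ```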

  8. Experimental investigation of false positive errors in auditory species occurrence surveys

    USGS Publications Warehouse

    Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.

    2012-01-01

    False positive errors are a significant component of many ecological data sets, which, in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, who recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls, with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.

  9. Methods for threshold determination in multiplexed assays

    DOEpatents

    Tammero, Lance F. Bentley; Dzenitis, John M; Hindson, Benjamin J

    2014-06-24

    Methods for determination of threshold values of signatures comprised in an assay are described. Each signature enables detection of a target. The methods determine a probability density function of negative samples and a corresponding false positive rate curve. A false positive criterion is established and a threshold for that signature is determined as a point at which the false positive rate curve intersects the false positive criterion. A method for quantitative analysis and interpretation of assay results together with a method for determination of a desired limit of detection of a signature in an assay are also described.
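
    A minimal sketch of the thresholding idea, assuming the negative-sample distribution is characterized empirically and the false positive criterion is a simple tail-area target (the patented method's details are not reproduced here):

    ```python
    # Hedged sketch: fit/sample the negative-sample signal distribution, treat its
    # upper tail as the false positive rate curve, and pick the threshold where
    # that curve meets the chosen criterion. All numbers are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    negatives = rng.normal(loc=100.0, scale=15.0, size=5000)  # assumed negative signals

    def threshold_for_fpr(neg_samples: np.ndarray, fpr_criterion: float) -> float:
        """Empirical threshold: the (1 - criterion) quantile of the negatives."""
        return float(np.quantile(neg_samples, 1.0 - fpr_criterion))

    thr = threshold_for_fpr(negatives, fpr_criterion=0.001)
    observed_fpr = np.mean(negatives > thr)
    print(f"threshold = {thr:.1f}, observed FPR on negatives = {observed_fpr:.4f}")
    ```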

  10. Bayesian performance metrics of binary sensors in homeland security applications

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz P.; Forrester, Thomas C.

    2008-04-01

    Bayesian performance metrics, based on such parameters as prior probability, probability of detection (or accuracy), false alarm rate, and positive predictive value, characterize the performance of binary sensors, i.e., sensors that have only a binary response: true target/false target. Such binary sensors, very common in Homeland Security, produce an alarm that can be true or false. They include: X-ray airport inspection, IED inspections, product quality control, cancer medical diagnosis, part of ATR, and many others. In this paper, we analyze direct and inverse conditional probabilities in the context of Bayesian inference and binary sensors, using X-ray luggage inspection statistical results as a guideline.
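
    The Bayesian inversion at the heart of such metrics is short enough to sketch. All numbers below are illustrative assumptions:

    ```python
    # Sketch of the inverse conditional probability: turning a binary sensor's
    # detection probability and false alarm rate into a positive predictive
    # value for a given prior. Numbers are illustrative, not from the paper.
    def positive_predictive_value(prior: float, p_detect: float,
                                  p_false_alarm: float) -> float:
        """P(true target | alarm) via Bayes' theorem."""
        p_alarm = p_detect * prior + p_false_alarm * (1.0 - prior)
        return p_detect * prior / p_alarm

    # a rare threat: even a good sensor yields mostly false alarms
    print(positive_predictive_value(prior=0.001, p_detect=0.95,
                                    p_false_alarm=0.01))  # ~0.087
    ```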

  11. Accounting for false-positive acoustic detections of bats using occupancy models

    USGS Publications Warehouse

    Clement, Matthew J.; Rodhouse, Thomas J.; Ormsbee, Patricia C.; Szewczak, Joseph M.; Nichols, James D.

    2014-01-01

    4. Synthesis and applications. Our results suggest that false positives sufficient to affect inferences may be common in acoustic surveys for bats. We demonstrate an approach that can estimate occupancy, regardless of the false-positive rate, when acoustic surveys are paired with capture surveys. Applications of this approach include monitoring the spread of White-Nose Syndrome, estimating the impact of climate change and informing conservation listing decisions. We calculate a site-specific probability of occupancy, conditional on survey results, which could inform local permitting decisions, such as for wind energy projects. More generally, the magnitude of false positives suggests that false-positive occupancy models can improve accuracy in research and monitoring of bats and provide wildlife managers with more reliable information.

  12. Assessing environmental DNA detection in controlled lentic systems.

    PubMed

    Moyer, Gregory R; Díaz-Ferguson, Edgardo; Hill, Jeffrey E; Shea, Colin

    2014-01-01

    Little consideration has been given to environmental DNA (eDNA) sampling strategies for rare species. The certainty of species detection relies on understanding false positive and false negative error rates. We used artificial ponds together with logistic regression models to assess the detection of African jewelfish eDNA at varying fish densities (0, 0.32, 1.75, and 5.25 fish/m3). Our objectives were to determine the most effective water stratum for eDNA detection, estimate true and false positive eDNA detection rates, and assess the number of water samples necessary to minimize the risk of false negatives. There were 28 eDNA detections in 324 1-L water samples collected from four experimental ponds. The best-approximating model indicated that the per-L-sample probability of eDNA detection was 4.86 times more likely for every 2.53 fish/m3 (1 SD) increase in fish density and 1.67 times less likely for every 1.02 °C (1 SD) increase in water temperature. The best section of the water column to detect eDNA was the surface and to a lesser extent the bottom. Although no false positives were detected, the estimated likely number of false positives in samples from ponds that contained fish averaged 3.62. At high densities of African jewelfish, 3-5 L of water provided a >95% probability for the presence/absence of its eDNA. Conversely, at moderate and low densities, the number of water samples necessary to achieve a >95% probability of eDNA detection approximated 42-73 and >100 L, respectively. Potential biases associated with incomplete detection of eDNA could be alleviated via formal estimation of eDNA detection probabilities under an occupancy modeling framework; alternatively, the filtration of hundreds of liters of water may be required to achieve a high (e.g., 95%) level of certainty that African jewelfish eDNA will be detected at low densities (i.e., <0.32 fish/m3 or 1.75 g/m3).
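
    The sample-size arithmetic behind the 3-5 L versus >100 L figures follows from treating each 1-L sample as an independent Bernoulli trial. A sketch with assumed per-sample detection probabilities (not the paper's fitted values):

    ```python
    # Sketch of the cumulative detection logic: if each 1-L sample independently
    # detects eDNA with probability p, the samples needed for >=95% cumulative
    # detection follow from (1-p)^n <= 0.05. Per-sample p values are assumptions.
    import math

    def samples_for_detection(p_per_sample: float, target: float = 0.95) -> int:
        return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_per_sample))

    for density, p in [("high", 0.65), ("moderate", 0.06), ("low", 0.025)]:
        print(f"{density} density (p={p}): {samples_for_detection(p)} 1-L samples")
    # high -> 3, moderate -> 49, low -> 119: the same order of magnitude
    # as the 3-5, 42-73 and >100 L ranges reported above
    ```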

  13. Abort Trigger False Positive and False Negative Analysis Methodology for Threshold-Based Abort Detection

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Cruz, Jose A.; Johnson, Stephen B.; Lo, Yunnhon

    2015-01-01

    This paper describes a quantitative methodology for bounding the false positive (FP) and false negative (FN) probabilities associated with a human-rated launch vehicle abort trigger (AT) that includes sensor data qualification (SDQ). In this context, an AT is a hardware and software mechanism designed to detect the existence of a specific abort condition. Also, SDQ is an algorithmic approach used to identify sensor data suspected of being corrupt so that suspect data does not adversely affect an AT's detection capability. The FP and FN methodologies presented here were developed to support estimation of the probabilities of loss of crew and loss of mission for the Space Launch System (SLS), which is being developed by the National Aeronautics and Space Administration (NASA). The paper provides a brief overview of system health management as an extension of control theory, and describes how ATs and the calculation of FP and FN probabilities relate to this theory. The discussion leads to a detailed presentation of the FP and FN methodology and an example showing how the FP and FN calculations are performed. This detailed presentation includes a methodology for calculating the change in FP and FN probabilities that result from including SDQ in the AT architecture. To avoid proprietary and sensitive data issues, the example incorporates a mixture of open literature and fictitious reliability data. Results presented in the paper demonstrate the effectiveness of the approach in providing quantitative estimates that bound the probability of a FP or FN abort determination.

  14. Designing occupancy studies when false-positive detections occur

    USGS Publications Warehouse

    Clement, Matthew

    2016-01-01

    1. Recently, estimators have been developed to estimate occupancy probabilities when false-positive detections occur during presence-absence surveys. Some of these estimators combine different types of survey data to improve estimates of occupancy. With these estimators, there is a tradeoff between the number of sample units surveyed, and the number and type of surveys at each sample unit. Guidance on efficient design of studies when false positives occur is unavailable. 2. For a range of scenarios, I identified survey designs that minimized the mean square error of the estimate of occupancy. I considered an approach that uses one survey method and two observation states and an approach that uses two survey methods. For each approach, I used numerical methods to identify optimal survey designs when model assumptions were met and parameter values were correctly anticipated, when parameter values were not correctly anticipated, and when the assumption of no unmodelled detection heterogeneity was violated. 3. Under the approach with two observation states, false positive detections increased the number of recommended surveys, relative to standard occupancy models. If parameter values could not be anticipated, pessimism about detection probabilities avoided poor designs. Detection heterogeneity could require more or fewer repeat surveys, depending on parameter values. If model assumptions were met, the approach with two survey methods was inefficient. However, with poor anticipation of parameter values, with detection heterogeneity, or with removal sampling schemes, combining two survey methods could improve estimates of occupancy. 4. Ignoring false positives can yield biased parameter estimates, yet false positives greatly complicate the design of occupancy studies. Specific guidance for major types of false-positive occupancy models, and for two assumption violations common in field data, can conserve survey resources. This guidance can be used to design efficient monitoring programs and studies of species occurrence, species distribution, or habitat selection, when false positives occur during surveys.

  15. The False Premises and False Promises of the Movement to Privatize Public Education.

    ERIC Educational Resources Information Center

    Hawley, Willis D.

    1995-01-01

    Argues that the movement to provide parents with financial incentives to send students to private schools will increase the racial, ethnic, and socioeconomic homogeneity of American schools. Six common assumptions about the positive effects of privatizing education are examined and deemed false. Probable costs of tuition vouchers for private…

  16. The reproducibility of research and the misinterpretation of p-values

    PubMed Central

    2017-01-01

    We wish to answer this question: If you observe a ‘significant’ p-value after doing a single unbiased experiment, what is the probability that your result is a false positive? The weak evidence provided by p-values between 0.01 and 0.05 is explored by exact calculations of false positive risks. When you observe p = 0.05, the odds in favour of there being a real effect (given by the likelihood ratio) are about 3 : 1. This is far weaker evidence than the odds of 19 to 1 that might, wrongly, be inferred from the p-value. And if you want to limit the false positive risk to 5%, you would have to assume that you were 87% sure that there was a real effect before the experiment was done. If you observe p = 0.001 in a well-powered experiment, it gives a likelihood ratio of almost 100 : 1 odds on there being a real effect. That would usually be regarded as conclusive. But the false positive risk would still be 8% if the prior probability of a real effect were only 0.1. And, in this case, if you wanted to achieve a false positive risk of 5% you would need to observe p = 0.00045. It is recommended that the terms ‘significant’ and ‘non-significant’ should never be used. Rather, p-values should be supplemented by specifying the prior probability that would be needed to produce a specified (e.g. 5%) false positive risk. It may also be helpful to specify the minimum false positive risk associated with the observed p-value. Despite decades of warnings, many areas of science still insist on labelling a result of p < 0.05 as ‘statistically significant’. This practice must contribute to the lack of reproducibility in some areas of science. This is before you get to the many other well-known problems, like multiple comparisons, lack of randomization and p-hacking. Precise inductive inference is impossible and replication is the only way to be sure. Science is endangered by statistical misunderstanding, and by senior people who impose perverse incentives on scientists. PMID:29308247
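
    The arithmetic behind these false positive risks can be sketched with the simple "significant means p ≤ α" formulation; the paper's exact "p-equals" likelihood-ratio calculation differs in detail, and the power and priors below are illustrative:

    ```python
    # Hedged sketch of false positive risk under the "p <= alpha" convention
    # (not the paper's exact p-equals calculation). Power and priors assumed.
    def false_positive_risk(alpha: float, power: float, prior: float) -> float:
        """P(no real effect | significant result) for a test run at level alpha."""
        sig_and_null = alpha * (1.0 - prior)   # significant results from true nulls
        sig_and_real = power * prior           # significant results from real effects
        return sig_and_null / (sig_and_null + sig_and_real)

    for prior in (0.5, 0.1):
        fpr = false_positive_risk(alpha=0.05, power=0.8, prior=prior)
        print(f"prior = {prior}: false positive risk = {fpr:.2f}")
    # prior 0.5 -> ~0.06; prior 0.1 -> 0.36, echoing the warning about low priors
    ```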

  17. The correct estimate of the probability of false detection of the matched filter in weak-signal detection problems

    NASA Astrophysics Data System (ADS)

    Vio, R.; Andreani, P.

    2016-05-01

    The reliable detection of weak signals is a critical issue in many astronomical contexts and may have severe consequences for determining number counts and luminosity functions, but also for optimizing the use of telescope time in follow-up observations. Because of its optimal properties, one of the most popular and widely used detection techniques is the matched filter (MF). This is a linear filter designed to maximise the detectability of a signal of known structure that is buried in additive Gaussian random noise. In this work we show that in the very common situation where the number and position of the searched signals within a data sequence (e.g. an emission line in a spectrum) or an image (e.g. a point-source in an interferometric map) are unknown, this technique, when applied in its standard form, may severely underestimate the probability of false detection. This is because the correct use of the MF relies upon a priori knowledge of the position of the signal of interest. In the absence of this information, the statistical significance of features that are actually noise is overestimated and detections are claimed that are actually spurious. For this reason, we present an alternative method of computing the probability of false detection that is based on the probability density function (PDF) of the peaks of a random field. It is able to provide a correct estimate of the probability of false detection for the one-, two- and three-dimensional cases. We apply this technique to a real two-dimensional interferometric map obtained with ALMA.
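
    The core of the underestimation is the look-elsewhere effect, which a few lines make plain. Treating the N search positions as independent is a simplifying assumption (real maps are correlated, which is why the authors use the peak PDF of a random field), but it shows the scale of the problem:

    ```python
    # Look-elsewhere sketch: scanning N positions for peaks inflates the chance
    # that pure noise crosses a fixed per-position threshold somewhere.
    # Independence across positions is a simplifying assumption.
    from scipy.stats import norm

    per_position_pfa = 1e-3
    threshold = norm.ppf(1 - per_position_pfa)  # ~3.09 sigma per-position threshold
    for n_positions in (1, 100, 10_000):
        global_pfa = 1.0 - (1.0 - per_position_pfa) ** n_positions
        print(f"threshold {threshold:.2f} sigma, N = {n_positions}: "
              f"P(false detection anywhere) = {global_pfa:.4f}")
    ```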

  18. False positive fecal coliform in biosolid samples assayed using A-1 medium.

    PubMed

    Baker, Katherine H; Redmond, Brady; Herson, Diane S

    2005-01-01

    Two most probable number (MPN) methods, lauryl tryptose broth with Escherichia coli broth confirmation and direct A-1 broth incubation (A-1), were compared for the enumeration of fecal coliform in lime-treated biosolid. Fecal coliform numbers were significantly higher using the A-1 method. Analysis of positive A-1 tubes, however, indicated that a high percentage of these were false positives. Therefore, the use of A-1 broth for 40 CFR Part 503 Pathogen Reduction (CFR, 1993) compliance testing is not recommended.

  19. Similarity based false-positive reduction for breast cancer using radiographic and pathologic imaging features

    NASA Astrophysics Data System (ADS)

    Pai, Akshay; Samala, Ravi K.; Zhang, Jianying; Qian, Wei

    2010-03-01

    Mammography reading by radiologists and breast tissue image interpretation by pathologists often lead to high false positive (FP) rates. Similarly, current Computer Aided Diagnosis (CADx) methods tend to concentrate more on sensitivity, thus increasing the FP rates. A novel method is introduced here which employs a similarity-based approach to decrease the FP rate in the diagnosis of microcalcifications. This method employs Principal Component Analysis (PCA) and similarity metrics in order to achieve the proposed goal. The training and testing sets are divided into generalized (Normal and Abnormal) and more specific (Abnormal, Normal, Benign) classes. The performance of this method as a standalone classification system is evaluated in both cases (general and specific). In another approach, the probability of each case belonging to a particular class is calculated. If the probabilities are too close to classify, the augmented CADx system can be instructed to perform a detailed analysis of such cases. For normal cases with high probability, no further processing is necessary, thus reducing the computation time. Hence, this novel method can be employed in cascade with CADx to reduce the FP rate and also avoid unnecessary computational time. Using this methodology, false positive rates of 8% and 11% are achieved for mammography and cellular images, respectively.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana Kelly; Kurt Vedros; Robert Youngblood

    This paper examines false indication probabilities in the context of the Mitigating System Performance Index (MSPI), in order to investigate the pros and cons of different approaches to resolving two coupled issues: (1) sensitivity to the prior distribution used in calculating the Bayesian-corrected unreliability contribution to the MSPI, and (2) whether (in a particular plant configuration) to model the fuel oil transfer pump (FOTP) as a separate component, or integrally to its emergency diesel generator (EDG). False indication probabilities were calculated for the following situations: (1) all component reliability parameters at their baseline values, so that the true indication is green, meaning that an indication of white or above would be false positive; (2) one or more components degraded to the extent that the true indication would be (mid) white, and “false” would be green (negative) or yellow (negative) or red (negative). In key respects, this was the approach taken in NUREG-1753. The prior distributions examined were the constrained noninformative (CNI) prior used currently by the MSPI, a mixture of conjugate priors, the Jeffreys noninformative prior, a nonconjugate log(istic)-normal prior, and the minimally informative prior investigated in (Kelly et al., 2010). The mid-white performance state was set at ΔCDF = √10 × 10⁻⁶/yr. For each simulated time history, a check is made of whether the calculated ΔCDF is above or below 10⁻⁶/yr. If the parameters were at their baseline values, and ΔCDF > 10⁻⁶/yr, this is counted as a false positive. Conversely, if one or all of the parameters are set to values corresponding to ΔCDF > 10⁻⁶/yr but that time history’s ΔCDF < 10⁻⁶/yr, this is counted as a false negative indication. The false indication (positive or negative) probability is then estimated as the number of false positive or negative counts divided by the number of time histories (100,000). Results are presented for a set of base case parameter values, and three sensitivity cases in which the number of FOTP demands was reduced, along with the Birnbaum importance of the FOTP.
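
    The counting scheme described above can be sketched in a few lines. The lognormal scatter and baseline value below are invented for illustration and are not the paper's simulation model:

    ```python
    # Monte Carlo sketch of false-indication counting: simulate time histories of
    # an estimated delta-CDF around a true (green) baseline and count crossings
    # of the 1e-6/yr threshold. The scatter model here is an assumption only.
    import numpy as np

    rng = np.random.default_rng(2)
    n_histories = 100_000
    threshold = 1e-6            # delta-CDF threshold for a white indication
    baseline_delta_cdf = 3e-7   # assumed true performance (green)

    # assumed lognormal estimation noise on each simulated time history
    estimates = baseline_delta_cdf * rng.lognormal(mean=0.0, sigma=0.5,
                                                   size=n_histories)
    false_positive_prob = np.mean(estimates > threshold)
    print(f"estimated false positive indication probability: {false_positive_prob:.4f}")
    ```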

  1. Optimising in situ gamma measurements to identify the presence of radioactive particles in land areas.

    PubMed

    Rostron, Peter D; Heathcote, John A; Ramsey, Michael H

    2014-12-01

    High-coverage in situ surveys with gamma detectors are the best means of identifying small hotspots of activity, such as radioactive particles, in land areas. Scanning surveys can produce rapid results, but the probabilities of obtaining false positive or false negative errors are often unknown, and they may not satisfy other criteria such as estimation of mass activity concentrations. An alternative is to use portable gamma-detectors that are set up at a series of locations in a systematic sampling pattern, where any positive measurements are subsequently followed up in order to determine the exact location, extent and nature of the target source. The preliminary survey is typically designed using settings of detector height, measurement spacing and counting time that are based on convenience, rather than using settings that have been calculated to meet requirements. This paper introduces the basis of a repeatable method of setting these parameters at the outset of a survey, for pre-defined probabilities of false positive and false negative errors in locating spatially small radioactive particles in land areas. It is shown that an un-collimated detector is more effective than a collimated detector that might typically be used in the field. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Randomly auditing research labs could be an affordable way to improve research quality: A simulation study

    PubMed Central

    Zardo, Pauline; Graves, Nicholas

    2018-01-01

    The “publish or perish” incentive drives many researchers to increase the quantity of their papers at the cost of quality. Lowering quality increases the number of false positive errors which is a key cause of the reproducibility crisis. We adapted a previously published simulation of the research world where labs that produce many papers are more likely to have “child” labs that inherit their characteristics. This selection creates a competitive spiral that favours quantity over quality. To try to halt the competitive spiral we added random audits that could detect and remove labs with a high proportion of false positives, and also improved the behaviour of “child” and “parent” labs who increased their effort and so lowered their probability of making a false positive error. Without auditing, only 0.2% of simulations did not experience the competitive spiral, defined by a convergence to the highest possible false positive probability. Auditing 1.35% of papers avoided the competitive spiral in 71% of simulations, and auditing 1.94% of papers in 95% of simulations. Audits worked best when they were only applied to established labs with 50 or more papers compared with labs with 25 or more papers. Adding a ±20% random error to the number of false positives to simulate peer reviewer error did not reduce the audits’ efficacy. The main benefit of the audits was via the increase in effort in “child” and “parent” labs. Audits improved the literature by reducing the number of false positives from 30.2 per 100 papers to 12.3 per 100 papers. Auditing 1.94% of papers would cost an estimated $15.9 million per year if applied to papers produced by National Institutes of Health funding. Our simulation greatly simplifies the research world and there are many unanswered questions about if and how audits would work that can only be addressed by a trial of an audit. PMID:29649314

  3. Randomly auditing research labs could be an affordable way to improve research quality: A simulation study.

    PubMed

    Barnett, Adrian G; Zardo, Pauline; Graves, Nicholas

    2018-01-01

    The "publish or perish" incentive drives many researchers to increase the quantity of their papers at the cost of quality. Lowering quality increases the number of false positive errors which is a key cause of the reproducibility crisis. We adapted a previously published simulation of the research world where labs that produce many papers are more likely to have "child" labs that inherit their characteristics. This selection creates a competitive spiral that favours quantity over quality. To try to halt the competitive spiral we added random audits that could detect and remove labs with a high proportion of false positives, and also improved the behaviour of "child" and "parent" labs who increased their effort and so lowered their probability of making a false positive error. Without auditing, only 0.2% of simulations did not experience the competitive spiral, defined by a convergence to the highest possible false positive probability. Auditing 1.35% of papers avoided the competitive spiral in 71% of simulations, and auditing 1.94% of papers in 95% of simulations. Audits worked best when they were only applied to established labs with 50 or more papers compared with labs with 25 or more papers. Adding a ±20% random error to the number of false positives to simulate peer reviewer error did not reduce the audits' efficacy. The main benefit of the audits was via the increase in effort in "child" and "parent" labs. Audits improved the literature by reducing the number of false positives from 30.2 per 100 papers to 12.3 per 100 papers. Auditing 1.94% of papers would cost an estimated $15.9 million per year if applied to papers produced by National Institutes of Health funding. Our simulation greatly simplifies the research world and there are many unanswered questions about if and how audits would work that can only be addressed by a trial of an audit.

  4. Oxybuprocaine induces a false-positive response in immunochromatographic SAS Adeno Test.

    PubMed

    Hoshino, Takeshi; Takanashi, Taiji; Okada, Morio; Uchida, Sunao

    2002-04-01

    To investigate whether a solution of oxybuprocaine hydrochloride, 0.4%, results in a false-positive response in an immunochromatographic SAS Adeno Test. Experimental study. Physiologic saline and 2% lidocaine. Each chemical (100 μl) was diluted in a transport medium. Five drops (200 μl) of the resultant solution were dispensed into the round sample well of a test device. Fifteen samples were tested in each group. Ten minutes after the start of the test, a colored line in the "specimen" portion of the test membrane was visually read as positive or negative by a masked technician. No positive reaction was observed in the control groups (physiologic saline and lidocaine). A false-positive reaction was observed in six samples (33.3%) in the oxybuprocaine group. The positive rate was significantly higher in the oxybuprocaine group compared with the control groups (P = 0.0062, Fisher's exact probability test). Oxybuprocaine may induce a false-positive reaction in an immunochromatographic SAS Adeno Test. We recommend the use of lidocaine, instead of oxybuprocaine, for local anesthesia in taking eye swabs from patients with suspected adenovirus infection.

  5. [False positive results or what's the probability that a significant P-value indicates a true effect?].

    PubMed

    Cucherat, Michel; Laporte, Silvy

    2017-09-01

    The use of statistical tests is central to the clinical trial. At the statistical level, obtaining P<0.05 allows one to claim the effectiveness of the new studied treatment. However, given its underlying mathematical logic, the concept of the "P value" is often misinterpreted. It is often mistakenly equated with the probability that the treatment is ineffective. In fact, the "P value" gives only indirect information about the plausibility of the existence of a treatment effect. With P<0.05, the probability that the treatment is effective may vary depending on other statistical parameters: the alpha risk level, the power of the study, and especially the a priori probability of the existence of a treatment effect. A "P<0.05" does not always carry the same degree of certainty. Thus there are situations in which the risk that a "P<0.05" result is in reality a false positive is very high. This is the case if the power is low, if there is inflation of the alpha risk, or if the result is exploratory or a chance discovery. This possibility is important to take into consideration when interpreting the results of clinical trials, in order to avoid promoting results that are significant in appearance but likely to be false positives. Copyright © 2017 Société française de pharmacologie et de thérapeutique. Published by Elsevier Masson SAS. All rights reserved.

  6. Occurrence of CPPopt Values in Uncorrelated ICP and ABP Time Series.

    PubMed

    Cabeleira, M; Czosnyka, M; Liu, X; Donnelly, J; Smielewski, P

    2018-01-01

    Optimal cerebral perfusion pressure (CPPopt) is a concept that uses the pressure reactivity (PRx)-CPP relationship over a given period to find a value of CPP at which PRx shows best autoregulation. It has been proposed that this relationship be modelled by a U-shaped curve, where the minimum is interpreted as being the CPP value that corresponds to the strongest autoregulation. Owing to the nature of the calculation and the signals involved in it, the occurrence of CPPopt curves generated by non-physiological variations of intracranial pressure (ICP) and arterial blood pressure (ABP), termed here "false positives", is possible. Such random occurrences would artificially increase the yield of CPPopt values and decrease the reliability of the methodology. In this work, we studied the probability of the random occurrence of false positives and compared the effect of the parameters used for CPPopt calculation on this probability. To simulate the occurrence of false positives, uncorrelated ICP and ABP time series were generated by destroying the relationship between the waves in real recordings. The CPPopt algorithm was then applied to these new series and the number of false positives was counted for different values of the algorithm's parameters. The percentage of CPPopt curves generated from uncorrelated data was demonstrated to be 11.5%. This value can be minimised by tuning some of the calculation parameters, such as increasing the calculation window and increasing the minimum PRx span accepted on the curve.

  7. A national study of breast and colorectal cancer patients' decision-making for novel personalized medicine genomic diagnostics.

    PubMed

    Issa, Amalia M; Tufail, Waqas; Atehortua, Nelson; McKeever, John

    2013-05-01

    Molecular diagnostics are increasingly being used to help guide decision-making for personalized medical treatment of breast and colorectal cancer patients. The main aim of this study was to better understand and determine breast and colorectal cancer patients' decision-making strategies and the trade-offs they make in deciding about characteristics of molecular genomic diagnostics for breast and colorectal cancer. We surveyed a nationally representative sample of 300 breast and colorectal cancer patients using a previously developed web-administered instrument. Eligibility criteria included patients aged 18 years and older with either breast or colorectal cancer. We explored several attributes and attribute levels of molecular genomic diagnostics in 20 scenarios. Our analysis revealed that both breast and colorectal cancer patients weighted the capability of molecular genomic diagnostics to determine the probability of treatment efficacy as being of greater importance than information provided to detect adverse events. The probability of either false-positive or -negative results was ranked highly as a potential barrier by both breast and colorectal patients. However, 78.6% of breast cancer patients ranked the possibility of a 'false-negative test result leading to undertreatment' higher than the 'chance of a false positive, which may lead to overtreatment' (68%). This finding contrasted with the views of colorectal cancer patients who ranked the chance of a false positive as being of greater concern than a false negative (72.8 vs 63%). Overall, cancer patients exhibited a high willingness to accept and pay for genomic diagnostic tests, especially among breast cancer patients. Cancer patients seek a test accuracy rate of 90% or higher. Breast and colorectal cancer patients' decisions about genomic diagnostics are influenced more by the probability of being cured than by avoiding potential severe adverse events. This study provides insights into the relative weight that breast and colorectal cancer patients place on various aspects of molecular genomic diagnostics, and the trade-offs they are willing to make among attributes of such tests.

  8. Clinical Ultrasound Is Safe and Highly Specific for Acute Appendicitis in Moderate to High Pre-test Probability Patients.

    PubMed

    Corson-Knowles, Daniel; Russell, Frances M

    2018-05-01

    Clinical ultrasound (CUS) is highly specific for the diagnosis of acute appendicitis but is operator-dependent. The goal of this study was to determine if a heterogeneous group of emergency physicians (EP) could diagnose acute appendicitis on CUS in patients with a moderate to high pre-test probability. This was a prospective, observational study of a convenience sample of adult and pediatric patients with suspected appendicitis. Sonographers received a structured, 20-minute CUS training on appendicitis prior to patient enrollment. The presence of a dilated (>6 mm diameter), non-compressible, blind-ending tubular structure was considered a positive study. Non-visualization or indeterminate studies were considered negative. We collected pre-test probability of acute appendicitis based on a 10-point visual analog scale (moderate to high was defined as >3), and confidence in CUS interpretation. The primary objective was measured by comparing CUS findings to surgical pathology and one week follow-up. We enrolled 105 patients; 76 had moderate to high pre-test probability. Of these, 24 were children. The rate of appendicitis was 36.8% in those with moderate to high pre-test probability. CUS examinations were recorded by 33 different EPs. The sensitivity, specificity, and positive and negative likelihood ratios of EP-performed CUS in patients with moderate to high pre-test probability were 42.8% (95% confidence interval [CI] [25-62.5%]), 97.9% (95% CI [87.5-99.8%]), 20.7 (95% CI [2.8-149.9]) and 0.58 (95% CI [0.42-0.8]), respectively. The 16 false negative scans were all interpreted as indeterminate. There was one false positive CUS diagnosis; however, the sonographer reported low confidence of 2/10. A heterogeneous group of EP sonographers can safely identify acute appendicitis with high specificity in patients with moderate to high pre-test probability. This data adds support for surgical consultation without further imaging beyond CUS in the appropriate clinical setting.

  9. Preliminary performance assessment of biotoxin detection for UWS applications using a MicroChemLab device.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    VanderNoot, Victoria A.; Haroldsen, Brent L.; Renzi, Ronald F.

    2010-03-01

    In a multiyear research agreement with Tenix Investments Pty. Ltd., Sandia has been developing field deployable technologies for detection of biotoxins in water supply systems. The unattended water sensor or UWS employs microfluidic chip based gel electrophoresis for monitoring biological analytes in a small integrated sensor platform. This instrument collects, prepares, and analyzes water samples in an automated manner. Sample analysis is done using the μChemLab™ analysis module. This report uses analysis results of two datasets collected using the UWS to estimate performance of the device. The first dataset is made up of samples containing ricin at varying concentrations and is used for assessing instrument response and detection probability. The second dataset is comprised of analyses of water samples collected at a water utility, which are used to assess the false positive probability. The analyses of the two sets are used to estimate the Receiver Operating Characteristic or ROC curves for the device at one set of operational and detection algorithm parameters. For these parameters and based on a statistical estimate, the ricin probability of detection is about 0.9 at a concentration of 5 nM for a false positive probability of 1 × 10⁻⁶.

  10. An empirical investigation into the role of subjective prior probability in searching for potentially missing items

    PubMed Central

    Fanshawe, T. R.

    2015-01-01

    There are many examples from the scientific literature of visual search tasks in which the length, scope and success rate of the search have been shown to vary according to the searcher's expectations of whether the search target is likely to be present. This phenomenon has major practical implications, for instance in cancer screening, when the prevalence of the condition is low and the consequences of a missed disease diagnosis are severe. We consider this problem from an empirical Bayesian perspective to explain how the effect of a low prior probability, subjectively assessed by the searcher, might impact on the extent of the search. We show how the searcher's posterior probability that the target is present depends on the prior probability and the proportion of possible target locations already searched, and also consider the implications of imperfect search, when the probability of false-positive and false-negative decisions is non-zero. The theoretical results are applied to two studies of radiologists' visual assessment of pulmonary lesions on chest radiographs. Further application areas in diagnostic medicine and airport security are also discussed. PMID:26587267
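
    The posterior described above has a compact form. A sketch, assuming a uniform prior over possible target locations and an assumed miss probability for imperfect search:

    ```python
    # Sketch of the search posterior: P(target present | nothing found) after a
    # fraction of locations has been examined, allowing an imperfect search.
    # The uniform-location assumption and all numbers are illustrative.
    def posterior_present(prior: float, fraction_searched: float,
                          p_miss: float = 0.0) -> float:
        """P(target present | nothing found in the searched fraction)."""
        # if present, it escapes notice by lying in the unsearched part
        # or by being missed within the searched part
        p_no_find_given_present = (1.0 - fraction_searched) + fraction_searched * p_miss
        num = prior * p_no_find_given_present
        return num / (num + (1.0 - prior))

    # low-prevalence screening: after 90% of locations are cleared, the
    # posterior drops well below the already-low prior
    print(posterior_present(prior=0.01, fraction_searched=0.9, p_miss=0.1))  # ~0.0019
    ```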

  11. Human versus automation in responding to failures: an expected-value analysis

    NASA Technical Reports Server (NTRS)

    Sheridan, T. B.; Parasuraman, R.

    2000-01-01

    A simple analytical criterion is provided for deciding whether a human or automation is best for a failure detection task. The method is based on expected-value decision theory in much the same way as is signal detection. It requires specification of the probabilities of misses (false negatives) and false alarms (false positives) for both the human and the automation being considered, as well as factors independent of the choice: the costs and benefits of incorrect and correct decisions and the prior probability of failure. The method can also serve as a basis for comparing different modes of automation. Some limiting cases of application are discussed, as are some decision criteria other than expected value. Actual or potential applications include the design and evaluation of any system in which either humans or automation are being considered.
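
    A hedged sketch of the expected-value criterion, with all payoffs and probabilities invented for illustration:

    ```python
    # Sketch of the expected-value comparison: score each agent (human or
    # automation) from its miss and false alarm probabilities plus task payoffs.
    # All numbers below are illustrative assumptions, not the paper's.
    def expected_value(p_fail: float, p_miss: float, p_fa: float,
                       benefit_hit: float, cost_miss: float,
                       benefit_cr: float, cost_fa: float) -> float:
        hits = p_fail * (1.0 - p_miss) * benefit_hit
        misses = p_fail * p_miss * -cost_miss
        correct_rejections = (1.0 - p_fail) * (1.0 - p_fa) * benefit_cr
        false_alarms = (1.0 - p_fail) * p_fa * -cost_fa
        return hits + misses + correct_rejections + false_alarms

    # illustrative trade-off: automation misses less but false-alarms more
    human = expected_value(0.01, 0.30, 0.01, 100, 1000, 1, 10)
    autom = expected_value(0.01, 0.05, 0.10, 100, 1000, 1, 10)
    print(f"human EV = {human:.2f}, automation EV = {autom:.2f}")
    ```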

  12. VizieR Online Data Catalog: Stellar and planet properties for K2 candidates (Montet+, 2015)

    NASA Astrophysics Data System (ADS)

    Montet, B. T.; Morton, T. D.; Foreman-Mackey, D.; Johnson, J. A.; Hogg, D. W.; Bowler, B. P.; Latham, D. W.; Bieryla, A.; Mann, A. W.

    2017-09-01

    In this paper, we present stellar and planetary parameters for each system. We also analyze the false positive probability (FPP) of each system using vespa, a new publicly available, general-purpose implementation of the Morton (2012ApJ...761....6M) procedure to calculate FPPs for transiting planets. Through this analysis, as well as archival imaging, ground-based seeing-limited survey data, and adaptive optics imaging, we are able to confirm 21 of these systems as transiting planets at the 99% confidence level. Additionally, we identify six systems as false positives. (5 data files).

  13. A Protocol Layer Trust-Based Intrusion Detection Scheme for Wireless Sensor Networks

    PubMed Central

    Wang, Jian; Jiang, Shuai; Fapojuwo, Abraham O.

    2017-01-01

    This article proposes a protocol layer trust-based intrusion detection scheme for wireless sensor networks. Unlike existing work, the trust value of a sensor node is evaluated according to the deviations of key parameters at each protocol layer, considering that attacks initiated at different protocol layers will inevitably have impacts on the parameters of the corresponding protocol layers. For simplicity, the paper mainly considers three aspects of trustworthiness, namely physical layer trust, media access control layer trust and network layer trust. The per-layer trust metrics are then combined to determine the overall trust metric of a sensor node. The performance of the proposed intrusion detection mechanism is then analyzed using the t-distribution to derive analytical results of false positive and false negative probabilities. Numerical analytical results, validated by simulation results, are presented in different attack scenarios. It is shown that the proposed protocol layer trust-based intrusion detection scheme outperforms a state-of-the-art scheme in terms of detection probability and false positive probability, demonstrating its usefulness for detecting cross-layer attacks. PMID:28555023

  14. A Protocol Layer Trust-Based Intrusion Detection Scheme for Wireless Sensor Networks.

    PubMed

    Wang, Jian; Jiang, Shuai; Fapojuwo, Abraham O

    2017-05-27

    This article proposes a protocol layer trust-based intrusion detection scheme for wireless sensor networks. Unlike existing work, the trust value of a sensor node is evaluated according to the deviations of key parameters at each protocol layer, considering that attacks initiated at different protocol layers will inevitably have impacts on the parameters of the corresponding protocol layers. For simplicity, the paper mainly considers three aspects of trustworthiness, namely physical layer trust, media access control layer trust and network layer trust. The per-layer trust metrics are then combined to determine the overall trust metric of a sensor node. The performance of the proposed intrusion detection mechanism is then analyzed using the t-distribution to derive analytical results of false positive and false negative probabilities. Numerical analytical results, validated by simulation results, are presented in different attack scenarios. It is shown that the proposed protocol layer trust-based intrusion detection scheme outperforms a state-of-the-art scheme in terms of detection probability and false positive probability, demonstrating its usefulness for detecting cross-layer attacks.
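
    A minimal sketch of how a t-distribution tail yields a false positive probability for a deviation-based flagging rule; the window size and threshold are assumptions, not the paper's parameters:

    ```python
    # Sketch, assuming the flagging rule is a t-test on a protocol-layer
    # parameter's deviation from its benign mean: the false positive probability
    # is then a t-distribution tail area. Window size and threshold are assumed.
    from scipy.stats import t as t_dist

    n_window = 20        # assumed observation window per node
    t_threshold = 2.5    # assumed deviation threshold (in t units)
    p_false_positive = 2 * t_dist.sf(t_threshold, df=n_window - 1)  # two-sided tail
    print(f"per-node false positive probability ~ {p_false_positive:.4f}")
    ```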

  15. Tracking Object Existence From an Autonomous Patrol Vehicle

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Scharenbroich, Lucas

    2011-01-01

    An autonomous vehicle patrols a large region, during which an algorithm receives measurements of detected potential objects within its sensor range. The goal of the algorithm is to track all objects in the region over time. This problem differs from traditional multi-target tracking scenarios because the region of interest is much larger than the sensor range and relies on the movement of the sensor through this region for coverage. The goal is to know whether anything has changed between visits to the same location. In particular, two kinds of alert conditions must be detected: (1) a previously detected object has disappeared and (2) a new object has appeared in a location already checked. For the time an object is within sensor range, the object can be assumed to remain stationary, changing position only between visits. The problem is difficult because the upstream object detection processing is likely to make many errors, resulting in heavy clutter (false positives) and missed detections (false negatives), and because only noisy, bearings-only measurements are available. This work has three main goals: (1) Associate incoming measurements with known objects or mark them as new objects or false positives, as appropriate. For this, a multiple hypothesis tracker was adapted to this scenario. (2) Localize the objects using multiple bearings-only measurements to provide estimates of global position (e.g., latitude and longitude). A nonlinear Kalman filter extension provides these 2D position estimates using the 1D measurements. (3) Calculate the probability that a suspected object truly exists (in the estimated position), and determine whether alert conditions have been triggered (for new objects or disappeared objects). The concept of a probability of existence was created, and a new Bayesian method for updating this probability at each time step was developed. A probabilistic multiple hypothesis approach is chosen because of its superiority in handling the uncertainty arising from errors in sensors and upstream processes. However, traditional target tracking methods typically assume a stationary detection volume of interest, whereas in this case, one must make adjustments for being able to see only a small portion of the region of interest and understand when an alert situation has occurred. To track object existence inside and outside the vehicle's sensor range, a probability of existence was defined for each hypothesized object, and this value was updated at every time step in a Bayesian manner based on expected characteristics of the sensor and object and whether that object has been detected in the most recent time step. Then, this value feeds into a sequential probability ratio test (SPRT) to determine the status of the object (suspected, confirmed, or deleted). Alerts are sent upon selected status transitions. Additionally, in order to track objects that move in and out of sensor range and update the probability of existence appropriately, a variable probability of detection has been defined and the hypothesis probability equations have been re-derived to accommodate this change. Unsupervised object tracking is a pervasive issue in automated perception systems. This work could apply to any mobile platform (ground vehicle, sea vessel, air vehicle, or orbiter) that intermittently revisits regions of interest and needs to determine whether anything interesting has changed.
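
    The existence-probability update described above can be sketched as a Bernoulli-observation Bayes filter. The function below is an assumed form, not NASA's exact equations; its variable detection probability naturally handles out-of-range epochs:

    ```python
    # Sketch of a per-object existence update (assumed form): Bayes' rule over
    # "object exists" given one detect/no-detect epoch.
    def update_existence(p_exist: float, p_detect: float, p_clutter: float,
                         detected: bool) -> float:
        """p_detect: detection probability if the object exists (0 when it is
        outside sensor range); p_clutter: probability a spurious measurement
        associates with this object anyway. Both are assumed inputs."""
        if detected:
            num = p_exist * p_detect
            den = num + (1.0 - p_exist) * p_clutter
        else:
            num = p_exist * (1.0 - p_detect)
            den = num + (1.0 - p_exist) * (1.0 - p_clutter)
        return num / den

    p = 0.5                               # initial existence probability
    for detected in (True, True, False):  # two sightings, then a miss in range
        p = update_existence(p, p_detect=0.8, p_clutter=0.05, detected=detected)
        print(f"P(exists) = {p:.3f}")
    # out of range: p_detect = 0 and p_clutter = 0 leave P(exists) unchanged,
    # matching the variable-detection-probability idea in the abstract
    ```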

  16. Cost-effectiveness of anatomical and functional test strategies for stable chest pain: public health perspective from a middle-income country.

    PubMed

    Bertoldi, Eduardo G; Stella, Steffen F; Rohde, Luis Eduardo P; Polanczyk, Carisi A

    2017-05-04

    The aim of this research is to evaluate the relative cost-effectiveness of functional and anatomical strategies for diagnosing stable coronary artery disease (CAD), using exercise (Ex)-ECG, stress echocardiogram (ECHO), single-photon emission CT (SPECT), coronary CT angiography (CTA) or stress cardiac magnetic resonance (C-MRI). Decision-analytical model, comparing strategies of sequential tests for evaluating patients with possible stable angina at low, intermediate and high pretest probability of CAD, from the perspective of a developing nation's public healthcare system. Hypothetical cohort of patients with pretest probability of CAD between 20% and 70%. The primary outcome is cost per correct diagnosis of CAD. Proportion of false-positive or false-negative tests and number of unnecessary tests performed were also evaluated. Strategies using Ex-ECG as the initial test were the least costly alternatives but generated more frequent false-positive initial tests and false-negative final diagnoses. Strategies based on CTA or ECHO as the initial test were the most attractive and resulted in similar cost-effectiveness ratios (I$ 286 and I$ 305 per correct diagnosis, respectively). A strategy based on C-MRI was highly effective for diagnosing stable CAD, but its high cost resulted in an unfavourable incremental cost-effectiveness ratio (ICER) in moderate-risk and high-risk scenarios. Non-invasive strategies based on SPECT were dominated (costlier and less effective than the alternatives). An anatomical diagnostic strategy based on CTA is a cost-effective option for CAD diagnosis. Functional strategies performed equally well when based on ECHO. C-MRI yielded an acceptable ICER only at low pretest probability, and SPECT was not cost-effective in our analysis. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  17. Heterotypic antibodies in Liberian sera causing anomalous reactions when using a commercial haemagglutination test for hepatitis-B surface antigen.

    PubMed

    Willcox, M C

    1976-04-01

    Agglutinins reacting with normal and tanned sheep erythrocytes were the probable cause of false positive reactions given by 51 of 214 Liberian sera when using a commercial passive-haemagglutination test for hepatitis-B surface antigen. Absorption showed these agglutinins to be identical to those described earlier in Nigerian sera. Rheumatoid factor and anti-sheep-serum antibodies, although present in 12 and 5 per cent of all sera, respectively, were not responsible for any false positive reactions. The practical conclusion is that such tests, being based on sheep erythrocytes, are unsuitable for screening this population.

  18. The correct estimate of the probability of false detection of the matched filter in weak-signal detection problems . II. Further results with application to a set of ALMA and ATCA data

    NASA Astrophysics Data System (ADS)

    Vio, R.; Vergès, C.; Andreani, P.

    2017-08-01

    The matched filter (MF) is one of the most popular and reliable techniques for detecting signals of known structure and amplitude smaller than the level of the contaminating noise. Under the assumption of stationary Gaussian noise, MF maximizes the probability of detection subject to a constant probability of false detection or false alarm (PFA). This property relies upon a priori knowledge of the position of the searched signals, which is usually not available. Recently, it has been shown that when applied in its standard form, MF may severely underestimate the PFA. As a consequence, the statistical significance of features that belong to noise is overestimated and the resulting detections are actually spurious. For this reason, an alternative method of computing the PFA has been proposed that is based on the probability density function (PDF) of the peaks of an isotropic Gaussian random field. In this paper we further develop this method. In particular, we discuss the statistical meaning of the PFA and show that, although useful as a preliminary step in a detection procedure, it is not able to quantify the actual reliability of a specific detection. For this reason, a new quantity is introduced, called the specific probability of false alarm (SPFA), which is able to carry out this computation. We show how this method works in targeted simulations and apply it to a few interferometric maps taken with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Australia Telescope Compact Array (ATCA). We select a few potential new point sources and assign an accurate detection reliability to these sources.
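
    The underestimation described above is easy to reproduce: when the signal position is unknown, one effectively thresholds the maximum of the matched-filter output over all candidate positions, and the false alarms multiply accordingly. The Monte Carlo sketch below uses an assumed map length, template, and nominal PFA; it is illustrative, not the paper's SPFA computation.

```python
# A minimal Monte Carlo sketch (with assumed sizes and thresholds) showing
# that thresholding the *maximum* of the matched-filter output over all
# candidate positions yields many more false alarms than the
# single-position PFA suggests.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, m = 1024, 32                      # map length and template length (assumed)
template = norm.pdf(np.linspace(-3, 3, m))
template /= np.linalg.norm(template) # unit-norm template: MF output ~ N(0,1) under H0

u = norm.isf(1e-3)                   # threshold for a nominal single-position PFA of 1e-3
trials, hits = 2000, 0
for _ in range(trials):
    noise = rng.standard_normal(n)   # pure noise: any detection is a false alarm
    mf = np.correlate(noise, template, mode="valid")
    hits += mf.max() > u
print(f"nominal PFA: 1e-3, empirical PFA of the peak: {hits / trials:.3f}")
# With ~1000 candidate positions the peak exceeds the threshold in a large
# fraction of noise-only maps, i.e. the naive PFA is severely underestimated.
```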

  19. Assessing the Probability that a Finding Is Genuine for Large-Scale Genetic Association Studies

    PubMed Central

    Kuo, Chia-Ling; Vsevolozhskaya, Olga A.; Zaykin, Dmitri V.

    2015-01-01

    Genetic association studies routinely involve massive numbers of statistical tests accompanied by P-values. Whole genome sequencing technologies increased the potential number of tested variants to tens of millions. The more tests are performed, the smaller the P-value required to be deemed significant. However, a small P-value is not equivalent to a small chance of a spurious finding, and significance thresholds may fail to serve as efficient filters against false results. While the Bayesian approach can provide a direct assessment of the probability that a finding is spurious, its adoption in association studies has been slow, due in part to the ubiquity of P-values and the automated way they are, as a rule, produced by software packages. Attempts to design simple ways to convert an association P-value into the probability that a finding is spurious have been met with difficulties. The False Positive Report Probability (FPRP) method has gained increasing popularity. However, FPRP is not designed to estimate the probability for a particular finding, because it is defined for an entire region of hypothetical findings with P-values at least as small as the one observed for that finding. Here we propose a method that lets researchers extract the probability that a finding is spurious directly from a P-value. Considering the counterpart of that probability, we term this method POFIG: the Probability that a Finding is Genuine. Our approach shares FPRP's simplicity, but gives a valid probability that a finding is spurious given a P-value. In addition to straightforward interpretation, POFIG has desirable statistical properties. The POFIG average across a set of tentative associations provides an estimated proportion of false discoveries in that set. POFIGs are easily combined across studies and are immune to multiple testing and selection bias. We illustrate an application of the POFIG method via analysis of GWAS associations with Crohn's disease. PMID:25955023

  20. Assessing the Probability that a Finding Is Genuine for Large-Scale Genetic Association Studies.

    PubMed

    Kuo, Chia-Ling; Vsevolozhskaya, Olga A; Zaykin, Dmitri V

    2015-01-01

    Genetic association studies routinely involve massive numbers of statistical tests accompanied by P-values. Whole genome sequencing technologies increased the potential number of tested variants to tens of millions. The more tests are performed, the smaller the P-value required to be deemed significant. However, a small P-value is not equivalent to a small chance of a spurious finding, and significance thresholds may fail to serve as efficient filters against false results. While the Bayesian approach can provide a direct assessment of the probability that a finding is spurious, its adoption in association studies has been slow, due in part to the ubiquity of P-values and the automated way they are, as a rule, produced by software packages. Attempts to design simple ways to convert an association P-value into the probability that a finding is spurious have been met with difficulties. The False Positive Report Probability (FPRP) method has gained increasing popularity. However, FPRP is not designed to estimate the probability for a particular finding, because it is defined for an entire region of hypothetical findings with P-values at least as small as the one observed for that finding. Here we propose a method that lets researchers extract the probability that a finding is spurious directly from a P-value. Considering the counterpart of that probability, we term this method POFIG: the Probability that a Finding is Genuine. Our approach shares FPRP's simplicity, but gives a valid probability that a finding is spurious given a P-value. In addition to straightforward interpretation, POFIG has desirable statistical properties. The POFIG average across a set of tentative associations provides an estimated proportion of false discoveries in that set. POFIGs are easily combined across studies and are immune to multiple testing and selection bias. We illustrate an application of the POFIG method via analysis of GWAS associations with Crohn's disease.
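
    The POFIG method itself is not reproduced here. As a hedged illustration of converting a P-value into a probability that a finding is spurious, the sketch below uses the well-known -e·p·ln(p) calibration of Sellke, Bayarri and Berger (2001), which lower-bounds the Bayes factor in favour of the null; the GWAS-style prior is an assumed value.

```python
# A hedged illustration (not the POFIG method): lower-bound the posterior
# probability that a finding is spurious using the -e*p*ln(p) calibration,
# which bounds the Bayes factor in favour of the null for p < 1/e.
import math

def spurious_prob_lower_bound(p, prior_null=0.5):
    """Lower bound on P(finding is spurious | P-value p), valid for p < 1/e."""
    if not 0 < p < 1 / math.e:
        raise ValueError("calibration requires 0 < p < 1/e")
    bayes_factor_bound = -math.e * p * math.log(p)  # bound on BF for H0
    prior_odds = prior_null / (1.0 - prior_null)
    post_odds = prior_odds * bayes_factor_bound
    return post_odds / (1.0 + post_odds)

# With the small prior probability of a true association typical for GWAS
# (1e-4 here, an assumed value), even p = 1e-4 leaves the finding almost
# certainly spurious, while p = 5e-8 brings the bound down to a few percent.
for p in (0.05, 1e-4, 5e-8):
    print(p, round(spurious_prob_lower_bound(p, prior_null=1 - 1e-4), 4))
```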

  1. False Memories for Affective Information in Schizophrenia.

    PubMed

    Fairfield, Beth; Altamura, Mario; Padalino, Flavia A; Balzotti, Angela; Di Domenico, Alberto; Mammarella, Nicola

    2016-01-01

    Studies have shown a direct link between memory for emotionally salient experiences and false memories. In particular, emotionally arousing material of negative and positive valence enhanced reality monitoring compared to neutral material, since emotional stimuli can be encoded with more contextual details and thereby facilitate the distinction between presented and imagined stimuli. Individuals with schizophrenia appear to be impaired in both reality monitoring and memory for emotional experiences. However, the relationship between the emotionality of the to-be-remembered material and false memory occurrence has not yet been studied. In this study, 24 patients and 24 healthy adults completed a false memory task with everyday episodes composed of 12 photographs that depicted positive, negative, or neutral outcomes. Results showed that patients with schizophrenia made a higher number of false memories than normal controls (p < 0.05) when remembering episodes with positive or negative outcomes. The effect of valence was apparent in the patient group. For example, it did not affect the production of causal false memories (p > 0.05) resulting from erroneous inferences but did interact with plausible, script-consistent errors in patients (i.e., neutral episodes yielded a higher degree of errors than positive and negative episodes). Affective information reduces the probability of generating causal errors in healthy adults but not in patients, suggesting that emotional memory impairments may contribute to deficits in reality monitoring in schizophrenia when affective information is involved.

  2. False Memories for Affective Information in Schizophrenia

    PubMed Central

    Fairfield, Beth; Altamura, Mario; Padalino, Flavia A.; Balzotti, Angela; Di Domenico, Alberto; Mammarella, Nicola

    2016-01-01

    Studies have shown a direct link between memory for emotionally salient experiences and false memories. In particular, emotionally arousing material of negative and positive valence enhanced reality monitoring compared to neutral material, since emotional stimuli can be encoded with more contextual details and thereby facilitate the distinction between presented and imagined stimuli. Individuals with schizophrenia appear to be impaired in both reality monitoring and memory for emotional experiences. However, the relationship between the emotionality of the to-be-remembered material and false memory occurrence has not yet been studied. In this study, 24 patients and 24 healthy adults completed a false memory task with everyday episodes composed of 12 photographs that depicted positive, negative, or neutral outcomes. Results showed that patients with schizophrenia made a higher number of false memories than normal controls (p < 0.05) when remembering episodes with positive or negative outcomes. The effect of valence was apparent in the patient group. For example, it did not affect the production of causal false memories (p > 0.05) resulting from erroneous inferences but did interact with plausible, script-consistent errors in patients (i.e., neutral episodes yielded a higher degree of errors than positive and negative episodes). Affective information reduces the probability of generating causal errors in healthy adults but not in patients, suggesting that emotional memory impairments may contribute to deficits in reality monitoring in schizophrenia when affective information is involved. PMID:27965600

  3. Kepler Reliability and Occurrence Rates

    NASA Astrophysics Data System (ADS)

    Bryson, Steve

    2016-10-01

    The Kepler mission has produced tables of exoplanet candidates (the "KOI table"), as well as tables of transit detections (the "TCE table"), hosted at the Exoplanet Archive (http://exoplanetarchive.ipac.caltech.edu). Transit detections in the TCE table that are plausibly due to a transiting object are selected for inclusion in the KOI table. KOI table entries that have not been identified as false positives (FPs) or false alarms (FAs) are classified as planet candidates (PCs, Mullally et al. 2015). A subset of PCs have been confirmed as planetary transits with greater than 99% probability, but most PCs have <99% probability of being true planets. The fraction of PCs that are true transiting planets is the PC reliability rate. The overall PC population is believed to have a reliability rate >90% (Morton & Johnson 2011).

  4. Do juries meet our expectations?

    PubMed

    Arkes, Hal R; Mellers, Barbara A

    2002-12-01

    Surveys of public opinion indicate that people have high expectations for juries. When it comes to serious crimes, most people want errors of convicting the innocent (false positives) or acquitting the guilty (false negatives) to fall well below 10%. Using expected utility theory, Bayes' Theorem, signal detection theory, and empirical evidence from detection studies of medical decision making, eyewitness testimony, and weather forecasting, we argue that the frequency of mistakes probably far exceeds these "tolerable" levels. We are not arguing against the use of juries. Rather, we point out that a closer look at jury decisions reveals a serious gap between what we expect from juries and what probably occurs. When deciding issues of guilt and/or punishing convicted criminals, we as a society should recognize and acknowledge the abundance of error.

  5. False-positive findings in Cochrane meta-analyses with and without application of trial sequential analysis: an empirical review.

    PubMed

    Imberger, Georgina; Thorlund, Kristian; Gluud, Christian; Wetterslev, Jørn

    2016-08-12

    Many published meta-analyses are underpowered. We explored the role of trial sequential analysis (TSA) in assessing the reliability of conclusions in underpowered meta-analyses. We screened The Cochrane Database of Systematic Reviews and selected 100 meta-analyses with a binary outcome, a negative result and sufficient power. We defined a negative result as one where the 95% CI for the effect included 1.00, a positive result as one where the 95% CI did not include 1.00, and sufficient power as meeting the required information size for 80% power, 5% type 1 error, a relative risk reduction of 10% or a number needed to treat of 100, with the control event proportion and heterogeneity taken from the included studies. We re-conducted the meta-analyses, using conventional cumulative techniques, to measure how many false positives would have occurred if these meta-analyses had been updated after each new trial. For each false positive, we performed TSA, using three different approaches. We screened 4736 systematic reviews to find 100 meta-analyses that fulfilled our inclusion criteria. Using conventional cumulative meta-analysis, false positives were present in seven of the meta-analyses (7%, 95% CI 3% to 14%), occurring more than once in three of them. The total number of false positives was 14 and TSA prevented 13 of these (93%, 95% CI 68% to 98%). In a post hoc analysis, we found that Cochrane meta-analyses that are negative are 1.67 times more likely to be updated (95% CI 0.92 to 2.68) than those that are positive. We found false positives in 7% (95% CI 3% to 14%) of the included meta-analyses. Owing to limitations of external validity and to the decreased likelihood of updating positive meta-analyses, the true proportion of false positives in meta-analysis is probably higher. TSA prevented 93% of the false positives (95% CI 68% to 98%). Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
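
    The false positives counted above arise from re-testing a growing body of evidence at a fixed 5% level. A minimal simulation of that mechanism, with assumed trial counts and sizes and a true null effect, shows the inflation that TSA boundaries are designed to prevent:

```python
# A minimal simulation (assumed sizes throughout) of the problem motivating
# trial sequential analysis: re-testing a cumulative meta-analysis at the
# conventional 5% level after every new trial inflates the overall
# false-positive rate far above 5%.
import numpy as np

rng = np.random.default_rng(1)
n_meta, n_trials, n_per_trial = 2000, 10, 200

false_pos = 0
for _ in range(n_meta):
    # Each trial's effect estimate under a true null effect.
    effects = rng.standard_normal((n_trials, n_per_trial)).mean(axis=1)
    for k in range(1, n_trials + 1):
        pooled = effects[:k].mean()
        se = 1.0 / np.sqrt(k * n_per_trial)
        if abs(pooled / se) > 1.96:     # naive 5% test at each update
            false_pos += 1
            break
print(f"false-positive rate with repeated 5% testing: {false_pos / n_meta:.2%}")
# Typically well above 5% for ten looks, which is why TSA applies monitoring
# boundaries that preserve the overall type 1 error.
```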

  6. Use of Breast Cancer Screening and Its Association with Later Use of Preventive Services among Medicare Beneficiaries.

    PubMed

    Kang, Stella K; Jiang, Miao; Duszak, Richard; Heller, Samantha L; Hughes, Danny R; Moy, Linda

    2018-06-05

    Purpose: To retrospectively assess whether there is an association between screening mammography and the use of a variety of preventive services in women who are enrolled in Medicare. Materials and Methods: U.S. Medicare claims from 2010 to 2014 Research Identifiable Files were reviewed to retrospectively identify a group of women who underwent screening mammography and a control group without screening mammography in 2012. The screened group was divided into positive versus negative results at screening, and the positive subgroup was divided into false-positive and true-positive findings. Multivariate logistic regression models and inverse probability of treatment weighting were used to examine the relationship between screening status and the probabilities of undergoing Papanicolaou test, bone mass measurement, or influenza vaccination in the following 2 years. Results: The cohort consisted of 555,705 patients, of whom 185,625 (33.4%) underwent mammography. After adjusting for patient demographics, comorbidities, geographic covariates, and baseline preventive care, women who underwent index screening mammography (with either positive or negative results) were more likely than unscreened women to later undergo Papanicolaou test (odds ratio [OR], 1.49; 95% confidence interval: 1.40, 1.58), bone mass measurement (OR, 1.70; 95% confidence interval: 1.63, 1.78), and influenza vaccination (OR, 1.45; 95% confidence interval: 1.37, 1.53). In women who had not undergone these preventive measures in the 2 years before screening mammography, use of these three services after false-positive findings at screening was no different than after true-negative findings at screening. Conclusion: In beneficiaries of U.S. Medicare, use of screening mammography was associated with higher likelihood of adherence to other preventive guidelines, without a negative association between false-positive results and cervical cancer screening. © RSNA, 2018. Online supplemental material is available for this article.
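
    A minimal sketch of the inverse probability of treatment weighting (IPTW) step mentioned above, run on simulated stand-in data (the covariates, treatment model, and sample size are placeholders, not the Medicare claims):

```python
# A minimal IPTW sketch on simulated data: fit a propensity score for
# "treatment" (screening), weight treated subjects by 1/ps and untreated by
# 1/(1-ps), and check that covariates become balanced after weighting.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
x = rng.standard_normal((n, 3))                 # stand-ins for demographics etc.
p_treat = 1 / (1 + np.exp(-(x @ np.array([0.5, -0.3, 0.2]))))
screened = rng.random(n) < p_treat              # "underwent screening mammography"

# Propensity score: modeled probability of screening given covariates.
ps = LogisticRegression().fit(x, screened).predict_proba(x)[:, 1]

# IPTW weights balance the covariate distributions across the two groups.
w = np.where(screened, 1.0 / ps, 1.0 / (1.0 - ps))
for j in range(3):
    raw = x[screened, j].mean() - x[~screened, j].mean()
    wtd = (np.average(x[screened, j], weights=w[screened])
           - np.average(x[~screened, j], weights=w[~screened]))
    print(f"covariate {j}: raw diff {raw:+.3f}, weighted diff {wtd:+.3f}")
```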

  7. Safeguarding a Lunar Rover with Wald's Sequential Probability Ratio Test

    NASA Technical Reports Server (NTRS)

    Furlong, Michael; Dille, Michael; Wong, Uland; Nefian, Ara

    2016-01-01

    The virtual bumper is a safeguarding mechanism for autonomous and remotely operated robots. In this paper we take a new approach to the virtual bumper system by using an old statistical test. By using a modified version of Wald's sequential probability ratio test, we demonstrate that we can reduce the number of false positives reported by the virtual bumper, thereby saving valuable mission time. We use the concept of the sequential probability ratio to control vehicle speed in the presence of possible obstacles in order to increase certainty about whether or not obstacles are present. Our new algorithm reduces the chances of collision by approximately 98% relative to traditional virtual bumper safeguarding without speed control.
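
    A minimal sketch of Wald's SPRT applied to an obstacle-presence decision of this kind; the per-frame detection and false-alarm probabilities and the error targets are assumed values, not the parameters of the flight-tested system.

```python
# A minimal sketch of Wald's SPRT for deciding "obstacle present vs absent"
# from a stream of binary detector outputs (all probabilities assumed).
import math

p1, p0 = 0.7, 0.1          # P(detection | obstacle), P(detection | no obstacle)
alpha, beta = 0.01, 0.01   # target false-positive and false-negative rates
upper = math.log((1 - beta) / alpha)   # accept "obstacle"
lower = math.log(beta / (1 - alpha))   # accept "no obstacle"

def sprt(observations):
    llr = 0.0
    for t, detected in enumerate(observations, start=1):
        if detected:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return t, "obstacle"
        if llr <= lower:
            return t, "clear"
    return len(observations), "undecided"   # keep sampling (e.g. slow down)

print(sprt([1, 0, 1, 1, 1]))   # -> (4, 'obstacle'): decided after four frames
print(sprt([0] * 6))           # -> (5, 'clear'): five quiet frames suffice
```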

  8. Retrospective Comparison of Cardiac Testing and Results on Inpatients with Low Pretest Probability Compared with Moderate/High Pretest Probability for Coronary Artery Disease.

    PubMed

    Lear, Aaron; Huber, Merritt; Canada, Amy; Robertson, Jessica; Bosman, Evan; Zyzanski, Stephen

    2018-01-01

    To determine whether admission and provocative stress testing of patients who have been ruled out for acute coronary syndrome put patients in the low-risk category for coronary artery disease (CAD) at risk of false-positive provocative stress tests and unnecessary coronary angiography/imaging. A retrospective chart review was performed on patients between 30 and 70 years old, with no pre-existing diagnosis of CAD, admitted to observation or inpatient status for chest pain or related complaints. Included patients were categorized, based on the Duke Clinical Score for pretest probability of CAD, into either a low-risk group or a moderate/high-risk group. The inpatient course was compared, including whether provocative stress testing was performed; results of stress testing; whether patients underwent further coronary imaging; and what the results of the further imaging showed. 543 patients were eligible: 305 with low pretest probability and 238 with moderate/high pretest probability. No difference was found in the rate of stress testing, relative risk (RR) = 1.01 (95% CI, 0.852 to 1.192; P = 0), or in the rate of positive or equivocal stress tests between the 2 groups, RR = 0.653 (95% CI, 0.415 to 1.028; P = .07). Low-pretest-probability patients had a lower likelihood of positive coronary imaging after stress testing, RR = 0.061 (95% CI, 0.004 to 0.957; P = .001). Follow-up provocative testing of all patients admitted/observed after emergency department presentation with chest pain is unlikely to find CAD in patients with low pretest probability. Testing all low-probability patients puts them at increased risk for unnecessary invasive confirmatory testing. Further prospective testing is needed to confirm these retrospective results. © Copyright 2018 by the American Board of Family Medicine.
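
    The core arithmetic behind these findings is Bayes' rule with likelihood ratios: post-test odds equal pretest odds times the test's likelihood ratio. The sketch below uses assumed operating characteristics of roughly 68% sensitivity and 77% specificity for an exercise stress test; these are illustrative values, not figures from the study.

```python
# A minimal sketch of why a positive stress test means little at low pretest
# probability; the sensitivity and specificity below are assumed values.
def posttest_probability(pretest, sens=0.68, spec=0.77):
    lr_pos = sens / (1 - spec)          # likelihood ratio of a positive test
    odds = pretest / (1 - pretest) * lr_pos
    return odds / (1 + odds)

for pretest in (0.10, 0.40, 0.70):      # low / moderate / high pretest probability
    print(f"pretest {pretest:.0%} -> posttest after positive test: "
          f"{posttest_probability(pretest):.0%}")
# At 10% pretest probability a positive test still leaves CAD more likely
# absent than present (~25%), so many positives are false positives that
# invite unnecessary confirmatory angiography.
```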

  9. Clinically relevant hypoglycemia prediction metrics for event mitigation.

    PubMed

    Harvey, Rebecca A; Dassau, Eyal; Zisser, Howard C; Bevier, Wendy; Seborg, Dale E; Jovanovič, Lois; Doyle, Francis J

    2012-08-01

    The purpose of this study was to develop a method to compare hypoglycemia prediction algorithms and choose parameter settings for different applications, such as triggering insulin pump suspension or alerting for rescue carbohydrate treatment. Hypoglycemia prediction algorithms with different parameter settings were implemented on an ambulatory dataset containing 490 days from 30 subjects with type 1 diabetes mellitus using the Dexcom™ (San Diego, CA) SEVEN™ continuous glucose monitoring system. The performance was evaluated using a proposed set of metrics representing the true-positive ratio, false-positive rate, and distribution of warning times. A prospective, in silico study was performed to show the effect of using different parameter settings to prevent or rescue from hypoglycemia. The retrospective study results suggest the parameter settings for different methods of hypoglycemia mitigation. When rescue carbohydrates are used, a high true-positive ratio, a minimal false-positive rate, and alarms with short warning time are desired. These objectives were met with a 30-min prediction horizon and two successive flags required to alarm: 78% of events were detected with 3.0 false alarms/day and 66% probability of alarms occurring within 30 min of the event. This parameter setting selection was confirmed in silico: treating with rescue carbohydrates reduced the duration of hypoglycemia from 14.9% to 0.5%. However, for a different method, such as pump suspension, this parameter setting only reduced hypoglycemia to 8.7%, as can be expected by the low probability of alarming more than 30 min ahead. The proposed metrics allow direct comparison of hypoglycemia prediction algorithms and selection of parameter settings for different types of hypoglycemia mitigation, as shown in the prospective in silico study in which hypoglycemia was alerted or treated with rescue carbohydrates.

  10. A Method of Face Detection with Bayesian Probability

    NASA Astrophysics Data System (ADS)

    Sarker, Goutam

    2010-10-01

    The objective of face detection is to identify all images which contain a face, irrespective of its orientation, illumination conditions, etc. This is a hard problem, because faces are highly variable in size, shape, lighting conditions, etc. Many methods have been designed and developed to detect faces in a single image. The present paper is based on one 'appearance-based method', which relies on learning the facial and non-facial features from image examples. This, in turn, is based on statistical analysis of examples and counter-examples of facial images and employs a Bayesian conditional classification rule to estimate the probability that a face (or non-face) is present within an image frame. The detection rate of the present system is very high, and the numbers of false-positive and false-negative detections are thereby substantially low.
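
    A minimal sketch of such a Bayesian conditional classification rule, with made-up one-dimensional class-conditional densities standing in for the learned facial and non-facial feature distributions:

```python
# A minimal sketch (with assumed class-conditional densities and prior) of
# the Bayesian decision rule: label a window "face" when the posterior of
# the face class exceeds that of the non-face class.
from scipy.stats import norm

# Assumed 1-D feature distributions learned from face / non-face examples.
face_pdf = norm(loc=2.0, scale=0.5).pdf
nonface_pdf = norm(loc=0.0, scale=1.0).pdf
p_face = 0.1                      # prior: faces are rare among image windows

def classify(feature):
    post_face = face_pdf(feature) * p_face
    post_nonface = nonface_pdf(feature) * (1 - p_face)
    return "face" if post_face > post_nonface else "non-face"

# The small prior pushes the decision boundary toward the face mode, so only
# strongly face-like features are labeled "face".
for f in (0.5, 1.5, 2.5):
    print(f"feature {f:+.1f} -> {classify(f)}")
```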

  11. False-positive liver scan in a patient with hepatic amyloidosis: case report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, K.; Okuda, K.; Yoshida, T.

    1976-01-01

    A case of secondary hepatic amyloidosis exhibiting a large liver and multiple defects on the ¹⁹⁸Au-radiocolloid scintigraph is presented. Biopsy and angiographic studies indicated that the areas of reduced colloid uptake represented heavy amyloid deposition, and the area of the left lobe with contrasting high activity most probably represented compensatory hypertrophy. (auth)

  12. Evaluation of microtiter-plate enzyme-linked immunosorbent assay for the analysis of triazine and chloroacetanilide herbicides in rainfall

    USGS Publications Warehouse

    Pomes, M.L.; Thurman, E.M.; Aga, D.S.; Goolsby, D.A.

    1998-01-01

    Triazine and chloroacetanilide concentrations in rainfall samples collected from a 23-state region of the United States were analyzed with microtiter-plate enzyme-linked immunosorbent assay (ELISA). Thirty-six percent of rainfall samples (2072 out of 5691) were confirmed using gas chromatography/mass spectrometry (GC/MS) to evaluate the operating performance of ELISA as a screening test. Comparison of ELISA to GC/MS results showed that the two ELISA methods accurately reported GC/MS results (m = 1), but with more variability evident with the triazine than with the chloroacetanilide ELISA. Bayes's rule, a standardized method to report the results of screening tests, indicated that the two ELISA methods yielded comparable predictive values (80%), but the triazine ELISA yielded a false-positive rate of 11.8% and the chloroacetanilide ELISA yielded a false-negative rate of 23.1%. The false-positive rate for the triazine ELISA may arise from cross-reactivity with an unknown triazine or metabolite. The false-negative rate of the chloroacetanilide ELISA probably resulted from a combination of low sensitivity at the reporting limit of 0.15 μg/L and a distribution characterized by 75% of the samples at or below the reporting limit of 0.15 μg/L.

  13. Enhancing Entity Level Knowledge Representation and Environmental Sensing in COMBATXXI Using Unmanned Aircraft Systems

    DTIC Science & Technology

    2013-09-01

    sensor to the IED (r) is 110 meters and probability 0 if r is 220 meters or greater. Time to clear an IED is 60 minutes. ... Factors such as False Positive and False Negative allow us to see how an operator's skill level can affect the outcome of a tactical convoy ... operator. The performance of a UAS in the field can vary greatly, and this performance can have an enormous effect on the outcome of a military operation

  14. The probability of object-scene co-occurrence influences object identification processes.

    PubMed

    Sauvé, Geneviève; Harmand, Mariane; Vanni, Léa; Brodeur, Mathieu B

    2017-07-01

    Contextual information allows the human brain to make predictions about the identity of objects that might be seen, and irregularities between an object and its background slow down perception and identification processes. Bar and colleagues modeled the mechanisms underlying this beneficial effect, suggesting that the brain stores information about the statistical regularities of object and scene co-occurrence. Their model suggests that these recurring regularities could be conceptualized along a continuum in which the probability of seeing an object within a given scene can be high (probable condition), moderate (improbable condition) or null (impossible condition). In the present experiment, we propose to disentangle the electrophysiological correlates of these context effects by directly comparing object-scene pairs found along this continuum. We recorded the event-related potentials of 30 healthy participants (18-34 years old) and analyzed their brain activity in three time windows associated with context effects. We observed anterior negativities between 250 and 500 ms after object onset for the improbable and impossible conditions (improbable more negative than impossible) compared to the probable condition, as well as a parieto-occipital positivity (improbable more positive than impossible). The brain may use different processing pathways to identify objects depending on whether the probability of co-occurrence with the scene is moderate (relying more on top-down effects) or null (relying more on bottom-up influences). The posterior positivity could index error monitoring aimed at ensuring that no false information is integrated into mental representations of the world.

  15. Optical detection of chemical warfare agents and toxic industrial chemicals

    NASA Astrophysics Data System (ADS)

    Webber, Michael E.; Pushkarsky, Michael B.; Patel, C. Kumar N.

    2004-12-01

    We present an analytical model evaluating the suitability of optical-absorption-based spectroscopic techniques for the detection of chemical warfare agents (CWAs) and toxic industrial chemicals (TICs) in ambient air. The sensor performance is modeled by simulating absorption spectra of a sample containing both the target and a multitude of interfering species, as well as appropriate stochastic noise, and determining the target concentrations from the simulated spectra via a least-squares fit (LSF) algorithm. The distribution of the LSF target concentrations determines the sensor sensitivity, probability of false positives (PFP) and probability of false negatives (PFN). The model was applied to a CO2-laser-based photoacoustic (L-PAS) CWA sensor and predicted single-digit ppb sensitivity with very low PFP rates in the presence of significant amounts of interferents. This approach will be useful for assessing sensor performance by developers and users alike; it also provides a methodology for inter-comparison of different sensing technologies.
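
    A minimal sketch of this modeling approach (all spectra, concentrations, and noise levels are synthetic placeholders): simulate absorption spectra containing a target and an interferent, retrieve concentrations by least squares, and inspect the spread of the retrieved target concentration, which is what sets the sensitivity, PFP, and PFN.

```python
# A minimal sketch of LSF-based concentration retrieval from simulated
# absorption spectra; all cross-sections and noise levels are assumptions.
import numpy as np

rng = np.random.default_rng(3)
wl = np.linspace(0, 1, 200)                       # arbitrary wavelength axis
sigma_target = np.exp(-((wl - 0.3) / 0.05) ** 2)  # target absorption feature
sigma_interf = np.exp(-((wl - 0.5) / 0.20) ** 2)  # broad interferent feature
A = np.column_stack([sigma_target, sigma_interf])

true_c = np.array([0.0, 5.0])   # target absent; only interferent present
retrieved = []
for _ in range(1000):
    spectrum = A @ true_c + rng.normal(0, 0.02, wl.size)  # stochastic noise
    c_hat, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
    retrieved.append(c_hat[0])
retrieved = np.array(retrieved)

# A detection threshold at, say, 3 standard deviations of this distribution
# fixes the probability of false positives when the target is truly absent.
print(f"retrieved target concentration: mean {retrieved.mean():+.4f}, "
      f"std {retrieved.std():.4f}")
```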

  16. Bone scintigraphy for neonatal osteomyelitis: simulation by extravasation of intravenous calcium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balsam, D.; Goldfarb, C.R.; Stringer, B.

    Intravenously administered calcium gluconate has become increasingly popular in the treatment of neonatal tetany. Occasionally, extravasation results in cellulitis, leading to a clinical diagnosis of superimposed osteomyelitis. Osseous scintigraphy, as the accepted modality in the early detection of osteomyelitis, would tend to be used in this circumstance. This case illustrates a false-positive result, probably due to soft-tissue calcification.

  17. Common pitfalls in statistical analysis: The perils of multiple testing

    PubMed Central

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2016-01-01

    Multiple testing refers to situations where a dataset is subjected to statistical testing multiple times - either at multiple time-points or through multiple subgroups or for multiple end-points. This amplifies the probability of a false-positive finding. In this article, we look at the consequences of multiple testing and explore various methods to deal with this issue. PMID:27141478
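
    The amplification is simple arithmetic: under independence, the chance of at least one false positive across m tests at level alpha is 1 - (1 - alpha)^m. A short sketch with an assumed 20 independent tests, including the effect of a Bonferroni correction:

```python
# The probability of at least one false positive grows quickly with the
# number of (independent) tests; Bonferroni restores the intended level.
alpha, tests = 0.05, 20
p_any_false_pos = 1 - (1 - alpha) ** tests
print(f"{tests} tests at alpha={alpha}: "
      f"P(at least one false positive) = {p_any_false_pos:.2f}")   # ~0.64

bonferroni_alpha = alpha / tests   # test each hypothesis at alpha/m
p_corrected = 1 - (1 - bonferroni_alpha) ** tests
print(f"with Bonferroni (alpha/{tests}): {p_corrected:.3f}")        # ~0.049
```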

  18. Six years' experience of using the BacT/ALERT system to screen all platelet concentrates, and additional testing of outdated platelet concentrates to estimate the frequency of false-negative results.

    PubMed

    Larsen, C P; Ezligini, F; Hermansen, N O; Kjeldsen-Kragh, J

    2005-02-01

    Approximately 1 in every 2000 units of platelets is contaminated with bacteria. The BacT/ALERT automated blood culture system can be used to screen platelet concentrates (PCs) for bacterial contamination. Data were collected from May 1998 until May 2004. The number of PCs tested during this period was 36,896, most of which were produced from pools of four buffy-coats. On the day following blood collection or platelet apheresis, a 5-10 ml sample of the PC was aseptically transferred to a BacT/ALERT culture bottle for detection of aerobic bacteria. The sample was monitored for bacterial growth during the entire storage period of the PC (6.5 days). When a positive signal was generated, the culture bottle, the PC and the erythrocyte concentrates were tested for bacterial growth. In order to determine the frequency of false-negative BacT/ALERT signals, 1061 outdated PCs were tested during the period from May 2002 to May 2004. Eighty-eight positive signals were detected by the BacT/ALERT system, of which 12 were interpreted as truly positive. Fourteen signals were interpreted as truly false positive. Thirty-three signals were interpreted as probably false positive. Two of 1061 outdated units tested positive, and Bacillus spp. and Staphylococcus epidermidis, respectively, were isolated from these PCs. Between 0.03% and 0.12% of the PCs were contaminated with bacteria. BacT/ALERT is an efficient tool for monitoring PCs for bacterial contamination; however, it is important to realize that false-negative results may occur.

  19. Automated segmentation of linear time-frequency representations of marine-mammal sounds.

    PubMed

    Dadouchi, Florian; Gervaise, Cedric; Ioana, Cornel; Huillery, Julien; Mars, Jérôme I

    2013-09-01

    Many marine mammals produce highly nonlinear frequency modulations. Determining the time-frequency support of these sounds offers various applications, which include recognition, localization, and density estimation. This study introduces a low-parameter, automated spectrogram segmentation method that is based on a theoretical probabilistic framework. In the first step, the background noise in the spectrogram is fitted with a chi-squared distribution and thresholded using a Neyman-Pearson approach. In the second step, the number of false detections in time-frequency regions is modeled as a binomial distribution, and then, through a Neyman-Pearson strategy, the time-frequency bins are gathered into regions of interest. The proposed method is validated on real data consisting of long sequences of whistles from common dolphins, collected in the Bay of Biscay (France). The proposed method is also compared with two alternative approaches: the first is smoothing and thresholding of the spectrogram; the second is thresholding of the spectrogram followed by the use of morphological operators to gather the time-frequency bins and to remove false positives. This method is shown to increase the probability of detection for the same probability of false alarms.
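
    A minimal sketch of the first step (assumed sizes, noise level, and a synthetic chirp standing in for a whistle): under Gaussian background noise, spectrogram power per bin is approximately a scaled chi-squared variable with 2 degrees of freedom, so a Neyman-Pearson detector keeps the bins above the 1-PFA quantile.

```python
# A minimal sketch of chi-squared background fitting plus Neyman-Pearson
# thresholding of a spectrogram; the scaled chi2(2) model per bin is an
# approximation and all sizes/levels are assumed for illustration.
import numpy as np
from scipy import signal
from scipy.stats import chi2

rng = np.random.default_rng(4)
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = rng.standard_normal(t.size)                     # background noise
x += 0.5 * np.sin(2 * np.pi * (100 + 50 * t) * t)   # synthetic "whistle" (chirp)

f, tt, sxx = signal.spectrogram(x, fs=fs, nperseg=128)
noise_level = np.median(sxx) / chi2.median(2)       # robust per-bin noise fit
threshold = noise_level * chi2.isf(1e-3, 2)         # PFA = 1e-3 per bin
mask = sxx > threshold
print(f"{mask.mean():.4f} of bins retained (pure noise would give ~0.001)")
```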

  20. Quality control in the diagnosis of Trichuris trichiura and Ascaris lumbricoides using the Kato-Katz technique: experience from three randomised controlled trials.

    PubMed

    Speich, Benjamin; Ali, Said M; Ame, Shaali M; Albonico, Marco; Utzinger, Jürg; Keiser, Jennifer

    2015-02-05

    An accurate diagnosis of soil-transmitted helminthiasis is important for individual patient management, for drug efficacy evaluation and for monitoring control programmes. The Kato-Katz technique is the most widely used method for detecting soil-transmitted helminth eggs in faecal samples. However, detailed analyses of quality control, including false-positive and faecal egg count (FEC) estimates, have received little attention. Over a 3-year period, within the framework of a series of randomised controlled trials conducted in Pemba, United Republic of Tanzania, 10% of randomly selected Kato-Katz thick smears were re-read for Trichuris trichiura and Ascaris lumbricoides eggs. In cases of discordant results (i.e. positive versus negative), the slides were re-examined a third time. A result was assumed to be false-positive or false-negative if the result from the initial reading did not agree with the quality control as well as the third reading. We also evaluated the general agreement in FECs between the first and second reading, according to internal and World Health Organization (WHO) guidelines. From the 1,445 Kato-Katz thick smears subjected to quality control, 1,181 (81.7%) were positive for T. trichiura and 290 (20.1%) were positive for A. lumbricoides. During quality control, very low rates of false-positive results were observed: 0.35% (n = 5) for T. trichiura and 0.28% (n = 4) for A. lumbricoides. False-negative readings of Kato-Katz thick smears were obtained in 28 (1.94%) and 6 (0.42%) instances for T. trichiura and A. lumbricoides, respectively. A high frequency of discordant results in FECs was observed (i.e. 10.0-23.9% for T. trichiura, and 9.0-11.4% for A. lumbricoides). Our analyses show that the rate of false-positive diagnoses of soil-transmitted helminths is low. As the probability of false-positive results increases after examination of multiple stool samples from a single individual, the potential influence of false-positive results on epidemiological studies and anthelminthic drug efficacy studies should be determined. Existing WHO guidelines for quality control might be overambitious and might have to be revised, specifically with regard to handling disagreements in FECs.

  1. Is MMTV associated with human breast cancer? Maybe, but probably not.

    PubMed

    Perzova, Raisa; Abbott, Lynn; Benz, Patricia; Landas, Steve; Khan, Seema; Glaser, Jordan; Cunningham, Coleen K; Poiesz, Bernard

    2017-10-13

    Conflicting results regarding the association of MMTV with human breast cancer have been reported. Published sequence data have indicated unique MMTV strains in some human samples. However, concerns regarding contamination as a cause of false positive results have persisted. We performed PCR assays for MMTV on human breast cancer cell lines and on fresh-frozen and formalin-fixed normal and malignant human breast epithelial samples. Assays were also performed on peripheral blood mononuclear cells from volunteer blood donors and subjects at risk for human retroviral infections. In addition, assays were performed on DNA samples from wild and laboratory mice. Sequencing of MMTV-positive samples from both humans and mice was performed and the sequences phylogenetically compared. Using PCR under rigorous conditions to prevent and detect "carryover" contamination, we did detect MMTV DNA in human samples, including breast cancer. However, the results were not consistent and seemed to be an artifact. Further experiments indicated that the probable source of false positives was murine DNA, containing endogenous MMTV, present in our building. However, comparison of the MMTV sequences newly described herein with published data indicates that there are some very unique human MMTV sequences in the literature. While we could not confirm the true presence of MMTV in our human breast cancer subjects, the data indicate that further, perhaps more traditional, retroviral studies are warranted to ascertain whether MMTV might rarely be the cause of human breast cancer.

  2. Automated segmentation of ultrasonic breast lesions using statistical texture classification and active contour based on probability distance.

    PubMed

    Liu, Bo; Cheng, H D; Huang, Jianhua; Tian, Jiawei; Liu, Jiafeng; Tang, Xianglong

    2009-08-01

    Because of its complicated structure, low signal/noise ratio, low contrast and blurry boundaries, fully automated segmentation of a breast ultrasound (BUS) image is a difficult task. In this paper, a novel segmentation method for BUS images without human intervention is proposed. Unlike most published approaches, the proposed method handles the segmentation problem by using a two-step strategy: ROI generation and ROI segmentation. First, a well-trained texture classifier categorizes the tissues into different classes, and background knowledge rules are used for selecting the regions of interest (ROIs) from them. Second, a novel probability-distance-based active contour model is applied for segmenting the ROIs and finding the accurate positions of the breast tumors. The active contour model combines both global statistical information and local edge information, using a level set approach. The proposed segmentation method was performed on 103 BUS images (48 benign and 55 malignant). To validate the performance, the results were compared with the corresponding tumor regions marked by an experienced radiologist. Three error metrics, true-positive ratio (TP), false-negative ratio (FN) and false-positive ratio (FP), were used for measuring the performance of the proposed method. The final results (TP = 91.31%, FN = 8.69% and FP = 7.26%) demonstrate that the proposed method can segment BUS images efficiently, quickly and automatically.

  3. Exoplanet Biosignatures: A Framework for Their Assessment.

    PubMed

    Catling, David C; Krissansen-Totton, Joshua; Kiang, Nancy Y; Crisp, David; Robinson, Tyler D; DasSarma, Shiladitya; Rushby, Andrew J; Del Genio, Anthony; Bains, William; Domagal-Goldman, Shawn

    2018-04-20

    Finding life on exoplanets from telescopic observations is an ultimate goal of exoplanet science. Life produces gases and other substances, such as pigments, which can have distinct spectral or photometric signatures. Whether or not life is found with future data must be expressed with probabilities, requiring a framework of biosignature assessment. We present a framework in which we advocate using biogeochemical "Exo-Earth System" models to simulate potential biosignatures in spectra or photometry. Given actual observations, simulations are used to find the Bayesian likelihoods of those data occurring for scenarios with and without life. The latter includes "false positives" wherein abiotic sources mimic biosignatures. Prior knowledge of factors influencing planetary inhabitation, including previous observations, is combined with the likelihoods to give the Bayesian posterior probability of life existing on a given exoplanet. Four components of observation and analysis are necessary. (1) Characterization of stellar (e.g., age and spectrum) and exoplanetary system properties, including "external" exoplanet parameters (e.g., mass and radius), to determine an exoplanet's suitability for life. (2) Characterization of "internal" exoplanet parameters (e.g., climate) to evaluate habitability. (3) Assessment of potential biosignatures within the environmental context (components 1-2), including corroborating evidence. (4) Exclusion of false positives. We propose that resulting posterior Bayesian probabilities of life's existence map to five confidence levels, ranging from "very likely" (90-100%) to "very unlikely" (<10%) inhabited. Key Words: Bayesian statistics-Biosignatures-Drake equation-Exoplanets-Habitability-Planetary science. Astrobiology 18, xxx-xxx.
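
    A minimal sketch of the Bayesian update at the heart of this framework: combine a prior probability of inhabitation with the likelihoods of the data under "life" and "no life" (the latter including abiotic false-positive scenarios). All numbers are illustrative assumptions.

```python
# Bayesian posterior probability of life given observed data; the prior and
# both likelihoods below are assumed values for illustration only.
def posterior_life(prior_life, lik_data_given_life, lik_data_given_no_life):
    num = lik_data_given_life * prior_life
    den = num + lik_data_given_no_life * (1 - prior_life)
    return num / den

# Suppose the observed spectral signature is 30 times more likely if the
# planet is inhabited than from abiotic sources, with a modest prior.
p = posterior_life(prior_life=0.2, lik_data_given_life=0.6,
                   lik_data_given_no_life=0.02)
print(f"posterior probability of life: {p:.2f}")   # ~0.88
```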

  4. Foraging Parameters Influencing the Detection and Interpretation of Area-Restricted Search Behaviour in Marine Predators: A Case Study with the Masked Booby

    PubMed Central

    Sommerfeld, Julia; Kato, Akiko; Ropert-Coudert, Yan; Garthe, Stefan; Hindell, Mark A.

    2013-01-01

    Identification of area-restricted search (ARS) behaviour is used to better understand foraging movements and strategies of marine predators. Track-based descriptive analyses are commonly used to detect ARS behaviour, but they may be biased by factors such as foraging trip duration or non-foraging behaviours (i.e. resting on the water). Using first-passage time analysis, we tested whether (I) positions where birds rest at the sea surface during daylight falsely increase the detection of ARS behaviour and (II) short foraging trips are less likely to include ARS behaviour in Masked Boobies Sula dactylatra. We further analysed whether ARS behaviour may be used as a proxy to identify important feeding areas. Depth-acceleration and GPS loggers were simultaneously deployed on chick-rearing adults to obtain (1) location data every 4 minutes and (2) detailed foraging activity such as diving rates and time spent sitting on the water surface and in flight. In 82% of 50 foraging trips, birds adopted ARS behaviour. In 19.3% of 57 detected ARS zones, birds spent more than 70% of the total ARS duration resting on the water, suggesting that these ARS zones were falsely detected. Based on generalized linear mixed models, the probability of detecting false ARS zones was 80%. False ARS zones mostly occurred during short trips in close proximity to the colony, with low or no diving activity. This demonstrates the need to account for resting positions on the water surface when determining ARS behaviour in marine animals based on foraging locations. Dive rates were positively correlated with trip duration, and the probability of ARS behaviour increased with increasing number of dives, suggesting that the adoption of ARS behaviour in Masked Boobies is linked to enhanced foraging activity. We conclude that ARS behaviour may be used as a proxy to identify important feeding areas in this species. PMID:23717471

  5. Foraging parameters influencing the detection and interpretation of area-restricted search behaviour in marine predators: a case study with the masked booby.

    PubMed

    Sommerfeld, Julia; Kato, Akiko; Ropert-Coudert, Yan; Garthe, Stefan; Hindell, Mark A

    2013-01-01

    Identification of area-restricted search (ARS) behaviour is used to better understand foraging movements and strategies of marine predators. Track-based descriptive analyses are commonly used to detect ARS behaviour, but they may be biased by factors such as foraging trip duration or non-foraging behaviours (i.e. resting on the water). Using first-passage time analysis, we tested whether (I) positions where birds rest at the sea surface during daylight falsely increase the detection of ARS behaviour and (II) short foraging trips are less likely to include ARS behaviour in Masked Boobies Sula dactylatra. We further analysed whether ARS behaviour may be used as a proxy to identify important feeding areas. Depth-acceleration and GPS loggers were simultaneously deployed on chick-rearing adults to obtain (1) location data every 4 minutes and (2) detailed foraging activity such as diving rates and time spent sitting on the water surface and in flight. In 82% of 50 foraging trips, birds adopted ARS behaviour. In 19.3% of 57 detected ARS zones, birds spent more than 70% of the total ARS duration resting on the water, suggesting that these ARS zones were falsely detected. Based on generalized linear mixed models, the probability of detecting false ARS zones was 80%. False ARS zones mostly occurred during short trips in close proximity to the colony, with low or no diving activity. This demonstrates the need to account for resting positions on the water surface when determining ARS behaviour in marine animals based on foraging locations. Dive rates were positively correlated with trip duration, and the probability of ARS behaviour increased with increasing number of dives, suggesting that the adoption of ARS behaviour in Masked Boobies is linked to enhanced foraging activity. We conclude that ARS behaviour may be used as a proxy to identify important feeding areas in this species.

  6. Hemorrhoids detected at colonoscopy: an infrequent cause of false-positive fecal immunochemical test results.

    PubMed

    van Turenhout, Sietze T; Oort, Frank A; Terhaar sive Droste, Jochim S; Coupé, Veerle M H; van der Hulst, Rene W; Loffeld, Ruud J; Scholten, Pieter; Depla, Annekatrien C T M; Bouman, Anneke A; Meijer, Gerrit A; Mulder, Chris J J; van Rossum, Leo G M

    2012-07-01

    Colorectal cancer screening by fecal immunochemical tests (FITs) is hampered by frequent false-positive (FP) results, and thereby by the risk of complications and the strain on colonoscopy capacity. Hemorrhoids might be a plausible cause of FP results. To determine the contribution of hemorrhoids to the frequency of FP FIT results. Retrospective analysis from a prospective cohort study. Five large teaching hospitals, including 1 academic hospital. All subjects scheduled for elective colonoscopy. FIT before bowel preparation. Frequency of FP FIT results in subjects with hemorrhoids as the only relevant abnormality compared with FP FIT results in subjects with no relevant abnormalities. Logistic regression analysis to determine colonic abnormalities influencing FP results. In 2855 patients, 434 had positive FIT results: 213 had advanced neoplasia and 221 had FP results. In 9 individuals (4.1%; 95% CI, 1.4-6.8) with an FP FIT result, hemorrhoids were the only abnormality. In univariate unadjusted analysis, subjects with hemorrhoids as the only abnormality did not have more positive results (9/134; 6.7%) compared with subjects without any abnormalities (43/886; 4.9%; P = .396). Logistic regression identified hemorrhoids, nonadvanced polyps, and a group of miscellaneous abnormalities as all significantly influencing false positivity. Of 1000 subjects with hemorrhoids, 67 would have FP results, of whom 18 would have FP results because of hemorrhoids only. Limitations: potential underreporting of hemorrhoids; high-risk individuals. Hemorrhoids in individuals participating in colorectal cancer screening will probably not lead to a substantial number of false-positive test results. Copyright © 2012 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.

  7. Impact of Dengue Vaccination on Serological Diagnosis: Insights From Phase III Dengue Vaccine Efficacy Trials

    PubMed Central

    Plennevaux, Eric; Moureau, Annick; Arredondo-García, José L; Villar, Luis; Pitisuttithum, Punnee; Tran, Ngoc H; Bonaparte, Matthew; Chansinghakul, Danaya; Coronel, Diana L; L’Azou, Maïna; Ochiai, R Leon; Toh, Myew-Ling; Noriega, Fernando; Bouckenooghe, Alain

    2018-01-01

    Background: We previously reported that vaccination with the tetravalent dengue vaccine (CYD-TDV; Dengvaxia) may bias the diagnosis of dengue based on immunoglobulin M (IgM) and immunoglobulin G (IgG) assessments. Methods: We undertook a post hoc pooled analysis of febrile episodes that occurred during the active surveillance phase (the 25 months after the first study injection) of 2 pivotal phase III, placebo-controlled CYD-TDV efficacy studies that involved ≥31,000 children aged 2-16 years across 10 countries in Asia and Latin America. A virologically confirmed dengue (VCD) episode was defined by a positive test for dengue nonstructural protein 1 antigen or dengue polymerase chain reaction. A probable dengue episode was serologically defined as (1) an IgM-positive acute- or convalescent-phase sample, or (2) an IgG-positive acute-phase sample and a ≥4-fold IgG increase between acute- and convalescent-phase samples. Results: There were 1284 VCD episodes (575 and 709 in the CYD-TDV and placebo groups, respectively) and 17,673 other febrile episodes (11,668 and 6005, respectively). Compared with VCD, the sensitivity and specificity of the probable dengue definition were 93.1% and 77.2%, respectively. Overall positive and negative predictive values were 22.9% and 99.5%, respectively, reflecting the much lower probability of correctly confirming probable dengue in a population including a vaccinated cohort. Vaccination-induced bias toward false-positive diagnosis was more pronounced among individuals seronegative at baseline. Conclusions: Caution will be required when interpreting IgM and IgG data obtained during routine surveillance in those vaccinated with CYD-TDV. There is an urgent need for new practical, dengue-specific diagnostic algorithms now that CYD-TDV is approved in a number of dengue-endemic countries. Clinical Trials Registration: NCT01373281 and NCT01374516. PMID:29300876

  8. Impact of Dengue Vaccination on Serological Diagnosis: Insights From Phase III Dengue Vaccine Efficacy Trials.

    PubMed

    Plennevaux, Eric; Moureau, Annick; Arredondo-García, José L; Villar, Luis; Pitisuttithum, Punnee; Tran, Ngoc H; Bonaparte, Matthew; Chansinghakul, Danaya; Coronel, Diana L; L'Azou, Maïna; Ochiai, R Leon; Toh, Myew-Ling; Noriega, Fernando; Bouckenooghe, Alain

    2018-04-03

    We previously reported that vaccination with the tetravalent dengue vaccine (CYD-TDV; Dengvaxia) may bias the diagnosis of dengue based on immunoglobulin M (IgM) and immunoglobulin G (IgG) assessments. We undertook a post hoc pooled analysis of febrile episodes that occurred during the active surveillance phase (the 25 months after the first study injection) of 2 pivotal phase III, placebo-controlled CYD-TDV efficacy studies that involved ≥31000 children aged 2-16 years across 10 countries in Asia and Latin America. Virologically confirmed dengue (VCD) episode was defined with a positive test for dengue nonstructural protein 1 antigen or dengue polymerase chain reaction. Probable dengue episode was serologically defined as (1) IgM-positive acute- or convalescent-phase sample, or (2) IgG-positive acute-phase sample and ≥4-fold IgG increase between acute- and convalescent-phase samples. There were 1284 VCD episodes (575 and 709 in the CYD-TDV and placebo groups, respectively) and 17673 other febrile episodes (11668 and 6005, respectively). Compared with VCD, the sensitivity and specificity of probable dengue definition were 93.1% and 77.2%, respectively. Overall positive and negative predictive values were 22.9% and 99.5%, respectively, reflecting the much lower probability of correctly confirming probable dengue in a population including a vaccinated cohort. Vaccination-induced bias toward false-positive diagnosis was more pronounced among individuals seronegative at baseline. Caution will be required when interpreting IgM and IgG data obtained during routine surveillance in those vaccinated with CYD-TDV. There is an urgent need for new practical, dengue-specific diagnostic algorithms now that CYD-TDV is approved in a number of dengue-endemic countries. NCT01373281 and NCT01374516.
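
    The reported predictive values follow directly from the sensitivity, the specificity, and the case mix given above; a quick check:

```python
# Verifying that the reported predictive values follow from the sensitivity,
# specificity, and episode counts quoted in the abstract above.
vcd, other = 1284, 17673          # virologically confirmed vs other episodes
sens, spec = 0.931, 0.772

tp = sens * vcd                   # probable-dengue positives among true dengue
fp = (1 - spec) * other           # probable-dengue positives among non-dengue
fn = (1 - sens) * vcd
tn = spec * other

ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
# ~22.9% and ~99.4%, matching the reported 22.9% and 99.5% up to rounding:
# with so many non-dengue febrile episodes, most serological positives are
# false positives despite high sensitivity.
```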

  9. Error probability for RFID SAW tags with pulse position coding and peak-pulse detection.

    PubMed

    Shmaliy, Yuriy S; Plessky, Victor; Cerda-Villafaña, Gustavo; Ibarra-Manzano, Oscar

    2012-11-01

    This paper addresses the code reading error probability (EP) in radio-frequency identification (RFID) SAW tags with pulse position coding (PPC) and peak-pulse detection. EP is found in its most general form, assuming M groups of codes with N slots each and allowing individual SNRs in each slot. The basic case of zero signal in all off-pulses and equal signals in all on-pulses is investigated in detail. We show that if a SAW tag with PPC is designed such that the spurious responses are attenuated by more than 20 dB below the on-pulses, then an EP at the level of 10⁻⁸ (one false read per 10⁸ readings) can be achieved with SNR >17 dB for any reasonable M and N. The tag reader range is estimated as a function of the transmitted power and EP.
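
    A Monte Carlo sketch of peak-pulse detection with pulse position coding, using a toy additive-Gaussian slot model rather than the paper's SAW signal model; the group/slot counts and SNR are assumed values:

```python
# Toy model of peak-pulse detection with PPC: in each group of N slots the
# reader picks the slot with the largest amplitude; a read error occurs when
# a noise-only slot beats the on-pulse. Gaussian noise, assumed parameters.
import numpy as np

rng = np.random.default_rng(5)
M, N = 4, 16                      # groups per tag, slots per group (assumed)
snr_db = 14.0
amp = 10 ** (snr_db / 20)         # on-pulse amplitude relative to unit noise

trials, errors = 100_000, 0
for _ in range(trials):
    slots = rng.standard_normal((M, N))   # noise in every slot
    slots[:, 0] += amp                    # signal in slot 0 of each group
    errors += np.any(slots.argmax(axis=1) != 0)
print(f"tag read-error probability at {snr_db} dB: {errors / trials:.2e}")
# Raising the SNR by a few dB drives this error probability down by orders
# of magnitude, consistent with the trend described above.
```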

  10. Extensive testing or focused testing of patients with elevated liver enzymes.

    PubMed

    Tapper, Elliot B; Saini, Sameer D; Sengupta, Neil

    2017-02-01

    Many patients have elevated serum aminotransferases reflecting many underlying conditions, both common and rare. Clinicians generally apply one of two evaluative strategies: testing for all diseases at once (extensive) or just common diseases first (focused). We simulated the evaluation of 10,000 adult outpatients with elevated alanine aminotransferase to compare both testing strategies. Model inputs employed population-based data from the US (National Health and Nutrition Examination Survey) and Britain (Birmingham and Lambeth Liver Evaluation Testing Strategies). Patients were followed until a diagnosis was provided or a diagnostic liver biopsy was considered. The primary outcome was US dollars per diagnosis. Secondary outcomes included doctor visits per diagnosis, false-positives per diagnosis and confirmatory liver biopsies ordered. The extensive testing strategy required the lowest monetary cost, yielding diagnoses for 54% of patients at $448/patient compared to 53% for $502 under the focused strategy. The extensive strategy also required fewer doctor visits (1.35 vs. 1.61 visits/patient). However, the focused strategy generated fewer false-positives (0.1 vs. 0.19/patient) and fewer biopsies (0.04 vs. 0.08/patient). Focused testing becomes the most cost-effective strategy when accounting for pre-test probabilities and prior evaluations performed. This includes when the respective prevalence of alcoholic, non-alcoholic and drug-induced liver disease exceeds 51.1%, 53.0% and 13.0%. Focused testing is also the most cost-effective strategy in the referral setting, where assessments for viral hepatitis, alcoholic and non-alcoholic fatty liver disease have already been performed. Testing for elevated liver enzymes should be deliberate and focused to account for pre-test probabilities if possible. Many patients have elevated liver enzymes reflecting one of many possible liver diseases, some of which are very common and some of which are rare. Tests are widely available for most causes, but it is unclear whether clinicians should order them all at once or direct testing based on how likely a given disease may be given the patient's history and physical exam. The tradeoffs of both approaches involve the money spent on testing, the number of office visits needed, and the false positive results generated. This study shows that if there are no clues available at the time of evaluation, testing all at once saves time and money while causing more false positives. However, if there are strong clues regarding the likelihood of a particular disease, limited testing saves time and money and prevents false positives. Copyright © 2016 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
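
    Expressed as the primary outcome (US dollars per diagnosis), the per-patient figures above convert directly; a minimal sketch, assuming cost per diagnosis is simply per-patient cost divided by diagnostic yield:

    ```python
    # Cost per diagnosis for each testing strategy (figures from the abstract).
    strategies = {
        "extensive": {"cost_per_patient": 448.0, "diagnostic_yield": 0.54},
        "focused":   {"cost_per_patient": 502.0, "diagnostic_yield": 0.53},
    }
    for name, s in strategies.items():
        print(f"{name}: ${s['cost_per_patient'] / s['diagnostic_yield']:,.0f} per diagnosis")
    # extensive: ~$830 per diagnosis; focused: ~$947 per diagnosis
    ```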

  11. Statistical Significance of Periodicity and Log-Periodicity with Heavy-Tailed Correlated Noise

    NASA Astrophysics Data System (ADS)

    Zhou, Wei-Xing; Sornette, Didier

    We estimate the probability that random noise, of several plausible standard distributions, creates a false alarm that a periodicity (or log-periodicity) is found in a time series. The solution of this problem is already known for independent Gaussian distributed noise. We investigate more general situations with non-Gaussian correlated noises and present synthetic tests on the detectability and statistical significance of periodic components. A periodic component of a time series is usually detected by some sort of Fourier analysis. Here, we use the Lomb periodogram analysis, which is suitable and outperforms Fourier transforms for unevenly sampled time series. We examine the false-alarm probability of the largest spectral peak of the Lomb periodogram in the presence of power-law distributed noises, of short-range and of long-range fractional-Gaussian noises. Increasing heavy-tailedness (respectively, correlations describing persistence) tends to decrease (respectively, increase) the false-alarm probability of finding a large spurious Lomb peak. Increasing anti-persistence tends to decrease the false-alarm probability. We also study the interplay between heavy-tailedness and long-range correlations. In order to fully determine if a Lomb peak signals a genuine rather than a spurious periodicity, one should in principle characterize the Lomb peak height, its width and its relations to other peaks in the complete spectrum. As a step towards this full characterization, we construct the joint distribution of the frequency position (relative to other peaks) and of the height of the highest peak of the power spectrum. We also provide the distributions of the ratio of the highest Lomb peak to the second highest one. Using the insight obtained by the present statistical study, we re-examine previously reported claims of "log-periodicity" and find that the credibility for log-periodicity in 2D freely decaying turbulence is weakened, while it is strengthened for fracture, for the ion signature prior to the Kobe earthquake and for financial markets.
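
    For reference, the independent-Gaussian baseline that the authors generalize away from can be written down directly: for a normalized Lomb peak of height z and M effectively independent frequencies, FAP = 1 - (1 - e^(-z))^M (Scargle 1982). A short sketch on a pure-noise series (the choice of M here is a rough assumption):

    ```python
    # False-alarm probability of the highest Lomb peak under independent Gaussian noise.
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0, 100, 200))    # unevenly sampled observation times
    y = rng.normal(size=t.size)              # pure noise, no periodic component
    freqs = np.linspace(0.01, 5.0, 2000)     # angular frequencies to scan

    power = lombscargle(t, y - y.mean(), freqs)
    z = power.max() / y.var()                # normalized height of the largest peak
    M = 200                                  # rough number of independent frequencies
    print(f"peak z = {z:.2f}, Gaussian-noise FAP = {1 - (1 - np.exp(-z)) ** M:.3f}")
    ```

    Heavy tails and correlations shift the true FAP away from this formula, which is the paper's central point.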

  12. Decision curve analysis: a novel method for evaluating prediction models.

    PubMed

    Vickers, Andrew J; Elkin, Elena B

    2006-01-01

    Diagnostic and prognostic models are typically evaluated with measures of accuracy that do not address clinical consequences. Decision-analytic techniques allow assessment of clinical outcomes but often require collection of additional information and may be cumbersome to apply to models that yield a continuous result. The authors sought a method for evaluating and comparing prediction models that incorporates clinical consequences, requires only the data set on which the models are tested, and can be applied to models that have either continuous or dichotomous results. The authors describe decision curve analysis, a simple, novel method of evaluating predictive models. They start by assuming that the threshold probability of a disease or event at which a patient would opt for treatment is informative of how the patient weighs the relative harms of a false-positive and a false-negative prediction. This theoretical relationship is then used to derive the net benefit of the model across different threshold probabilities. Plotting net benefit against threshold probability yields the "decision curve." The authors apply the method to models for the prediction of seminal vesicle invasion in prostate cancer patients. Decision curve analysis identified the range of threshold probabilities in which a model was of value, the magnitude of benefit, and which of several models was optimal. Decision curve analysis is a suitable method for evaluating alternative diagnostic and prognostic strategies that has advantages over other commonly used measures and techniques.
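
    The quantity being plotted is simple to compute. A minimal sketch of the net-benefit calculation, following the definition in the paper (the threshold probability p_t encodes how a false positive is weighed against a true positive):

    ```python
    # Net benefit of acting on model predictions: NB = TP/n - FP/n * pt/(1 - pt).
    import numpy as np

    def net_benefit(y_true, y_prob, thresholds):
        y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
        n = len(y_true)
        nb = []
        for pt in thresholds:
            treat = y_prob >= pt
            tp = np.sum(treat & (y_true == 1))
            fp = np.sum(treat & (y_true == 0))
            nb.append(tp / n - fp / n * pt / (1 - pt))
        return np.array(nb)

    # Plotting net_benefit against thresholds (plus the "treat all" and
    # "treat none" reference strategies) yields the decision curve.
    ```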

  13. Terahertz Imaging and Backscatter Radiography Probability of Detection Study for Space Shuttle Foam Inspections

    NASA Technical Reports Server (NTRS)

    Ussery, Warren; Johnson, Kenneth; Walker, James; Rummel, Ward

    2008-01-01

    This slide presentation reviews the use of terahertz imaging and backscatter radiography in a probability of detection study of foam on the external tank (ET), which can shed and damage the shuttle orbiter. Non-destructive examination (NDE) is performed as one method of preventing critical foam debris during launch. Conventional NDE methods for inspecting the foam are assessed and their deficiencies reviewed. Two NDE inspection methods are reviewed: Backscatter Radiography (BSX) and Terahertz (THz) Imaging. The purpose of the probability of detection (POD) study was to assess the performance and reliability of BSX and/or THz as appropriate NDE methods. The study used a test article with inserted defects, and a sample of blanks was included to test for false positives. The results of the POD study are reported.
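
    The slides do not spell out the statistical model, but hit/miss POD studies of this kind are conventionally summarized by fitting detection probability against flaw size with a logistic curve (in the spirit of MIL-HDBK-1823). A hedged sketch with made-up placeholder data:

    ```python
    # Hit/miss probability-of-detection curve via logistic regression on log flaw size.
    import numpy as np
    import statsmodels.api as sm

    size = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 16.])  # flaw size, mm (placeholder)
    hit  = np.array([0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1])       # detected or not (placeholder)

    fit = sm.Logit(hit, sm.add_constant(np.log(size))).fit(disp=False)
    pod = fit.predict(sm.add_constant(np.log(np.array([4.0, 8.0]))))
    print(pod)    # estimated POD at 4 mm and 8 mm flaw sizes
    # Blanks with no inserted defect estimate the false-positive rate separately.
    ```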

  14. Probability of a false-negative HIV antibody test result during the window period: a tool for pre- and post-test counselling.

    PubMed

    Taylor, Darlene; Durigon, Monica; Davis, Heather; Archibald, Chris; Konrad, Bernhard; Coombs, Daniel; Gilbert, Mark; Cook, Darrel; Krajden, Mel; Wong, Tom; Ogilvie, Gina

    2015-03-01

    Failure to understand the risk of false-negative HIV test results during the window period results in anxiety. Patients typically want accurate test results as soon as possible, while clinicians prefer to wait until the probability of a false negative is virtually nil. This review summarizes the median window periods for third-generation antibody and fourth-generation HIV tests and provides the probability of a false-negative result for various days post-exposure. Data were extracted from published seroconversion panels. A 10-day eclipse period was used to estimate days from infection to first detection of HIV RNA. Median (interquartile range) days to seroconversion were calculated, and probabilities of a false-negative result at various time periods post-exposure are reported. The median (interquartile range) window period was 22 days (19-25) for third-generation tests and 18 days (16-24) for fourth-generation tests. The probability of a false-negative result is 0.01 at 80 days post-exposure for third-generation tests and at 42 days for fourth-generation tests. The table of probabilities of false-negative HIV test results may be useful during pre- and post-test HIV counselling to inform co-decision making regarding the ideal time to test for HIV. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
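
    In counselling terms, the tabulated false-negative probability combines with the patient's pre-test risk through Bayes' rule. A minimal sketch, assuming the test is effectively perfectly specific:

    ```python
    # Probability of infection despite a negative result at a given day post-exposure.
    def p_infected_given_negative(pretest, p_false_negative):
        miss = pretest * p_false_negative      # infected, but still in window period
        clear = 1 - pretest                    # not infected (specificity assumed ~1)
        return miss / (miss + clear)

    # e.g. 10% pre-test risk, 4th-generation test at 42 days (FN probability 0.01):
    print(p_infected_given_negative(0.10, 0.01))    # ~0.0011
    ```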

  15. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method.

    PubMed

    Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and qPCR reaction would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator where a universal assay for fecal sources of that indicator exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
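
    A stripped-down, single-host version of the idea can be sketched as follows; the distributions below are placeholders rather than the paper's estimates, and the real model handles several host groups jointly:

    ```python
    # Monte Carlo propagation of qPCR false-positive/false-negative probabilities
    # and precision error to a distribution of true marker concentrations.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    c_obs = 3.0                                  # measured log10 copies per 100 ml

    p_fn = rng.beta(2, 18, n)                    # false-negative probability (~10%)
    bg   = rng.normal(0.3, 0.1, n).clip(min=0)   # false-positive background signal
    err  = rng.normal(0.0, 0.1, n)               # qPCR precision error (log10 scale)

    c_true = (c_obs + err - bg) / (1 - p_fn)     # invert the total-probability model
    print(np.mean(c_true), np.percentile(c_true, [2.5, 97.5]))
    ```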

  16. Statistical parsimony networks and species assemblages in cephalotrichid nemerteans (Nemertea).

    PubMed

    Chen, Haixia; Strand, Malin; Norenburg, Jon L; Sun, Shichun; Kajihara, Hiroshi; Chernyshev, Alexey V; Maslakova, Svetlana A; Sundberg, Per

    2010-09-21

    It has been suggested that statistical parsimony network analysis could be used to get an indication of species represented in a set of nucleotide data, and the approach has been used to discuss species boundaries in some taxa. Based on 635 base pairs of the mitochondrial protein-coding gene cytochrome c oxidase I (COI), we analyzed 152 nemertean specimens using statistical parsimony network analysis with the connection probability set to 95%. The analysis revealed 15 distinct networks together with seven singletons. Statistical parsimony yielded three networks supporting the species status of Cephalothrix rufifrons, C. major and C. spiralis as they currently have been delineated by morphological characters and geographical location. Many other networks contained haplotypes from nearby geographical locations. Cladistic structure by maximum likelihood analysis overall supported the network analysis, but indicated a false positive result where subnetworks should have been connected into one network/species. This probably is caused by undersampling of the intraspecific haplotype diversity. Statistical parsimony network analysis provides a rapid and useful tool for detecting possible undescribed/cryptic species among cephalotrichid nemerteans based on COI gene. It should be combined with phylogenetic analysis to get indications of false positive results, i.e., subnetworks that would have been connected with more extensive haplotype sampling.

  17. On the feasibility of linking census samples to the National Death Index for epidemiologic studies: a progress report.

    PubMed Central

    Rogot, E; Feinleib, M; Ockay, K A; Schwartz, S H; Bilgrad, R; Patterson, J E

    1983-01-01

    To test the feasibility of using large national probability samples provided by the US Census Bureau, a pilot project was initiated to link 230,000 Census-type records to the National Death Index (NDI). Using strict precautions to maintain the complete confidentiality of individual records, the Current Population Survey files of one month in 1973 and one month in 1978 were matched by computer to the 1979 NDI file. The basic question to be addressed was whether deaths so obtained are seriously underestimated when there is no Social Security Number (SSN) in the Census record. The search of the NDI file resulted in 5,542 matches, of which about 1,800 appear to be "true positives" representing deaths; the remainder are "false positives." Of the deaths, 80 per cent would still have been detected without SSN in the Census record. The main reasons for missing deaths (false negatives) were discrepancies in the year of birth and in the given name. Assuming certain changes in the NDI matching algorithm, the 80 per cent figure could increase to 85 per cent or higher; however, this could also cause significant increases in the number of false positives. The National Heart, Lung and Blood Institute (NHLBI) and Census Bureau staff are currently developing a probabilistic method to eliminate false positives from the NDI output tape. The results of the pilot study indicate that a larger research project is clearly feasible. PMID:6625029

  18. Visual representation of statistical information improves diagnostic inferences in doctors and their patients.

    PubMed

    Garcia-Retamero, Rocio; Hoffrage, Ulrich

    2013-04-01

    Doctors and patients have difficulty inferring the predictive value of a medical test from information about the prevalence of a disease and the sensitivity and false-positive rate of the test. Previous research has established that communicating such information in a format the human mind is adapted to (natural frequencies), as compared to probabilities, boosts the accuracy of diagnostic inferences. In a study, we investigated to what extent these inferences can be improved, beyond the effect of natural frequencies, by providing visual aids. Participants were 81 doctors and 81 patients who made diagnostic inferences about three medical tests on the basis of information about the prevalence of a disease and the sensitivity and false-positive rate of the tests. Half of the participants received the information in natural frequencies, while the other half received the information in probabilities. Half of the participants only received numerical information, while the other half additionally received a visual aid representing the numerical information. In addition, participants completed a numeracy scale. Our study showed three important findings: (1) doctors and patients made more accurate inferences when information was communicated in natural frequencies as compared to probabilities; (2) visual aids boosted accuracy even when the information was provided in natural frequencies; and (3) doctors were more accurate in their diagnostic inferences than patients, though differences in accuracy disappeared when differences in numerical skills were controlled for. Our findings have important implications for medical practice as they suggest suitable ways to communicate quantitative medical data. Copyright © 2013 Elsevier Ltd. All rights reserved.
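
    The natural-frequency format is easy to illustrate with a hypothetical test (prevalence 1%, sensitivity 90%, false-positive rate 9%); restated as counts in a population, the predictive value becomes immediate:

    ```python
    # Natural frequencies: the same Bayes computation phrased as whole-number counts.
    population = 10_000
    sick = int(population * 0.01)                # 100 people actually have the disease
    true_pos = int(sick * 0.90)                  # 90 of them test positive
    false_pos = int((population - sick) * 0.09)  # 891 healthy people also test positive

    ppv = true_pos / (true_pos + false_pos)
    print(f"{true_pos} of {true_pos + false_pos} positives are real -> PPV {ppv:.1%}")
    # "90 out of 981 positives" is the framing that participants judged correctly.
    ```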

  19. Application of artificial neural network to search for gravitational-wave signals associated with short gamma-ray bursts

    NASA Astrophysics Data System (ADS)

    Kim, Kyungmin; Harry, Ian W.; Hodge, Kari A.; Kim, Young-Min; Lee, Chang-Hwan; Lee, Hyun Kyu; Oh, John J.; Oh, Sang Hoon; Son, Edwin J.

    2015-12-01

    We apply a machine learning algorithm, the artificial neural network, to the search for gravitational-wave signals associated with short gamma-ray bursts (GRBs). The multi-dimensional samples, consisting of data corresponding to the statistical and physical quantities from the coherent search pipeline, are fed into the artificial neural network to distinguish simulated gravitational-wave signals from background noise artifacts. Our result shows that the data classification efficiency at a fixed false alarm probability (FAP) is improved by the artificial neural network in comparison to the conventional detection statistic. Specifically, the distance at 50% detection probability at a fixed false positive rate is increased by about 8%-14% for the considered waveform models. We also evaluate a few seconds of gravitational-wave data using the trained networks and obtain the FAP. We suggest that the artificial neural network can be a complementary method to the conventional detection statistic for identifying gravitational-wave signals related to short GRBs.

  20. Objective assessment of plaster cast quality in pediatric distal forearm fractures: Is there an optimal index?

    PubMed

    Labronici, Pedro José; Ferreira, Leonardo Termis; Dos Santos Filho, Fernando Claudino; Pires, Robinson Esteves Santos; Gomes, Davi Coutinho Fonseca Fernandes; da Silva, Luiz Henrique Penteado; Gameiro, Vinicius Schott

    2017-02-01

    Several so-called casting indices are available for objective evaluation of plaster cast quality. The present study sought to investigate four of these indices (gap index, padding index, Canterbury index, and three-point index) as compared to a reference standard (cast index) for evaluation of plaster cast quality after closed reduction of pediatric displaced distal forearm fractures. Forty-three radiographs from patients with displaced distal forearm fractures requiring manipulation were reviewed. Accuracy, sensitivity, specificity, false-positive probability, false-negative probability, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio were calculated for each of the tested indices. Comparison among indices revealed diagnostic agreement in only 4.7% of cases. The strongest correlation with the cast index was found for the gap index, with a Spearman correlation coefficient of 0.94. The gap index also displayed the best agreement with the cast index, with both indices yielding the same result in 79.1% of assessments. When seeking to assess plaster cast quality, the cast index and gap index should be calculated; if both indices agree, a decision on quality can be made. If the cast and gap indices disagree, the padding index can be calculated as a tiebreaker, and the decision based on the most frequent of the three results. Calculation of the three-point index and Canterbury index appears unnecessary. Copyright © 2016 Elsevier Ltd. All rights reserved.
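
    All of the reported accuracy measures derive from a 2x2 table of an index's judgment against the cast-index reference. A short sketch with illustrative counts (not the study's data):

    ```python
    # Diagnostic accuracy metrics from a 2x2 table (counts are illustrative).
    tp, fp, fn, tn = 20, 4, 3, 16

    sens, spec = tp / (tp + fn), tn / (tn + fp)
    print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
    print(f"false-positive prob. {1 - spec:.2f}, false-negative prob. {1 - sens:.2f}")
    print(f"PPV {tp / (tp + fp):.2f}, NPV {tn / (tn + fn):.2f}")
    print(f"LR+ {sens / (1 - spec):.2f}, LR- {(1 - sens) / spec:.2f}")
    print(f"accuracy {(tp + tn) / (tp + fp + fn + tn):.2f}")
    ```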

  1. High-sensitivity, high-selectivity detection of chemical warfare agents

    NASA Astrophysics Data System (ADS)

    Pushkarsky, Michael B.; Webber, Michael E.; Macdonald, Tyson; Patel, C. Kumar N.

    2006-01-01

    We report high-sensitivity detection of chemical warfare agents (nerve gases) with very low probability of false positives (PFP). We demonstrate a detection threshold of 1.2 ppb (7.7 μg/m³ equivalent of Sarin) with a PFP of <1:10⁶ in the presence of many interfering gases present in an urban environment through the detection of diisopropyl methylphosphonate, an accepted relatively harmless surrogate for the nerve agents. For the current measurement time of ~60 s, a PFP of 1:10⁶ corresponds to one false alarm approximately every 23 months. The demonstrated performance satisfies most current homeland and military security requirements.
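
    The false-alarm interval follows directly from the per-measurement PFP and the measurement time; a one-line check of the abstract's figure:

    ```python
    # One measurement every ~60 s, per-measurement false-positive probability 1e-6.
    seconds_per_false_alarm = 60 / 1e-6                    # expected time between alarms
    print(seconds_per_false_alarm / (3600 * 24 * 30.44))   # ~22.8 months, as stated
    ```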

  2. Acoustic intrusion detection and positioning system

    NASA Astrophysics Data System (ADS)

    Berman, Ohad; Zalevsky, Zeev

    2002-08-01

    Acoustic sensors are becoming more and more applicable as a military battlefield technology. These sensors allow detection and direction estimation with a low false alarm rate and a high probability of detection. The recent technological progress in these fields of research, together with the evolution of sophisticated algorithms, allows the successful integration of these sensors into battlefield technologies. In this paper, the performance of an acoustic sensor for the detection of aerial vehicles is investigated and analyzed.

  3. Performance of species occurrence estimators when basic assumptions are not met: a test using field data where true occupancy status is known

    USGS Publications Warehouse

    Miller, David A. W.; Bailey, Larissa L.; Grant, Evan H. Campbell; McClintock, Brett T.; Weir, Linda A.; Simons, Theodore R.

    2015-01-01

    Our results demonstrate that even small probabilities of misidentification and among-site detection heterogeneity can have severe effects on estimator reliability if ignored. We challenge researchers to place greater attention on both heterogeneity and false positives when designing and analysing occupancy studies. We provide 9 specific recommendations for the design, implementation and analysis of occupancy studies to better meet this challenge.

  4. Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).

    PubMed

    Thatcher, R W; North, D; Biver, C

    2005-01-01

    This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal as abnormal) at the P < .025 level of probability (two-tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and the Key Institute's LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics was compared using a "leave-one-out" cross-validation method in which individual normal subjects were withdrawn and then statistically classified as being either normal or abnormal based on the remaining subjects. Log10 transforms approximated a Gaussian distribution with 95% to 99% accuracy. Parametric Z score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, and the range over the 2,394 gray matter pixels was 27.66% to 0.11%. At P < .01, parametric Z score cross-validation false positives averaged 0.26% and ranged from 6.65% to 0%. The non-parametric Key Institute's t-max statistic at P < .05 had an average misclassification error rate of 7.64% and ranged from 43.37% to 0.04% false positives. The non-parametric t-max at P < .01 had an average misclassification rate of 6.67% and ranged from 41.34% to 0% false positives over the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, adequate approximation to a Gaussian distribution and high cross-validation accuracy can be achieved with the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false positive rates than the non-parametric tests.
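
    The parametric arm of that comparison is easy to emulate. A hedged sketch of the leave-one-out Z-score classification (synthetic data here; the real analysis used the LORETA pixel values):

    ```python
    # Leave-one-out Z-score classification of log10-transformed values; every
    # subject is normal, so any |Z| beyond the criterion is a false positive.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.log10(rng.lognormal(0.0, 0.5, size=(43, 2394)))   # subjects x pixels
    z_crit = 1.96                                            # P < .05, two-tailed

    rates = []
    for i in range(x.shape[0]):
        rest = np.delete(x, i, axis=0)
        z = (x[i] - rest.mean(axis=0)) / rest.std(axis=0, ddof=1)
        rates.append(np.mean(np.abs(z) > z_crit))
    print(f"average false-positive rate: {np.mean(rates):.2%}")   # ~5% expected
    ```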

  5. The meaning of diagnostic test results: a spreadsheet for swift data analysis.

    PubMed

    Maceneaney, P M; Malone, D E

    2000-03-01

    To design a spreadsheet program to: (a) rapidly analyse diagnostic test result data produced in local research or reported in the literature; (b) correct reported predictive values for disease prevalence in any population; (c) estimate the post-test probability of disease in individual patients. Microsoft Excel™ was used. Section A: a contingency (2 x 2) table was incorporated into the spreadsheet. Formulae for standard calculations [sample size, disease prevalence, sensitivity and specificity with 95% confidence intervals, predictive values and likelihood ratios (LRs)] were linked to this table. The results change automatically when the data in the true or false negative and positive cells are changed. Section B: this estimates predictive values in any population, compensating for altered disease prevalence. Sections C-F: Bayes' theorem was incorporated to generate individual post-test probabilities. The spreadsheet generates 95% confidence intervals, LRs and a table and graph of conditional probabilities once the sensitivity and specificity of the test are entered. The latter shows the expected post-test probability of disease for any pre-test probability when a test of known sensitivity and specificity is positive or negative. This spreadsheet can be used on desktop and palmtop computers. The MS Excel™ version can be downloaded via the Internet from the URL ftp://radiography.com/pub/Rad-data99.xls A spreadsheet is useful for contingency table data analysis and assessment of the clinical meaning of diagnostic test results. Copyright 2000 The Royal College of Radiologists.
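
    The individual post-test probabilities in sections C-F reduce to Bayes' theorem in odds-likelihood form; a minimal sketch of that computation:

    ```python
    # Post-test probability from pre-test probability, sensitivity and specificity.
    def post_test_probability(pretest, sens, spec, positive=True):
        lr = sens / (1 - spec) if positive else (1 - sens) / spec
        post_odds = pretest / (1 - pretest) * lr
        return post_odds / (1 + post_odds)

    # e.g. sensitivity 90%, specificity 85%, pre-test probability 30%:
    print(post_test_probability(0.30, 0.90, 0.85, positive=True))   # ~0.72
    print(post_test_probability(0.30, 0.90, 0.85, positive=False))  # ~0.05
    ```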

  6. [The use of galactomannan detection in diagnosing invasive aspergillosis in hemato-oncological patients].

    PubMed

    Rácil, Z; Kocmanová, I; Wagnerová, B; Winterová, J; Lengerová, M; Moulis, M; Mayer, J

    2008-01-01

    PREMISES AND OBJECTIVES: Timely diagnosis is of critical importance for the prognosis of invasive aspergillosis (IA) patients. Over recent years, detection of galactomannan using the ELISA method has assumed growing importance in IA diagnostics. The objective of the study was to analyse the usability of the method in the current clinical practice of a hemato-oncological ward. From May 2003 to October 2006, blood samples were taken from patients at IA risk to detect galactomannan (GM) in serum using the ELISA method. The patients who underwent the tests were classified by the probability of IA presence on the basis of the results of conventional diagnostic methods and autopsy findings. A total of 11,360 serum samples from 911 adult patients were tested for GM presence. IA (probable/proven) was diagnosed in 42 (4.6%) of them. The rates of sensitivity, specificity, positive and negative predictive value of galactomannan detection for IA diagnosis in our ward were, respectively, 95.2%, 90.0%, 31.5% and 99.7%. The principal causes of the limited positive predictive value of the test were the high percentage of false-positive test results (mainly caused by concomitant administration of some penicillin antibiotics or Plasma-Lyte infusion solution), as well as the fact that a large percentage of the patients we examined fell within the group of patients with hematological malignancy with a very low prevalence of IA. GM detection in serum is associated with high sensitivity and an excellent negative predictive value in IA diagnosis in hemato-oncological patients. Knowledge and elimination of possible causes of false-positive results, as well as focusing the screening on patients at greatest risk of infection, are necessary for even better exploitation of the test.

  7. Determining open cluster membership. A Bayesian framework for quantitative member classification

    NASA Astrophysics Data System (ADS)

    Stott, Jonathan J.

    2018-01-01

    Aims: My goal is to develop a quantitative algorithm for assessing open cluster membership probabilities. The algorithm is designed to work with single-epoch observations. In its simplest form, only one set of program images and one set of reference images are required. Methods: The algorithm is based on a two-stage joint astrometric and photometric assessment of cluster membership probabilities. The probabilities were computed within a Bayesian framework using any available prior information. Where possible, the algorithm emphasizes simplicity over mathematical sophistication. Results: The algorithm was implemented and tested against three observational fields using published survey data. M 67 and NGC 654 were selected as cluster examples while a third, cluster-free, field was used for the final test data set. The algorithm shows good quantitative agreement with the existing surveys and has a false-positive rate significantly lower than the astrometric or photometric methods used individually.
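
    The two-stage Bayesian combination can be sketched generically; the Gaussian likelihoods and parameter values below are placeholders for illustration, not the paper's actual model:

    ```python
    # Membership probability from joint astrometric and photometric likelihoods.
    from scipy.stats import norm

    def membership_probability(pm_offset, cmd_offset, prior=0.3):
        # cluster hypothesis: tight proper-motion and colour-magnitude scatter
        l_cl = norm.pdf(pm_offset, 0, 1.0) * norm.pdf(cmd_offset, 0, 0.05)
        # field hypothesis: broad scatter in both observables
        l_fd = norm.pdf(pm_offset, 0, 5.0) * norm.pdf(cmd_offset, 0, 0.5)
        return prior * l_cl / (prior * l_cl + (1 - prior) * l_fd)

    print(membership_probability(0.5, 0.02))   # consistent with the cluster
    print(membership_probability(6.0, 0.40))   # almost certainly a field star
    ```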

  8. [Streptococcal pharyngitis: clinical suspicion versus diagnosis].

    PubMed

    Morais, Sofia; Teles, Andreia; Ramalheira, Elmano; Roseta, José

    2009-01-01

    Pharyngitis is a very prevalent illness in the ambulatory care setting. Its diagnosis is a challenge, especially in differentiating between viral and streptococcal causes. A form was used to register clinical and laboratory data; a throat swab for culture was obtained from all children who presented to the emergency department with sore throat and/or signs of pharyngitis/tonsillitis over a period of three months (15 April to 15 July 2006). The signs and symptoms, prescribed antibiotherapy and frequency of false diagnoses were evaluated, and clinical suspicion was compared with the diagnosis by culture. 158 children were evaluated, with a median age of four years and a male predominance (56%). The greatest number of cases occurred in the first fifteen days of May. Forty-three percent of the cultures were positive for Streptococcus pyogenes. The most frequent signs and symptoms in pharyngitis were pharyngeal erythema (98%), fever (86%) and sore throat (78%). A statistically significant difference was found for cough, scarlatiniform rash, tonsillar exudate, palatal petechiae and tonsillar swelling. Of the signs and symptoms studied, only three presented a positive predictive value above 50%: scarlatiniform rash (85%), palatal petechiae (63%) and cough (57%). The presence of tonsillar exudate had a positive predictive value for non-streptococcal pharyngitis of 70%. Fifty-three percent of the doctors considered streptococcal pharyngitis highly probable, and of these, 56% had a positive culture for Streptococcus. Among those who considered it of low probability, the culture was positive in 28%. There were 37% false diagnoses. The distinction between streptococcal and non-streptococcal pharyngitis is not always correct when based on clinical characteristics. The use of diagnostic tests is important to avoid unnecessary antibiotherapy and to allow its correct use in positive cases.

  9. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    PubMed

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.

  10. Capturing the complexity of uncertainty language to maximise its use.

    NASA Astrophysics Data System (ADS)

    Juanchich, Marie; Sirota, Miroslav

    2016-04-01

    Uncertainty is often communicated verbally, using uncertainty phrases such as 'there is a small risk of earthquake', 'flooding is possible' or 'it is very likely the sea level will rise'. Prior research has only examined a limited number of properties of uncertainty phrases: mainly the probability conveyed (e.g., 'a small chance' conveys a small probability whereas 'it is likely' conveys a high probability). We propose a new analytical framework that captures more of the complexity of uncertainty phrases by studying their semantic, pragmatic and syntactic properties. Further, we argue that the complexity of uncertainty phrases is functional and can be leveraged to best describe uncertain outcomes and achieve the goals of speakers. We present findings from a corpus study and an experiment in which we assessed the following properties of uncertainty phrases: probability conveyed, subjectivity, valence, nature of the subject, grammatical category of the uncertainty quantifier, and whether the quantifier elicits a positive or a negative framing. Natural language processing techniques applied to corpus data showed that people use a very large variety of uncertainty phrases representing different configurations of these properties (e.g., phrases that convey different levels of subjectivity, phrases with different grammatical constructions). In addition, the corpus analysis uncovered that the uncertainty phrases commonly studied in psychology are not the most commonly used in real life. In the experiment we manipulated the amount of evidence indicating that a fact was true and whether the participant was required to prove that the fact was true or that it was false. Participants produced a phrase to communicate the likelihood that the fact was true (e.g., 'it is not sure…', 'I am convinced that…'). The analyses of the phrases produced showed that participants leveraged the properties of uncertainty phrases to reflect the strength of evidence but also to achieve their personal goals. For example, participants aiming to prove that the fact was true chose words that conveyed a more positive polarity and a higher probability than participants aiming to prove that the fact was false. We discuss the utility of the framework for harnessing the properties of uncertainty phrases in the geosciences.

  11. Field Synopsis and Re-analysis of Systematic Meta-analyses of Genetic Association Studies in Multiple Sclerosis: a Bayesian Approach.

    PubMed

    Park, Jae Hyon; Kim, Joo Hi; Jo, Kye Eun; Na, Se Whan; Eisenhut, Michael; Kronbichler, Andreas; Lee, Keum Hwa; Shin, Jae Il

    2018-07-01

    To provide an up-to-date summary of multiple sclerosis-susceptible gene variants and assess the noteworthiness in hopes of finding true associations, we investigated the results of 44 meta-analyses on gene variants and multiple sclerosis published through December 2016. Out of 70 statistically significant genotype associations, roughly a fifth (21%) of the comparisons showed noteworthy false-positive rate probability (FPRP) at a statistical power to detect an OR of 1.5 and at a prior probability of 10⁻⁶ assumed for a random single nucleotide polymorphism. These associations (IRF8/rs17445836, STAT3/rs744166, HLA/rs4959093, HLA/rs2647046, HLA/rs7382297, HLA/rs17421624, HLA/rs2517646, HLA/rs9261491, HLA/rs2857439, HLA/rs16896944, HLA/rs3132671, HLA/rs2857435, HLA/rs9261471, HLA/rs2523393, HLA-DRB1/rs3135388, RGS1/rs2760524, PTGER4/rs9292777) also showed a noteworthy Bayesian false discovery probability (BFDP) and one additional association (CD24 rs8734/rs52812045) was also noteworthy via BFDP computation. Herein, we have identified several noteworthy biomarkers of multiple sclerosis susceptibility. We hope these data are used to study multiple sclerosis genetics and inform future screening programs.
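
    The FPRP itself (Wacholder et al.) is a one-line Bayes computation, which makes the role of the 10⁻⁶ prior clear:

    ```python
    # False-positive report probability from alpha, power, and prior probability.
    def fprp(alpha, power, prior):
        return alpha * (1 - prior) / (alpha * (1 - prior) + power * prior)

    # A nominally significant result at alpha = 0.05 and 80% power is almost
    # certainly false under the random-SNP prior of 1e-6:
    print(fprp(0.05, 0.80, 1e-6))    # ~0.99998; noteworthiness needs FPRP < 0.2
    ```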

  12. Application of Reverse Transcriptase-PCR (RT-PCR) for rapid detection of viable Escherichia coli in drinking water samples.

    PubMed

    Molaee, Neda; Abtahi, Hamid; Ghannadzadeh, Mohammad Javad; Karimi, Masoude; Ghaznavi-Rad, Ehsanollah

    2015-01-01

    Polymerase chain reaction (PCR) is preferred to other methods for detecting Escherichia coli (E. coli) in water in terms of speed, accuracy and efficiency. False positive results are considered the major disadvantage of PCR. For this reason, reverse transcriptase-polymerase chain reaction (RT-PCR) can be used to solve this problem. The aim of the present study was to determine the efficiency of RT-PCR for rapid detection of viable Escherichia coli in drinking water samples and to enhance its sensitivity through the application of different filter membranes. Specific primers were designed for the 16S rRNA and elongation factor Tu (EF-Tu) genes. Different concentrations of bacteria were passed through FHLP and HAWP filters. Then, RT-PCR was performed using the 16S rRNA and EF-Tu primers. Contamination of 10 wells in Arak city was determined by RT-PCR. To evaluate RT-PCR efficiency, the results were compared with the most probable number (MPN) method. RT-PCR was able to detect bacteria at different concentrations. Application of the EF-Tu primers reduced false positive results compared to the 16S rRNA primers. The FHLP hydrophobic filters have a higher ability to absorb bacteria than the HAWP hydrophilic filters, so the use of hydrophobic filters will increase the sensitivity of RT-PCR. RT-PCR shows a higher sensitivity compared to the conventional water contamination detection method. Unlike PCR, RT-PCR does not lead to false positive results, and the use of EF-Tu primers can further reduce their incidence. Furthermore, hydrophobic filters have a higher ability to absorb bacteria compared to hydrophilic filters.

  13. Heidelberg Retina Tomography Analysis in Optic Disks with Anatomic Particularities

    PubMed Central

    Alexandrescu, C; Pascu, R; Ilinca, R; Popescu, V; Ciuluvica, R; Voinea, L; Celea, C

    2010-01-01

    Due to its objectivity, reproducibility and predictive value, confirmed by many large-scale statistical clinical studies, Heidelberg Retina Tomography has become one of the most widely used computerized image analyses of the optic disc in glaucoma. It has been signaled, though, that the diagnostic value of Moorfields Regression Analysis and the Glaucoma Probability Score decreases when analyzing optic discs of extreme sizes. The number of false positive results increases in cases of megalopapillae, and the number of false negative results increases in cases of small optic discs. The present paper is a review of the aspects one should take into account when analyzing an HRT result for an optic disc with anatomic particularities. PMID:21254731

  14. Validity of Models for Predicting BRCA1 and BRCA2 Mutations

    PubMed Central

    Parmigiani, Giovanni; Chen, Sining; Iversen, Edwin S.; Friebel, Tara M.; Finkelstein, Dianne M.; Anton-Culver, Hoda; Ziogas, Argyrios; Weber, Barbara L.; Eisen, Andrea; Malone, Kathleen E.; Daling, Janet R.; Hsu, Li; Ostrander, Elaine A.; Peterson, Leif E.; Schildkraut, Joellen M.; Isaacs, Claudine; Corio, Camille; Leondaridis, Leoni; Tomlinson, Gail; Amos, Christopher I.; Strong, Louise C.; Berry, Donald A.; Weitzel, Jeffrey N.; Sand, Sharon; Dutson, Debra; Kerber, Rich; Peshkin, Beth N.; Euhus, David M.

    2008-01-01

    Background Deleterious mutations of the BRCA1 and BRCA2 genes confer susceptibility to breast and ovarian cancer. At least 7 models for estimating the probabilities of having a mutation are used widely in clinical and scientific activities; however, the merits and limitations of these models are not fully understood. Objective To systematically quantify the accuracy of the following publicly available models to predict mutation carrier status: BRCAPRO, family history assessment tool, Finnish, Myriad, National Cancer Institute, University of Pennsylvania, and Yale University. Design Cross-sectional validation study, using model predictions and BRCA1 or BRCA2 mutation status of patients different from those used to develop the models. Setting Multicenter study across Cancer Genetics Network participating centers. Patients 3 population-based samples of participants in research studies and 8 samples from genetic counseling clinics. Measurements Discrimination between individuals testing positive for a mutation in BRCA1 or BRCA2 from those testing negative, as measured by the c-statistic, and sensitivity and specificity of model predictions. Results The 7 models differ in their predictions. The better-performing models have a c-statistic around 80%. BRCAPRO has the largest c-statistic overall and in all but 2 patient subgroups, although the margin over other models is narrow in many strata. Outside of high-risk populations, all models have high false-negative and false-positive rates across a range of probability thresholds used to refer for mutation testing. Limitation Three recently published models were not included. Conclusions All models identify women who probably carry a deleterious mutation of BRCA1 or BRCA2 with adequate discrimination to support individualized genetic counseling, although discrimination varies across models and populations. PMID:17909205

  15. Robust Detection of Rare Species Using Environmental DNA: The Importance of Primer Specificity

    PubMed Central

    Wilcox, Taylor M.; McKelvey, Kevin S.; Young, Michael K.; Jane, Stephen F.; Lowe, Winsor H.; Whiteley, Andrew R.; Schwartz, Michael K.

    2013-01-01

    Environmental DNA (eDNA) is being rapidly adopted as a tool to detect rare animals. Quantitative PCR (qPCR) using probe-based chemistries may represent a particularly powerful tool because of the method’s sensitivity, specificity, and potential to quantify target DNA. However, there has been little work understanding the performance of these assays in the presence of closely related, sympatric taxa. If related species cause any cross-amplification or interference, false positives and negatives may be generated. These errors can be disastrous if false positives lead to overestimate the abundance of an endangered species or if false negatives prevent detection of an invasive species. In this study we test factors that influence the specificity and sensitivity of TaqMan MGB assays using co-occurring, closely related brook trout (Salvelinus fontinalis) and bull trout (S. confluentus) as a case study. We found qPCR to be substantially more sensitive than traditional PCR, with a high probability of detection at concentrations as low as 0.5 target copies/µl. We also found that number and placement of base pair mismatches between the Taqman MGB assay and non-target templates was important to target specificity, and that specificity was most influenced by base pair mismatches in the primers, rather than in the probe. We found that insufficient specificity can result in both false positive and false negative results, particularly in the presence of abundant related species. Our results highlight the utility of qPCR as a highly sensitive eDNA tool, and underscore the importance of careful assay design. PMID:23555689

  16. Robust detection of rare species using environmental DNA: the importance of primer specificity.

    PubMed

    Wilcox, Taylor M; McKelvey, Kevin S; Young, Michael K; Jane, Stephen F; Lowe, Winsor H; Whiteley, Andrew R; Schwartz, Michael K

    2013-01-01

    Environmental DNA (eDNA) is being rapidly adopted as a tool to detect rare animals. Quantitative PCR (qPCR) using probe-based chemistries may represent a particularly powerful tool because of the method's sensitivity, specificity, and potential to quantify target DNA. However, there has been little work understanding the performance of these assays in the presence of closely related, sympatric taxa. If related species cause any cross-amplification or interference, false positives and negatives may be generated. These errors can be disastrous if false positives lead to overestimate the abundance of an endangered species or if false negatives prevent detection of an invasive species. In this study we test factors that influence the specificity and sensitivity of TaqMan MGB assays using co-occurring, closely related brook trout (Salvelinus fontinalis) and bull trout (S. confluentus) as a case study. We found qPCR to be substantially more sensitive than traditional PCR, with a high probability of detection at concentrations as low as 0.5 target copies/µl. We also found that number and placement of base pair mismatches between the Taqman MGB assay and non-target templates was important to target specificity, and that specificity was most influenced by base pair mismatches in the primers, rather than in the probe. We found that insufficient specificity can result in both false positive and false negative results, particularly in the presence of abundant related species. Our results highlight the utility of qPCR as a highly sensitive eDNA tool, and underscore the importance of careful assay design.
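
    A back-of-envelope Poisson argument shows why detection near 0.5 copies/µl is already demanding; the reaction volume and replicate count below are assumptions, and per-copy amplification is taken as perfect:

    ```python
    # Probability that at least one template molecule lands in a qPCR reaction.
    import numpy as np

    conc, volume, replicates = 0.5, 5.0, 3        # copies/ul, ul/reaction, replicates
    p_one_rxn = 1 - np.exp(-conc * volume)        # P(>=1 copy in a single reaction)
    p_detect = 1 - (1 - p_one_rxn) ** replicates  # P(detection across replicates)
    print(f"per-reaction {p_one_rxn:.2f}, overall {p_detect:.3f}")
    ```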

  17. Optimum detection of tones transmitted by a spacecraft

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Shihabi, M. M.; Moon, T.

    1995-01-01

    The performance of a scheme proposed for automated routine monitoring of deep-space missions is presented. The scheme uses four different tones (sinusoids) transmitted from the spacecraft (S/C) to a ground station with the positive identification of each of them used to indicate different states of the S/C. Performance is measured in terms of detection probability versus false alarm probability with detection signal-to-noise ratio as a parameter. The cases where the phase of the received tone is unknown and where both the phase and frequency of the received tone are unknown are treated separately. The decision rules proposed for detecting the tones are formulated from average-likelihood ratio and maximum-likelihood ratio tests, the former resulting in optimum receiver structures.
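
    For the unknown-phase case, the standard textbook baseline gives the shape of such curves: a quadrature (envelope) detector's statistic is chi-squared with 2 degrees of freedom under noise only and noncentral chi-squared under signal plus noise. A sketch of that baseline (not the paper's exact receiver structures):

    ```python
    # Detection probability vs. false-alarm probability for a noncoherent tone detector.
    from scipy.stats import chi2, ncx2

    def detection_probability(pfa, snr_linear):
        threshold = chi2.isf(pfa, df=2)               # threshold set by the FA rate
        return ncx2.sf(threshold, df=2, nc=2 * snr_linear)

    for pfa in (1e-2, 1e-4, 1e-6):
        print(pfa, detection_probability(pfa, snr_linear=10 ** (12 / 10)))  # 12 dB SNR
    ```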

  18. False-Positive Error Rates for Reliable Digit Span and Auditory Verbal Learning Test Performance Validity Measures in Amnestic Mild Cognitive Impairment and Early Alzheimer Disease.

    PubMed

    Loring, David W; Goldstein, Felicia C; Chen, Chuqing; Drane, Daniel L; Lah, James J; Zhao, Liping; Larrabee, Glenn J

    2016-06-01

    The objective is to examine failure on three embedded performance validity tests [Reliable Digit Span (RDS), Auditory Verbal Learning Test (AVLT) logistic regression, and AVLT recognition memory] in early Alzheimer disease (AD; n = 178), amnestic mild cognitive impairment (MCI; n = 365), and cognitively intact age-matched controls (n = 206). Neuropsychological tests scores were obtained from subjects participating in the Alzheimer's Disease Neuroimaging Initiative (ADNI). RDS failure using a ≤7 RDS threshold was 60/178 (34%) for early AD, 52/365 (14%) for MCI, and 17/206 (8%) for controls. A ≤6 RDS criterion reduced this rate to 24/178 (13%) for early AD, 15/365 (4%) for MCI, and 7/206 (3%) for controls. AVLT logistic regression probability of ≥.76 yielded unacceptably high false-positive rates in both clinical groups [early AD = 149/178 (79%); MCI = 159/365 (44%)] but not cognitively intact controls (13/206, 6%). AVLT recognition criterion of ≤9/15 classified 125/178 (70%) of early AD, 155/365 (42%) of MCI, and 18/206 (9%) of control scores as invalid, which decreased to 66/178 (37%) for early AD, 46/365 (13%) for MCI, and 10/206 (5%) for controls when applying a ≤5/15 criterion. Despite high false-positive rates across individual measures and thresholds, combining RDS ≤ 6 and AVLT recognition ≤9/15 classified only 9/178 (5%) of early AD and 4/365 (1%) of MCI patients as invalid performers. Embedded validity cutoffs derived from mixed clinical groups produce unacceptably high false-positive rates in MCI and early AD. Combining embedded PVT indicators lowers the false-positive rate. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. False-Positive Head-Impulse Test in Cerebellar Ataxia

    PubMed Central

    Kremmyda, Olympia; Kirchner, Hanni; Glasauer, Stefan; Brandt, Thomas; Jahn, Klaus; Strupp, Michael

    2012-01-01

    The objective of this study was to compare the findings of the bedside head-impulse test (HIT), passive head rotation gain, and caloric irrigation in patients with cerebellar ataxia (CA). In 16 patients with CA and bilaterally pathological bedside HIT, vestibuloocular reflex (VOR) gains were measured during HIT and passive head rotation by scleral search coil technique. Eight of the patients had pathologically reduced caloric responsiveness, while the other eight had normal caloric responses. Those with normal calorics showed a slightly reduced HIT gain (mean ± SD: 0.73 ± 0.15). In those with pathological calorics, gains 80 and 100 ms after the HIT as well as the passive rotation VOR gains were significantly lower. The corrective saccade after head turn occurred earlier in patients with pathological calorics (111 ± 62 ms after onset of the HIT) than in those with normal calorics (191 ± 17 ms, p = 0.0064). We identified two groups of patients with CA: those with an isolated moderate HIT deficit only, probably due to floccular dysfunction, and those with combined HIT, passive rotation, and caloric deficit, probably due to a peripheral vestibular deficit. From a clinical point of view, these results show that the bedside HIT alone can be false-positive for establishing a diagnosis of a bilateral peripheral vestibular deficit in patients with CA. PMID:23162531

  20. Beyond statistical inference: A decision theory for science

    PubMed Central

    KILLEEN, PETER R.

    2008-01-01

    Traditional null hypothesis significance testing does not yield the probability of the null or its alternative and, therefore, cannot logically ground scientific decisions. The decision theory proposed here calculates the expected utility of an effect on the basis of (1) the probability of replicating it and (2) a utility function on its size. It takes significance tests—which place all value on the replicability of an effect and none on its magnitude—as a special case, one in which the cost of a false positive is revealed to be an order of magnitude greater than the value of a true positive. More realistic utility functions credit both replicability and effect size, integrating them for a single index of merit. The analysis incorporates opportunity cost and is consistent with alternate measures of effect size, such as r2 and information transmission, and with Bayesian model selection criteria. An alternate formulation is functionally equivalent to the formal theory, transparent, and easy to compute. PMID:17201351

  1. Beyond statistical inference: a decision theory for science.

    PubMed

    Killeen, Peter R

    2006-08-01

    Traditional null hypothesis significance testing does not yield the probability of the null or its alternative and, therefore, cannot logically ground scientific decisions. The decision theory proposed here calculates the expected utility of an effect on the basis of (1) the probability of replicating it and (2) a utility function on its size. It takes significance tests--which place all value on the replicability of an effect and none on its magnitude--as a special case, one in which the cost of a false positive is revealed to be an order of magnitude greater than the value of a true positive. More realistic utility functions credit both replicability and effect size, integrating them for a single index of merit. The analysis incorporates opportunity cost and is consistent with alternate measures of effect size, such as r2 and information transmission, and with Bayesian model selection criteria. An alternate formulation is functionally equivalent to the formal theory, transparent, and easy to compute.

  2. The efficacy of protoporphyrin as a predictive biomarker for lead exposure in canvasback ducks: effect of sample storage time

    USGS Publications Warehouse

    Franson, J.C.; Hohman, W.L.; Moore, J.L.; Smith, M.R.

    1996-01-01

    We used 363 blood samples collected from wild canvasback ducks (Aythya valisineria) at Catahoula Lake, Louisiana, U.S.A. to evaluate the effect of sample storage time on the efficacy of erythrocytic protoporphyrin as an indicator of lead exposure. The protoporphyrin concentration of each sample was determined by hematofluorometry within 5 min of blood collection and after refrigeration at 4 °C for 24 and 48 h. All samples were analyzed for lead by atomic absorption spectrophotometry. Based on a blood lead concentration of ≥0.2 ppm wet weight as positive evidence for lead exposure, the protoporphyrin technique resulted in overall error rates of 29%, 20%, and 19% and false negative error rates of 47%, 29% and 25% when hematofluorometric determinations were made on blood at 5 min, 24 h, and 48 h, respectively. False positive error rates were less than 10% for all three measurement times. The accuracy of the 24-h erythrocytic protoporphyrin classification of blood samples as positive or negative for lead exposure was significantly greater than the 5-min classification, but no improvement in accuracy was gained when samples were tested at 48 h. The false negative errors were probably due, at least in part, to the lag time between lead exposure and the increase of blood protoporphyrin concentrations. False negatives resulted in an underestimation of the true number of canvasbacks exposed to lead, indicating that hematofluorometry provides a conservative estimate of lead exposure.

  3. Nomogram for prediction of level 2 axillary lymph node metastasis in proven level 1 node-positive breast cancer patients.

    PubMed

    Jiang, Yanlin; Xu, Hong; Zhang, Hao; Ou, Xunyan; Xu, Zhen; Ai, Liping; Sun, Lisha; Liu, Caigang

    2017-09-22

    The current management of the axilla in level 1 node-positive breast cancer patients is axillary lymph node dissection regardless of the status of the level 2 axillary lymph nodes. The goal of this study was to develop a nomogram predicting the probability of level 2 axillary lymph node metastasis (L-2-ALNM) in patients with level 1 axillary node-positive breast cancer. We reviewed the records of 974 patients with pathology-confirmed level 1 node-positive breast cancer between 2010 and 2014 at the Liaoning Cancer Hospital and Institute. The patients were randomized 1:1 and divided into a modeling group and a validation group. Clinical and pathological features of the patients were assessed with uni- and multivariate logistic regression. A nomogram based on independent predictors for the L-2-ALNM identified by multivariate logistic regression was constructed. Independent predictors of L-2-ALNM by the multivariate logistic regression analysis included tumor size, Ki-67 status, histological grade, and number of positive level 1 axillary lymph nodes. The areas under the receiver operating characteristic curve of the modeling set and the validation set were 0.828 and 0.816, respectively. The false-negative rates of the L-2-ALNM nomogram were 1.82% and 7.41% for the predicted probability cut-off points of < 6% and < 10%, respectively, when applied to the validation group. Our nomogram could help predict L-2-ALNM in patients with level 1 axillary lymph node metastasis. Patients with a low probability of L-2-ALNM could be spared level 2 axillary lymph node dissection, thereby reducing postoperative morbidity.
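
    A sketch of how such a nomogram is applied in practice: a logistic model over the four reported predictors yields a predicted probability, which is compared against a cut-off. The coefficients below are hypothetical stand-ins, since the abstract does not report the fitted weights:

        import math

        # Hypothetical logistic-regression weights for the four predictors.
        COEF = {"intercept": -3.0, "tumor_size_cm": 0.35, "ki67_high": 0.8,
                "grade_3": 0.6, "n_pos_level1": 0.45}

        def l2_alnm_probability(tumor_size_cm, ki67_high, grade_3, n_pos_level1):
            """Predicted probability of level 2 axillary node metastasis."""
            z = (COEF["intercept"]
                 + COEF["tumor_size_cm"] * tumor_size_cm
                 + COEF["ki67_high"] * ki67_high
                 + COEF["grade_3"] * grade_3
                 + COEF["n_pos_level1"] * n_pos_level1)
            return 1.0 / (1.0 + math.exp(-z))

        # A patient below the chosen cut-off (e.g. 6%) would be spared dissection.
        p = l2_alnm_probability(tumor_size_cm=1.5, ki67_high=0, grade_3=0,
                                n_pos_level1=1)
        print(p, "spare level 2 dissection" if p < 0.06 else "dissect level 2")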

  4. Why We Should Not Be Indifferent to Specification Choices for Difference-in-Differences.

    PubMed

    Ryan, Andrew M; Burgess, James F; Dimick, Justin B

    2015-08-01

    To evaluate the effects of specification choices on the accuracy of estimates in difference-in-differences (DID) models. Process-of-care quality data from Hospital Compare between 2003 and 2009. We performed a Monte Carlo simulation experiment to estimate the effect of an imaginary policy on quality. The experiment was performed for three different scenarios in which the probability of treatment was (1) unrelated to pre-intervention performance; (2) positively correlated with pre-intervention levels of performance; and (3) positively correlated with pre-intervention trends in performance. We estimated alternative DID models that varied with respect to the choice of data intervals, the comparison group, and the method of obtaining inference. We assessed estimator bias as the mean absolute deviation between estimated program effects and their true value. We evaluated the accuracy of inferences through statistical power and rates of false rejection of the null hypothesis. Performance of alternative specifications varied dramatically when the probability of treatment was correlated with pre-intervention levels or trends. In these cases, propensity score matching resulted in much more accurate point estimates. The use of permutation tests resulted in lower false rejection rates for the highly biased estimators, but the use of clustered standard errors resulted in slightly lower false rejection rates for the matching estimators. When treatment and comparison groups differed on pre-intervention levels or trends, our results supported specifications for DID models that include matching for more accurate point estimates and models using clustered standard errors or permutation tests for better inference. Based on our findings, we propose a checklist for DID analysis. © Health Research and Educational Trust.
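
    A compact sketch of the basic 2x2 DID estimator with unit-level permutation inference, one of the inference choices the authors evaluate; the simulated panel and the effect size of 2 points are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        def did_estimate(y, treated, post):
            """Canonical 2x2 difference-in-differences estimate."""
            return ((y[treated & post].mean() - y[treated & ~post].mean())
                    - (y[~treated & post].mean() - y[~treated & ~post].mean()))

        # Hypothetical panel: 200 units observed pre and post, half treated.
        n = 200
        treated = np.repeat(rng.random(n) < 0.5, 2)      # unit-level assignment
        post = np.tile([False, True], n)
        y = rng.normal(70.0, 5.0, 2 * n) + 2.0 * (treated & post)  # true effect = 2

        est = did_estimate(y, treated, post)

        # Permutation test: reassign treatment across units to build the null
        # distribution of the estimator, then compare the observed estimate.
        null = [did_estimate(y, np.repeat(rng.permutation(treated[::2]), 2), post)
                for _ in range(2000)]
        p_value = np.mean(np.abs(null) >= abs(est))
        print(est, p_value)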

  5. Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    PubMed Central

    Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.

    2014-01-01

    This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
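
    For reference, a sketch of the classical Benjamini and Hochberg (1995) linear step-up procedure that serves as the comparison baseline above; the resampling-based empirical Bayes procedures themselves need the joint null distribution of the test statistics and are not reproduced here:

        import numpy as np

        def benjamini_hochberg(pvals, q=0.05):
            """Linear step-up procedure controlling FDR = E[Vn/(Vn + Sn)] at q."""
            p = np.asarray(pvals)
            m = p.size
            order = np.argsort(p)
            below = p[order] <= q * np.arange(1, m + 1) / m
            k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
            reject = np.zeros(m, dtype=bool)
            reject[order[:k]] = True       # reject the k smallest p-values
            return reject

        pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.21, 0.5]
        print(benjamini_hochberg(pvals, q=0.05))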

  6. Oral challenge test with sodium metabisulfite in steroid-dependent asthmatic patients.

    PubMed

    Prieto, L; Juyol, M; Paricio, A; Martínez, M A; Palop, J; Castro, J

    1988-01-01

    Oral challenge tests were carried out with sodium metabisulfite solution doses of 0.5, 1, 10, 25, 50 mg and encapsulated doses of 100 and 200 mg, as well as with lactose-placebo, on 44 non-atopic patients with steroid-dependent bronchial asthma, without clinical evidence of intolerance to these agents. Only those patients with an acceptable and not very labile pulmonary function were tested. A single-blind challenge protocol was performed in 22 patients (sodium metabisulfite solutions at pH 2.2 to 2.6) and the positive responses were confirmed by double-blind challenge. The other 22 were tested directly in a double-blind manner (pH 4). Initially, 6/44 presented a positive reaction. However, a careful analysis and the confirmation by double-blind challenge of the positive responses obtained with the single-blind test allowed us to identify 4 false positive responses. Thus, the true prevalence of sulfite sensitivity in our population is 4.5%. One patient with intolerance to sulfite agents also suffered from aspirin-induced asthma. The labile tendency of the pulmonary function of asthmatic patients may have contributed to some false positive reactions and probably explains the very high prevalence found in some studies. It does not appear that variations in pH decisively influence the result of the challenge test.

  7. The Effect of Scattering and Absorption on Noise from a Cavitating Noise Source in the Subsurface Ocean Layer.

    DTIC Science & Technology

    1981-06-01

    Excerpt (cleaned OCR fragment): thresholds are set for a detection probability of PD and an associated false alarm probability PFA (in dB). In the reference model, PFA is obtained by integrating the noise-only density over the region of decision space for which hypothesis H1 is chosen, PFA = ∫ p(w|H0) dw = Q(·) (Eq. 26). Similarly, the miss probability = 1 - detection probability is obtained by integrating the signal density over the complementary region. Expressions are also given for the input signal-to-noise ratio (Eq. 32) and the resulting probability of false alarm, PFA = Q[·] (Eq. 33).

  8. An empirical probability model of detecting species at low densities.

    PubMed

    Delaney, David G; Leung, Brian

    2010-06-01

    False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
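
    A sketch of the detection-curve idea: a logistic model links target density and sampling effort to the probability of detection, and the false-negative probability is its complement. The coefficients are invented for illustration, not the fitted values from the field experiments:

        import math

        def p_detect(density, effort, b0=-2.0, b_density=1.5, b_effort=0.8):
            """Hypothetical logistic detection curve for a survey method."""
            z = b0 + b_density * math.log(density) + b_effort * math.log(effort)
            return 1.0 / (1.0 + math.exp(-z))

        # False-negative probability falls as sampling effort rises, so the
        # effort needed for a given target density can be read off the curve.
        for effort in (1, 5, 25, 125):
            print(effort, 1.0 - p_detect(density=0.5, effort=effort))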

  9. Experimental evaluation of fingerprint verification system based on double random phase encoding

    NASA Astrophysics Data System (ADS)

    Suzuki, Hiroyuki; Yamaguchi, Masahiro; Yachida, Masuyoshi; Ohyama, Nagaaki; Tashima, Hideaki; Obi, Takashi

    2006-03-01

    We proposed a smart card holder authentication system that combines fingerprint verification with PIN verification by applying a double random phase encoding scheme. In this system, the probability of accurate verification of an authorized individual reduces when the fingerprint is shifted significantly. In this paper, a review of the proposed system is presented and preprocessing for improving the false rejection rate is proposed. In the proposed method, the position difference between two fingerprint images is estimated by using an optimized template for core detection. When the estimated difference exceeds the permissible level, the user inputs the fingerprint again. The effectiveness of the proposed method is confirmed by a computational experiment; its results show that the false rejection rate is improved.

  10. Diagnostic accuracy of an ultrasonic multiple transducer cardiac imaging system

    NASA Technical Reports Server (NTRS)

    Popp, R. L.; Brown, O. R.; Harrison, D. C.

    1975-01-01

    An ultrasonic multiple-transducer imaging system for intracardiac structure visualization is developed in order to simplify visualization of the human heart in vivo without radiation hazard or invasion of the body. Results of the evaluation of the diagnostic accuracy of the devised system in a clinical setting for adult patients are presented and discussed. Criteria are presented for recognition of mitral valve prolapse, mitral stenosis, pericardial effusion, atrial septal defect, and left ventricular dyssynergy. The probable cause for false-positive and false-negative diagnoses is discussed. However, the system was unable to detect hypertrophic myopathy and congestive myopathy. Since only qualitative criteria were used, it was not possible to differentiate patients with left ventricular volume overload from patients without cardiac pathology.

  11. Heavy Metal Pollution Delineation Based on Uncertainty in a Coastal Industrial City in the Yangtze River Delta, China

    PubMed Central

    Zhao, Ruiying; Chen, Songchao; Zhou, Yue; Jin, Bin; Li, Yan

    2018-01-01

    Assessing heavy metal pollution and delineating pollution are the bases for evaluating pollution and determining a cost-effective remediation plan. Most existing studies are based on the spatial distribution of pollutants but ignore related uncertainty. In this study, eight heavy-metal concentrations (Cr, Pb, Cd, Hg, Zn, Cu, Ni, and As) were collected at 1040 sampling sites in a coastal industrial city in the Yangtze River Delta, China. The single pollution index (PI) and Nemerow integrated pollution index (NIPI) were calculated for every surface sample (0–20 cm) to assess the degree of heavy metal pollution. Ordinary kriging (OK) was used to map the spatial distribution of heavy metal content and NIPI. Then, we delineated composite heavy metal contamination based on the uncertainty produced by indicator kriging (IK). The results showed that mean values of all PIs and NIPIs were at safe levels. Heavy metals were most accumulated in the central portion of the study area. Based on IK, the spatial probability of composite heavy metal pollution was computed. The probability of composite contamination in the central core urban area was highest. A probability of 0.6 was found as the optimum probability threshold to delineate polluted areas from unpolluted areas for integrative heavy metal contamination. Results of pollution delineation based on uncertainty showed the proportion of false negative error areas was 6.34%, while the proportion of false positive error areas was 0.86%. The accuracy of the classification was 92.80%. This indicated the method we developed is a valuable tool for delineating heavy metal pollution. PMID:29642623

  12. Heavy Metal Pollution Delineation Based on Uncertainty in a Coastal Industrial City in the Yangtze River Delta, China.

    PubMed

    Hu, Bifeng; Zhao, Ruiying; Chen, Songchao; Zhou, Yue; Jin, Bin; Li, Yan; Shi, Zhou

    2018-04-10

    Assessing heavy metal pollution and delineating pollution are the bases for evaluating pollution and determining a cost-effective remediation plan. Most existing studies are based on the spatial distribution of pollutants but ignore related uncertainty. In this study, eight heavy-metal concentrations (Cr, Pb, Cd, Hg, Zn, Cu, Ni, and As) were collected at 1040 sampling sites in a coastal industrial city in the Yangtze River Delta, China. The single pollution index (PI) and Nemerow integrated pollution index (NIPI) were calculated for every surface sample (0-20 cm) to assess the degree of heavy metal pollution. Ordinary kriging (OK) was used to map the spatial distribution of heavy metal content and NIPI. Then, we delineated composite heavy metal contamination based on the uncertainty produced by indicator kriging (IK). The results showed that mean values of all PIs and NIPIs were at safe levels. Heavy metals were most accumulated in the central portion of the study area. Based on IK, the spatial probability of composite heavy metal pollution was computed. The probability of composite contamination in the central core urban area was highest. A probability of 0.6 was found as the optimum probability threshold to delineate polluted areas from unpolluted areas for integrative heavy metal contamination. Results of pollution delineation based on uncertainty showed the proportion of false negative error areas was 6.34%, while the proportion of false positive error areas was 0.86%. The accuracy of the classification was 92.80%. This indicated the method we developed is a valuable tool for delineating heavy metal pollution.
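
    A toy version of the uncertainty-based delineation step: threshold an indicator-kriging-style probability surface at the reported optimum of 0.6 and tabulate false negative and false positive areas against a (here simulated) ground truth:

        import numpy as np

        rng = np.random.default_rng(1)

        # Stand-ins for the study's outputs: a true contamination indicator and
        # an IK-style probability-of-pollution surface on a 50 x 50 grid.
        truth = rng.random((50, 50)) < 0.2
        prob = np.clip(truth * 0.5 + rng.normal(0.25, 0.15, (50, 50)), 0.0, 1.0)

        delineated = prob >= 0.6       # the optimum threshold found in the study

        false_neg_area = np.mean(truth & ~delineated)   # polluted, not delineated
        false_pos_area = np.mean(~truth & delineated)   # delineated, but clean
        accuracy = np.mean(truth == delineated)
        print(false_neg_area, false_pos_area, accuracy)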

  13. When can scientific studies promote consensus among conflicting stakeholders?

    PubMed

    Small, Mitchell J; Güvenç, Ümit; DeKay, Michael L

    2014-11-01

    While scientific studies may help conflicting stakeholders come to agreement on a best management option or policy, often they do not. We review the factors affecting trust in the efficacy and objectivity of scientific studies in an analytical-deliberative process where conflict is present, and show how they may be incorporated in an extension to the traditional Bayesian decision model. The extended framework considers stakeholders who differ in their prior beliefs regarding the probability of possible outcomes (in particular, whether a proposed technology is hazardous), differ in their valuations of these outcomes, and differ in their assessment of the ability of a proposed study to resolve the uncertainty in the outcomes and their hazards--as measured by their perceived false positive and false negative rates for the study. The Bayesian model predicts stakeholder-specific preposterior probabilities of consensus, as well as pathways for increasing these probabilities, providing important insights into the value of scientific information in an analytic-deliberative decision process where agreement is sought. It also helps to identify the interactions among perceived risk and benefit allocations, scientific beliefs, and trust in proposed scientific studies when determining whether a consensus can be achieved. The article provides examples to illustrate the method, including an adaptation of a recent decision analysis for managing the health risks of electromagnetic fields from high voltage transmission lines. © 2014 Society for Risk Analysis.
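
    A sketch of the core Bayesian step: each stakeholder updates their prior that the technology is hazardous using their own perceived error rates for the proposed study, so the same study result can leave different stakeholders with different posteriors. The values are illustrative:

        def posterior_hazard(prior, false_pos, false_neg, study_says_hazard):
            """Posterior probability of hazard given a study result, under a
            stakeholder's perceived false positive/negative rates for the study."""
            if study_says_hazard:
                num = (1.0 - false_neg) * prior
                den = num + false_pos * (1.0 - prior)
            else:
                num = false_neg * prior
                den = num + (1.0 - false_pos) * (1.0 - prior)
            return num / den

        # Two stakeholders with the same prior see the same "no hazard" finding:
        print(posterior_hazard(0.7, 0.05, 0.05, False))  # trusts the study: ~0.11
        print(posterior_hazard(0.7, 0.40, 0.40, False))  # distrusts it: ~0.61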

  14. Estimation of the limit of detection using information theory measures.

    PubMed

    Fonollosa, Jordi; Vergara, Alexander; Huerta, Ramón; Marco, Santiago

    2014-01-31

    Definitions of the limit of detection (LOD) based on the probability of false positive and/or false negative errors have been proposed over the past years. Although such definitions are straightforward and valid for any kind of analytical system, proposed methodologies to estimate the LOD are usually simplified to signals with Gaussian noise. Additionally, there is a general misconception that two systems with the same LOD provide the same amount of information on the source regardless of the prior probability of presenting a blank/analyte sample. Based upon an analogy between an analytical system and a binary communication channel, in this paper we show that the amount of information that can be extracted from an analytical system depends on the probability of presenting the two different possible states. We propose a new definition of LOD utilizing information theory tools that deals with noise of any kind and allows the introduction of prior knowledge easily. Unlike most traditional LOD estimation approaches, the proposed definition is based on the amount of information that the chemical instrumentation system provides on the chemical information source. Our findings indicate that the benchmark of analytical systems based on the ability to provide information about the presence/absence of the analyte (our proposed approach) is a more general and proper framework, while converging to the usual values when dealing with Gaussian noise. Copyright © 2013 Elsevier B.V. All rights reserved.
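
    A sketch of the binary-channel analogy: the information an analytical system conveys about the presence or absence of analyte is the mutual information between the sample state and the detector's decision, which depends on the prior and not just on the error rates:

        from math import log2

        def h(p):
            """Binary entropy in bits."""
            return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

        def channel_information(prior_analyte, false_pos, false_neg):
            """Mutual information I(X;Y) = H(Y) - H(Y|X) between the sample state
            (blank/analyte) and the binary detection decision."""
            p1 = prior_analyte
            p_pos = p1 * (1 - false_neg) + (1 - p1) * false_pos  # P(decide present)
            return h(p_pos) - (p1 * h(false_neg) + (1 - p1) * h(false_pos))

        # Identical error rates convey different information at different priors.
        print(channel_information(0.5, 0.05, 0.05))    # ~0.71 bits
        print(channel_information(0.01, 0.05, 0.05))   # far less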

  15. On the synchronizability and detectability of random PPM sequences

    NASA Technical Reports Server (NTRS)

    Georghiades, Costas N.; Lin, Shu

    1987-01-01

    The problem of synchronization and detection of random pulse-position-modulation (PPM) sequences is investigated under the assumption of perfect slot synchronization. Maximum-likelihood PPM symbol synchronization and receiver algorithms are derived that make decisions based on both soft and hard data; these algorithms are seen to be easily implementable. Bounds derived on the symbol error probability as well as the probability of false synchronization indicate the existence of a rather severe performance floor, which can easily be the limiting factor in the overall system performance. The performance floor is inherent in the PPM format and random data and becomes more serious as the PPM alphabet size Q is increased. A way to eliminate the performance floor is suggested by inserting special PPM symbols in the random data stream.

  16. On the synchronizability and detectability of random PPM sequences

    NASA Technical Reports Server (NTRS)

    Georghiades, Costas N.

    1987-01-01

    The problem of synchronization and detection of random pulse-position-modulation (PPM) sequences is investigated under the assumption of perfect slot synchronization. Maximum likelihood PPM symbol synchronization and receiver algorithms are derived that make decisions based on both soft and hard data; these algorithms are seen to be easily implementable. Bounds were derived on the symbol error probability as well as the probability of false synchronization; these bounds indicate the existence of a rather severe performance floor, which can easily be the limiting factor in the overall system performance. The performance floor is inherent in the PPM format and random data and becomes more serious as the PPM alphabet size Q is increased. A way to eliminate the performance floor is suggested by inserting special PPM symbols in the random data stream.

  17. Informational need of emotional stress

    NASA Astrophysics Data System (ADS)

    Simonov, P. V.; Frolov, M. V.

    According to the informational theory of emotions[1], emotions in humans depend on the power of some need (motivation) and the estimation by the subject of the probability (possibility) of the need satisfaction (the goal achievement). Low probability of need satisfaction leads to negative emotions, actively minimized by the subject. Increased probability of satisfaction, as compared to an earlier forecast, generates positive emotions, which the subject tries to maximize, i.e. to enhance, to prolong, to repeat. The informational theory of emotions encompasses their reflective function, the laws of their appearance, the regulatory significance of emotions, and their role in the organization of behavior. The level of emotional stress influences the operator's performance. A decrease in the emotional tonus leads to drowsiness, lack of vigilance, missing of significant signals, and slower reactions. An extremely high stress level disorganizes the activity, complicating it with a trend toward incorrect actions and reactions to insignificant signals (false alarms). The neurophysiological mechanisms of the influence of emotions on perceptual activity and operator performance, as well as the significance of individuality, are discussed.

  18. A scan statistic to extract causal gene clusters from case-control genome-wide rare CNV data.

    PubMed

    Nishiyama, Takeshi; Takahashi, Kunihiko; Tango, Toshiro; Pinto, Dalila; Scherer, Stephen W; Takami, Satoshi; Kishino, Hirohisa

    2011-05-26

    Several statistical tests have been developed for analyzing genome-wide association data by incorporating gene pathway information in terms of gene sets. Using these methods, hundreds of gene sets are typically tested, and the tested gene sets often overlap. This overlapping greatly increases the probability of generating false positives, and the results obtained are difficult to interpret, particularly when many gene sets show statistical significance. We propose a flexible statistical framework to circumvent these problems. Inspired by spatial scan statistics for detecting clustering of disease occurrence in the field of epidemiology, we developed a scan statistic to extract disease-associated gene clusters from a whole gene pathway. Extracting one or a few significant gene clusters from a global pathway limits the overall false positive probability, which results in increased statistical power, and facilitates the interpretation of test results. In the present study, we applied our method to genome-wide association data for rare copy-number variations, which have been strongly implicated in common diseases. Application of our method to a simulated dataset demonstrated the high accuracy of this method in detecting disease-associated gene clusters in a whole gene pathway. The scan statistic approach proposed here shows a high level of accuracy in detecting gene clusters in a whole gene pathway. This study has provided a sound statistical framework for analyzing genome-wide rare CNV data by incorporating topological information on the gene pathway.
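
    A simplified illustration of the scan idea: slide a window along the pathway, take the maximum case-control excess, and calibrate that maximum by permutation, so that extracting one cluster controls the overall false positive probability. This is a crude stand-in for the paper's likelihood-ratio scan statistic:

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical per-gene CNV hit counts in cases vs. controls, with a
        # disease-associated cluster planted at genes 40-45.
        cases = rng.poisson(1.0, 100)
        cases[40:46] += rng.poisson(3.0, 6)
        controls = rng.poisson(1.0, 100)

        def scan_stat(a, b, width=6):
            """Maximum windowed excess of a over b, and where it occurs."""
            diffs = [a[i:i + width].sum() - b[i:i + width].sum()
                     for i in range(len(a) - width + 1)]
            return max(diffs), int(np.argmax(diffs))

        obs, where = scan_stat(cases, controls)

        # Permutation null: swap case/control counts gene-wise, re-scan, and
        # compare the observed maximum against the null maxima.
        null = [scan_stat(np.where(s, controls, cases),
                          np.where(s, cases, controls))[0]
                for s in (rng.random(100) < 0.5 for _ in range(500))]
        print(where, obs, np.mean(np.array(null) >= obs))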

  19. Interpretation of diagnostic data: 5. How to do it with simple maths.

    PubMed

    1983-11-01

    The use of simple maths with the likelihood ratio strategy fits in nicely with our clinical views. By making the most out of the entire range of diagnostic test results (i.e., several levels, each with its own likelihood ratio, rather than a single cut-off point and a single ratio) and by permitting us to keep track of the likelihood that a patient has the target disorder at each point along the diagnostic sequence, this strategy allows us to place patients at an extremely high or an extremely low likelihood of disease. Thus, the numbers of patients with ultimately false-positive results (who suffer the slings of labelling and the arrows of needless therapy) and of those with ultimately false-negative results (who therefore miss their chance for diagnosis and, possibly, efficacious therapy) will be dramatically reduced. The following guidelines will be useful in interpreting signs, symptoms and laboratory tests with the likelihood ratio strategy: Seek out, and demand from the clinical or laboratory experts who ought to know, the likelihood ratios for key symptoms and signs, and several levels (rather than just the positive and negative results) of diagnostic test results. Identify, when feasible, the logical sequence of diagnostic tests. Estimate the pretest probability of disease for the patient, and, using either the nomogram or the conversion formulas, apply the likelihood ratio that corresponds to the first diagnostic test result. While remembering that the resulting post-test probability or odds from the first test becomes the pretest probability or odds for the next diagnostic test, repeat the process for all the pertinent symptoms, signs and laboratory studies that pertain to the target disorder. However, these combinations may not be independent, and convergent diagnostic tests, if treated as independent, will combine to overestimate the final post-test probability of disease. You are now far more sophisticated in interpreting diagnostic tests than most of your teachers. In the last part of our series we will show you some rather complex strategies that combine diagnosis and therapy, quantify our as yet nonquantified ideas about use, and require the use of at least a hand calculator.

  20. Interpretation of diagnostic data: 5. How to do it with simple maths.

    PubMed Central

    1983-01-01

    The use of simple maths with the likelihood ratio strategy fits in nicely with our clinical views. By making the most out of the entire range of diagnostic test results (i.e., several levels, each with its own likelihood ratio, rather than a single cut-off point and a single ratio) and by permitting us to keep track of the likelihood that a patient has the target disorder at each point along the diagnostic sequence, this strategy allows us to place patients at an extremely high or an extremely low likelihood of disease. Thus, the numbers of patients with ultimately false-positive results (who suffer the slings of labelling and the arrows of needless therapy) and of those with ultimately false-negative results (who therefore miss their chance for diagnosis and, possibly, efficacious therapy) will be dramatically reduced. The following guidelines will be useful in interpreting signs, symptoms and laboratory tests with the likelihood ratio strategy: Seek out, and demand from the clinical or laboratory experts who ought to know, the likelihood ratios for key symptoms and signs, and several levels (rather than just the positive and negative results) of diagnostic test results. Identify, when feasible, the logical sequence of diagnostic tests. Estimate the pretest probability of disease for the patient, and, using either the nomogram or the conversion formulas, apply the likelihood ratio that corresponds to the first diagnostic test result. While remembering that the resulting post-test probability or odds from the first test becomes the pretest probability or odds for the next diagnostic test, repeat the process for all the pertinent symptoms, signs and laboratory studies that pertain to the target disorder. However, these combinations may not be independent, and convergent diagnostic tests, if treated as independent, will combine to overestimate the final post-test probability of disease. You are now far more sophisticated in interpreting diagnostic tests than most of your teachers. In the last part of our series we will show you some rather complex strategies that combine diagnosis and therapy, quantify our as yet nonquantified ideas about use, and require the use of at least a hand calculator. PMID:6671182
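
    A sketch of the likelihood-ratio strategy in code: convert the pretest probability to odds, multiply through the likelihood ratios of successive findings, and convert back. As the article cautions, chaining ratios this way assumes the tests are conditionally independent:

        def post_test_probability(pretest_prob, *likelihood_ratios):
            """Chain likelihood ratios through the odds form of Bayes' theorem."""
            odds = pretest_prob / (1.0 - pretest_prob)
            for lr in likelihood_ratios:
                odds *= lr                   # post-test odds feed the next test
            return odds / (1.0 + odds)

        # 30% pretest probability, then a sign with LR 4.5 and a test result
        # falling in a level with LR 2.0 (illustrative numbers).
        print(post_test_probability(0.30, 4.5, 2.0))   # ~0.79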

  1. A false positive newborn screening result due to a complex allele carrying two frequent CF-causing variants.

    PubMed

    Bergougnoux, Anne; Boureau-Wirth, Amandine; Rouzier, Cécile; Altieri, Jean-Pierre; Verneau, Fanny; Larrieu, Lise; Koenig, Michel; Claustres, Mireille; Raynal, Caroline

    2016-05-01

    The detection of two frequent CFTR disease-causing variations in the context of a newborn screening program (NBS) usually leads to the diagnosis of cystic fibrosis (CF) and a relevant genetic counseling in the family. In the present study, CF-causing variants p.Phe508del (F508del) and c.3140-26A>G (3272-26A>G) were identified on a neonate with positive ImmunoReactive Trypsinogen test by the Elucigene™ CF30 kit. The CF diagnosis initially suggested, despite three inconclusive Sweat Chloride Tests (SCT), was finally ruled out after the familial segregation study combined with a negative SCT. Haplotype studies, based on the comparison of 80 p.Phe508del haplotypes, suggested a probable de novo occurrence of c.3140-26A>G on the p.Phe508del ancestral allele in this family. This false positive case emphasizes the importance of SCT in the NBS strategy. Moreover, it raises the need for familial segregation studies in CF and in overall molecular diagnosis strategy of autosomal recessive diseases. Copyright © 2016 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.

  2. An assessment of public health surveillance of Zika virus infection and potentially associated outcomes in Latin America.

    PubMed

    Bautista, Leonelo E; Herrera, Víctor M

    2018-05-24

    We evaluated whether outbreaks of Zika virus (ZIKV) infection, newborn microcephaly, and Guillain-Barré syndrome (GBS) in Latin America may be detected through current surveillance systems, and how cases detected through surveillance may increase health care burden. We estimated the sensitivity and specificity of surveillance case definitions using published data. We assumed a 10% ZIKV infection risk during a non-outbreak period and hypothetical increases in risk during an outbreak period. We used sensitivity and specificity estimates to correct for non-differential misclassification, and calculated a misclassification-corrected relative risk comparing both periods. To identify the smallest hypothetical increase in risk resulting in a detectable outbreak we compared the misclassification-corrected relative risk to the relative risk corresponding to the upper limit of the endemic channel (mean + 2 SD). We also estimated the proportion of false positive cases detected during the outbreak. We followed the same approach for microcephaly and GBS, but assumed the risk of ZIKV infection doubled during the outbreak, and ZIKV infection increased the risk of both diseases. ZIKV infection outbreaks were not detectable through non-serological surveillance. Outbreaks were detectable through serologic surveillance if infection risk increased by at least 10%, but more than 50% of all cases were false positive. Outbreaks of severe microcephaly were detected if ZIKV infection increased prevalence of this condition by at least 24.0 times. When ZIKV infection did not increase the prevalence of severe microcephaly, 34.7 to 82.5% of all cases were false positive, depending on diagnostic accuracy. GBS outbreaks were detected if ZIKV infection increased the GBS risk by at least seven times. For optimal GBS diagnosis accuracy, the proportion of false positive cases ranged from 29 to 54% and from 45 to 56% depending on the incidence of GBS mimics. Current surveillance systems have a low probability of detecting outbreaks of ZIKV infection, severe microcephaly, and GBS, and could result in significant increases in health care burden, due to the detection of large numbers of false positive cases. In view of these limitations, Latin American countries should consider alternative options for surveillance.
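
    A sketch of the misclassification-correction step using the standard Rogan-Gladen estimator (the paper's exact formulation may differ); the sensitivity, specificity, and observed proportions below are hypothetical:

        def rogan_gladen(p_obs, sens, spec):
            """Correct an observed proportion for non-differential misclassification."""
            return (p_obs + spec - 1.0) / (sens + spec - 1.0)

        def corrected_rr(p_obs_outbreak, p_obs_baseline, sens, spec):
            """Misclassification-corrected relative risk, outbreak vs. baseline,
            assuming the case definition performs identically in both periods."""
            return (rogan_gladen(p_obs_outbreak, sens, spec)
                    / rogan_gladen(p_obs_baseline, sens, spec))

        # A serologic case definition with 90% sensitivity and 85% specificity;
        # observed "case" proportions of 24% (outbreak) vs. 22.5% (baseline).
        print(corrected_rr(0.24, 0.225, 0.90, 0.85))   # corrected RR = 1.2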

  3. A New Way to Confirm Planet Candidates

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-05-01

    What was the big deal behind the Kepler news conference yesterday? It's not just that the number of confirmed planets found by Kepler has more than doubled (though that's certainly exciting news!). What's especially interesting is the way in which these new planets were confirmed. [Figure: Number of planet discoveries by year since 1995, including previous non-Kepler discoveries (blue), previous Kepler discoveries (light blue) and the newly validated Kepler planets (orange). NASA Ames/W. Stenzel; Princeton University/T. Morton] No Need for Follow-Up: Before Kepler, the way we confirmed planet candidates was with follow-up observations. The candidate could be validated either by direct imaging (which is rare) or by obtaining a large number of radial-velocity measurements of the wobble of the planet's host star due to the planet's orbit. But once Kepler started producing planet candidates, these approaches to validation became less feasible. A lot of Kepler candidates are small and orbit faint stars, making follow-up observations difficult or impossible. This problem is what inspired the development of what's known as probabilistic validation, an analysis technique that involves assessing the likelihood that the candidate's signal is caused by various false-positive scenarios. Using this technique allows astronomers to estimate the likelihood of a candidate signal being a true planet detection; if that likelihood is high enough, the planet candidate can be confirmed without the need for follow-up observations. [Figure: A breakdown of the catalog of Kepler Objects of Interest. Just over half had previously been identified as false positives or confirmed as candidates. 1284 are newly validated, and another 455 have FPP of 10-90%. Morton et al. 2016] Probabilistic validation has been used in the past to confirm individual planet candidates in Kepler data, but now Timothy Morton (Princeton University) and collaborators have taken this to a new level: they developed the first code designed to do fully automated batch processing of a large number of candidates. In a recently published study, the results of which were announced yesterday, the team applied their code to the entire catalog of 7,470 Kepler objects of interest. New Planets and False Positives: The team's code was able to successfully evaluate the total false-positive probability (FPP) for 7,056 of the objects of interest. Of these, 428 objects previously identified as candidates were found to have FPP of more than 90%, suggesting that they are most likely false positives. [Figure: Periods and radii of candidate and confirmed planets in the Kepler Objects of Interest catalog. Blue circles have previously been identified as confirmed planets. Candidates (orange) are shaded by false positive probability; more transparent means more likely to be a false positive. Morton et al. 2016] In contrast, 1,935 candidates were found to have FPP of less than 1%, and were therefore declared validated planets. Of these confirmations, 1,284 were previously unconfirmed, more than doubling Kepler's previous catalog of 1,041 confirmed planets. Morton and collaborators believe that 9 of these newly confirmed planets may fall within the habitable zone of their host stars. While the announcement of 1,284 newly confirmed planets is huge, the analysis presented in this study is the real news. The code used is publicly available and can be applied to any transiting exoplanet candidate. This means that this analysis technique can be used to find batches of exoplanets in data from the extended Kepler mission (K2) or from the future TESS and PLATO transit missions. Citation: Timothy D. Morton et al 2016 ApJ 822 86. doi:10.3847/0004-637X/822/2/86
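
    A toy illustration of the probabilistic-validation bookkeeping (not the Morton et al. code, which models each astrophysical scenario in detail): the false positive probability is the posterior weight carried by the false-positive scenarios:

        def false_positive_probability(planet_prior, planet_like, fp_scenarios):
            """FPP from priors and likelihoods for the planet hypothesis and for
            each false-positive scenario (e.g. background eclipsing binaries)."""
            planet = planet_prior * planet_like
            fps = sum(prior * like for prior, like in fp_scenarios)
            return fps / (planet + fps)

        # Hypothetical numbers: the planet model fits the signal shape well,
        # while two false-positive scenarios fit it poorly.
        fpp = false_positive_probability(
            planet_prior=0.4, planet_like=0.9,
            fp_scenarios=[(0.5, 0.02), (0.1, 0.05)])
        print(fpp)   # validation in the paper required FPP < 1%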

  4. Establishing a sample-to-cut-off ratio for lab-diagnosis of hepatitis C virus in Indian context.

    PubMed

    Tiwari, Aseem K; Pandey, Prashant K; Negi, Avinash; Bagga, Ruchika; Shanker, Ajay; Baveja, Usha; Vimarsh, Raina; Bhargava, Richa; Dara, Ravi C; Rawat, Ganesh

    2015-01-01

    Lab-diagnosis of hepatitis C virus (HCV) is based on detecting specific antibodies by enzyme immuno-assay (EIA) or chemiluminescence immuno-assay (CIA). The Centers for Disease Control and Prevention reported that signal-to-cut-off (s/co) ratios in anti-HCV antibody tests like EIA/CIA can be used to predict the probable result of a supplemental test; above a certain s/co value the result is most likely a true HCV-positive, and below it the result is most likely a false positive. A prospective study was undertaken in patients in a tertiary care setting to establish this "certain" s/co value. The study was carried out in consecutive patients requiring HCV testing for screening/diagnosis and medical management. These samples were tested for anti-HCV on CIA (VITROS® Anti-HCV assay, Ortho-Clinical Diagnostics, New Jersey) to calculate the s/co value. The supplemental nucleic acid test used was polymerase chain reaction (PCR) (Abbott). PCR test results were used to define true negatives, false negatives, true positives, and false positives. Performance of different putative s/co ratios versus PCR was measured using sensitivity, specificity, positive predictive value, and negative predictive value, and the most appropriate s/co was chosen on the basis of the highest specificity at a sensitivity of at least 95%. An s/co ratio of ≥6 worked out to be over 95% sensitive and almost 92% specific in 438 consecutive patient samples tested. The s/co ratio of six can be used for lab-diagnosis of HCV infection; those with s/co higher than six can be diagnosed to have HCV infection without any need for supplemental assays.
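
    A sketch of the threshold-selection rule stated above: among candidate s/co cutoffs, keep those with sensitivity of at least 95% against the PCR reference and pick the one with the highest specificity. The simulated s/co values stand in for the 438 patient samples:

        import numpy as np

        def pick_sco_cutoff(sco, pcr_positive, min_sensitivity=0.95):
            """Highest-specificity cutoff subject to a sensitivity floor."""
            best = None
            for c in np.unique(sco):
                called_pos = sco >= c
                sens = np.mean(called_pos[pcr_positive])
                spec = np.mean(~called_pos[~pcr_positive])
                if sens >= min_sensitivity and (best is None or spec > best[1]):
                    best = (c, spec, sens)
            return best   # (cutoff, specificity, sensitivity)

        rng = np.random.default_rng(3)
        truth = rng.random(438) < 0.3    # hypothetical PCR-confirmed status
        sco = np.where(truth, rng.lognormal(2.5, 0.8, 438),
                              rng.lognormal(0.2, 0.9, 438))
        print(pick_sco_cutoff(sco, truth))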

  5. Can missed breast cancer be recognized by regular peer auditing on screening mammography?

    PubMed

    Pan, Huay-Ben; Yang, Tsung-Lung; Hsu, Giu-Cheng; Chiang, Chia-Ling; Huang, Jer-Shyung; Chou, Chen-Pin; Wang, Yen-Chi; Liang, Huei-Lung; Lee, San-Kan; Chou, Yi-Hong; Wong, Kam-Fai

    2012-09-01

    This study was conducted to investigate whether detectable missed breast cancers could be distinguished from truly false negative images in a mammographic screening by regular peer auditing. Between 2004 and 2007, a total of 311,193 free nationwide biennial mammographic screenings were performed for 50- to 69-year-old women in Taiwan. Retrospectively comparing the records in Taiwan's Cancer Registry, 1283 cancers were detected (4.1 per 1000). Of the total, 176 (0.6 per 1000) initial mammographic negative assessments were reported to have cancers (128 traditional films and 48 laser-printed digital images). We selected 186 true negative films (138 traditional films and 48 laser-printed ones) as a control group. These were seeded into 4815 films of 2008 images to be audited in 2009. Thirty-four auditors interpreted all the films in a single-blind, randomized, pair-control study. The performance of the 34 auditors was analyzed by chi-square test. A p value of < 0.05 was considered significant. Eight (6 traditional and 2 digital films) of the 176 false negative films were not reported by the auditors (missing rate of 4.5%). Of this total, 87 false negatives were reassessed as positive, while 29 of the 186 true negatives were reassessed as positive, giving the 34 auditors an overall specificity of 84.4% and sensitivity of 51.8% in interpreting the false negatives and true negatives. The specificity and sensitivity in traditional films and laser-printed films were 98.6% versus 43.8% and 41.8% versus 78.3%, respectively. Almost 42% of the traditional false negative films had positive reassessment by the auditors, showing a significant difference from the initial screeners (p < 0.001). The specificity of their reinterpretation of laser-printed films was obviously low. Almost 42% of the false negative traditional films were judged as missed cancers in this study. Peer auditing should reduce the probability of missed cancers. © 2012 Published by Elsevier B.V.

  6. Serum galactomannan screening for diagnosis of invasive pulmonary aspergillosis in children after stem cell transplantation or with high-risk leukemia.

    PubMed

    Gefen, Aharon; Zaidman, Irina; Shachor-Meyouhas, Yael; Avidor, Israela; Hakim, Fahed; Weyl Ben-Arush, Myriam; Kassis, Imad

    2015-03-01

    Both transplanted and leukemia patients are at high risk (HR) for invasive pulmonary aspergillosis (IPA). Methods for rapid diagnosis are crucial. Our objective was to investigate the impact of serial serum galactomannan assay (GMA) screening on IPA diagnosis in children. Between January 2010 and December 2011, all children following stem cell transplantation (SCT) or with HR leukemia were prospectively included. Serum samples for GMA were taken once or twice weekly. Results >0.5 were considered positive. Patients suspected of having IPA were stratified as possible, probable, and definite. Forty-six children (median age, 8 years) were included, 38 after SCT (32 allogeneic) and 8 with HR leukemia. A total of 510 samples were taken; the screening period was 1-6 months for 34 patients. GMA was negative in 28 patients, all but one without suspicion of IPA. Eighteen patients had positive GMA: while four (22%) were upgraded to probable IPA, fourteen (78%) were considered false positives (FP), some associated with piperacillin-tazobactam treatment. GMA sensitivity and specificity were 0.8 and 0.66, respectively; positive and negative predictive values (PPV, NPV) were 0.22 and 0.96, respectively. GMA may have a role in evaluating HR children for IPA. Both the NPV and the FP rate are high. The cost benefit of early detection versus over-diagnosis should be further studied.

  7. Anti-collusion forensics of multimedia fingerprinting using orthogonal modulation.

    PubMed

    Wang, Z Jane; Wu, Min; Zhao, Hong Vicky; Trappe, Wade; Liu, K J Ray

    2005-06-01

    Digital fingerprinting is a method for protecting digital data in which fingerprints that are embedded in multimedia are capable of identifying unauthorized use of digital content. A powerful attack that can be employed to reduce this tracing capability is collusion, where several users combine their copies of the same content to attenuate/remove the original fingerprints. In this paper, we study the collusion resistance of a fingerprinting system employing Gaussian distributed fingerprints and orthogonal modulation. We introduce the maximum detector and the thresholding detector for colluder identification. We then analyze the collusion resistance of a system to the averaging collusion attack for the performance criteria represented by the probability of a false negative and the probability of a false positive. Lower and upper bounds for the maximum number of colluders K_max are derived. We then show that the detectors are robust to different collusion attacks. We further study different sets of performance criteria, and our results indicate that attacks based on a few dozen independent copies can confound such a fingerprinting system. We also propose a likelihood-based approach to estimate the number of colluders. Finally, we demonstrate the performance for detecting colluders through experiments using real images.
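
    A minimal sketch of the setting: near-orthogonal Gaussian fingerprints, an averaging collusion attack that attenuates each colluder's mark to roughly 1/K strength, and a thresholding (correlation) detector whose threshold trades false positives against false negatives. The parameters are illustrative:

        import numpy as np

        rng = np.random.default_rng(4)
        n_users, dim, k = 100, 10_000, 5   # users, fingerprint length, colluders

        # Independent Gaussian fingerprints are near-orthogonal in high dimension.
        fingerprints = rng.normal(0.0, 1.0, (n_users, dim))

        # Averaging collusion: k colluders average their marked copies (each
        # embedded fingerprint survives at ~1/k strength) plus channel noise.
        colluders = np.sort(rng.choice(n_users, k, replace=False))
        colluded = fingerprints[colluders].mean(axis=0) + rng.normal(0.0, 1.0, dim)

        # Thresholding detector: accuse every user whose normalized correlation
        # with the colluded copy exceeds tau.
        scores = fingerprints @ colluded / dim
        tau = 0.5 / k      # halfway to the expected colluder score of ~1/k
        print(colluders, np.nonzero(scores > tau)[0])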

  8. Statistical behavior of ten million experimental detection limits

    NASA Astrophysics Data System (ADS)

    Voigtman, Edward; Abraham, Kevin T.

    2011-02-01

    Using a lab-constructed laser-excited fluorimeter, together with bootstrapping methodology, the authors have generated many millions of experimental linear calibration curves for the detection of rhodamine 6G tetrafluoroborate in ethanol solutions. The detection limits computed from them are in excellent agreement with both previously published theory and with comprehensive Monte Carlo computer simulations. Currie decision levels and Currie detection limits, each in the theoretical, chemical content domain, were found to be simply scaled reciprocals of the non-centrality parameter of the non-central t distribution that characterizes univariate linear calibration curves that have homoscedastic, additive Gaussian white noise. Accurate and precise estimates of the theoretical, content domain Currie detection limit for the experimental system, with 5% (each) probabilities of false positives and false negatives, are presented.
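
    For the known-parameter Gaussian case, the Currie decision level and detection limit with 5% false positive and false negative probabilities reduce to simple closed forms; a sketch (the paper's estimated-calibration case involves the non-central t distribution instead):

        from statistics import NormalDist

        def currie_limits(sigma_blank, slope, alpha=0.05, beta=0.05):
            """Currie decision level and detection limit for homoscedastic,
            additive Gaussian white noise with known sigma, mapped into the
            content domain by the calibration slope."""
            z_a = NormalDist().inv_cdf(1 - alpha)   # caps false positives
            z_b = NormalDist().inv_cdf(1 - beta)    # caps false negatives
            l_c = z_a * sigma_blank                 # response-domain decision level
            l_d = (z_a + z_b) * sigma_blank         # response-domain detection limit
            return l_c / slope, l_d / slope         # content-domain limits

        # Hypothetical fluorimeter: blank noise 0.02 units, slope 1.6e4 units
        # per (mol/L) of rhodamine 6G tetrafluoroborate.
        print(currie_limits(sigma_blank=0.02, slope=1.6e4))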

  9. The role of backward associative strength in false recognition of DRM lists with multiple critical words.

    PubMed

    Beato, María S; Arndt, Jason

    2017-08-01

    Memory is a reconstruction of the past and is prone to errors. One of the most widely used paradigms to examine false memory is the Deese/Roediger-McDermott (DRM) paradigm. In this paradigm, participants study words associatively related to a non-presented critical word. In a subsequent memory test, the critical words are often falsely recalled and/or recognized. In the present study, we examined the influence of backward associative strength (BAS) on false recognition using DRM lists with multiple critical words. In forty-eight English DRM lists, we manipulated BAS while controlling forward associative strength (FAS). Lists included four words (e.g., prison, convict, suspect, fugitive) simultaneously associated with two critical words (e.g., CRIMINAL, JAIL). The results indicated that true recognition was similar in high-BAS and low-BAS lists, while false recognition was greater in high-BAS lists than in low-BAS lists. Furthermore, there was a positive correlation between false recognition and the probability of a resonant connection between the studied words and their associates. These findings suggest that BAS and resonant connections influence false recognition, and extend prior research using DRM lists associated with a single critical word to studies of DRM lists associated with multiple critical words.

  10. [A quick methodology for drug intelligence using profiling of illicit heroin samples].

    PubMed

    Zhang, Jianxin; Chen, Cunyi

    2012-07-01

    The aim of the paper was to evaluate a link between two heroin seizures using a descriptive method. The system involved the derivatization and gas chromatographic separation of samples followed by fully automatic data analysis and transfer to a database. Comparisons used the squared cosine function between two chromatograms treated as vectors. The method showed good discriminatory capabilities. The probability of false positives was extremely low. In conclusion, this method proved to be efficient and reliable, and appears suitable for estimating the links between illicit heroin samples.

  11. Shadow Probability of Detection and False Alarm for Median-Filtered SAR Imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raynal, Ann Marie; Doerry, Armin Walter; Miller, John A.

    2014-06-01

    Median filtering reduces speckle in synthetic aperture radar (SAR) imagery while preserving edges, at the expense of coarsening the resolution, by replacing the center pixel of a sliding window by the median value. For shadow detection, this approach helps distinguish shadows from clutter more easily, while preserving shadow shape delineations. However, the nonlinear operation alters the shadow and clutter distributions and statistics, which must be taken into consideration when computing probability of detection and false alarm metrics. Depending on system parameters, median filtering can improve probability of detection and false alarm by orders of magnitude. Herein, we examine shadow probability of detection and false alarm in a homogeneous, ideal clutter background after median filter post-processing. Some comments on multi-look processing effects with and without median filtering are also made.
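
    A small sketch of the pipeline on a toy image: exponential speckle over clutter, a dark shadow region, a sliding median filter, and a threshold detector. The sizes and thresholds are illustrative, not the report's parameters:

        import numpy as np
        from scipy.ndimage import median_filter

        rng = np.random.default_rng(5)

        # Toy SAR magnitude image: exponential speckle, with a rectangular
        # "shadow" of much lower mean backscatter.
        image = rng.exponential(1.0, (128, 128))
        image[40:60, 30:90] = rng.exponential(0.05, (20, 60))

        filtered = median_filter(image, size=5)   # speckle down, edges preserved

        # Thresholding the filtered image separates shadow from clutter far more
        # cleanly than thresholding the raw speckled image would.
        shadow_mask = filtered < 0.3
        print(shadow_mask[40:60, 30:90].mean(),   # detection rate inside shadow
              shadow_mask[:40, :].mean())         # false alarm rate in clutter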

  12. Detection of bacteriuria and pyuria by URISCREEN a rapid enzymatic screening test.

    PubMed Central

    Pezzlo, M T; Amsterdam, D; Anhalt, J P; Lawrence, T; Stratton, N J; Vetter, E A; Peterson, E M; de la Maza, L M

    1992-01-01

    A multicenter study was performed to evaluate the ability of the URISCREEN (Analytab Products, Plainview, N.Y.), a 2-min catalase tube test, to detect bacteriuria and pyuria. This test was compared with the Chemstrip LN (BioDynamics, Division of Boehringer Mannheim Diagnostics, Indianapolis, Ind.), a 2-min enzyme dipstick test; a semiquantitative plate culture method was used as the reference test for bacteriuria, and the Gram stain or a quantitative chamber count method was used as the reference test for pyuria. Each test was evaluated for its ability to detect probable pathogens at ≥10^2 CFU/ml and/or ≥1 leukocyte per oil immersion field, as determined by the Gram stain method, or >10 leukocytes per microliter, as determined by the quantitative count method. A total of 1,500 urine specimens were included in this evaluation. There were 298 specimens with ≥10^2 CFU/ml and 451 specimens with pyuria. Of the 298 specimens with probable pathogens isolated at various colony counts, 219 specimens had colony counts of ≥10^5 CFU/ml, 51 specimens had between 10^4 and 10^5 CFU/ml, and 28 specimens had between 10^2 and <10^4 CFU/ml. Both the URISCREEN and the Chemstrip LN detected 93% (204 of 219) of the specimens with probable pathogens at ≥10^5 CFU/ml. For the specimens with probable pathogens at ≥10^2 CFU/ml, the sensitivities of the URISCREEN and the Chemstrip LN were 86% (256 of 298) and 81% (241 of 298), respectively. Of the 451 specimens with pyuria, the URISCREEN detected 88% (398 of 451) and the Chemstrip LN detected 78% (350 of 451). There were 204 specimens with both ≥10^2 CFU/ml and pyuria; the sensitivities of both methods were 95% (193 of 204) for these specimens. Overall, there were 545 specimens with probable pathogens at ≥10^2 CFU/ml and/or pyuria. The URISCREEN detected 85% (461 of 545), and the Chemstrip LN detected 73% (398 of 545). A majority (76%) of the false-negative results obtained with either method were for specimens without leukocytes in the urine. There were 955 specimens with no probable pathogens or leukocytes. Of these, 28% (270 of 955) were found positive by the URISCREEN and 13% (122 of 955) were found positive by the Chemstrip LN. A majority of the false-positive results were probably due, in part, to the detection of enzymes present in both bacterial and somatic cells by each of the test systems. Overall, the URISCREEN is a rapid, manual, easy-to-perform enzymatic test that yields findings similar to those yielded by the Chemstrip LN for specimens with both ≥10^2 CFU/ml and pyuria or for specimens with ≥10^5 CFU/ml, with or without pyuria. However, when the data were analyzed for either probable pathogens at <10^5 CFU/ml or pyuria, the sensitivity of the URISCREEN was higher (P < 0.05). PMID:1551986

  13. Evaluation and comparison of statistical methods for early temporal detection of outbreaks: A simulation-based study

    PubMed Central

    Le Strat, Yann

    2017-01-01

    The objective of this paper is to evaluate a panel of statistical algorithms for temporal outbreak detection. Based on a large dataset of simulated weekly surveillance time series, we performed a systematic assessment of 21 statistical algorithms, 19 implemented in the R package surveillance and two other methods. We estimated false positive rate (FPR), probability of detection (POD), probability of detection during the first week, sensitivity, specificity, negative and positive predictive values and F1-measure for each detection method. Then, to identify the factors associated with these performance measures, we ran multivariate Poisson regression models adjusted for the characteristics of the simulated time series (trend, seasonality, dispersion, outbreak sizes, etc.). The FPR ranged from 0.7% to 59.9% and the POD from 43.3% to 88.7%. Some methods had a very high specificity, up to 99.4%, but a low sensitivity. Methods with a high sensitivity (up to 79.5%) had a low specificity. All methods had a high negative predictive value, over 94%, while positive predictive values ranged from 6.5% to 68.4%. Multivariate Poisson regression models showed that performance measures were strongly influenced by the characteristics of time series. Past or current outbreak size and duration strongly influenced detection performances. PMID:28715489

  14. Environmentally Adaptive UXO Detection and Classification Systems

    DTIC Science & Technology

    2016-04-01

    Excerpt (cleaned OCR fragment): performance is characterized by the probability of false alarm (Pfa), as well as Receiver Operating Characteristic (ROC) curve and confusion matrix characteristics. The results compare the techniques at a false alarm probability of Pfa = 1 x 10^-3. The data are transformed as X~ = g(X); in this case, the problem remains invariant to the group of transformations G = {g : g(X) ...}. Modeled and observed target responses are compared, as well as the probability of detection versus SNR for both detection techniques at Pfa = 1 x 10^-3, with N = 128 and M = 50.

  15. Optimizing the interpretation of CT for appendicitis: modeling health utilities for clinical practice.

    PubMed

    Blackmore, C Craig; Terasawa, Teruhiko

    2006-02-01

    Error in radiology can be reduced by standardizing the interpretation of imaging studies to the optimum sensitivity and specificity. In this report, the authors demonstrate how the optimal interpretation of appendiceal computed tomography (CT) can be determined and how it varies in different clinical scenarios. Utility analysis and receiver operating characteristic (ROC) curve modeling were used to determine the trade-off between false-positive and false-negative test results to determine the optimal operating point on the ROC curve for the interpretation of appendicitis CT. Modeling was based on a previous meta-analysis for the accuracy of CT and on literature estimates of the utilities of various health states. The posttest probability of appendicitis was derived using Bayes's theorem. At a low prevalence of disease (screening), appendicitis CT should be interpreted at high specificity (97.7%), even at the expense of lower sensitivity (75%). Conversely, at a high probability of disease, high sensitivity (97.4%) is preferred (specificity 77.8%). When the clinical diagnosis of appendicitis is equivocal, CT interpretation should emphasize both sensitivity and specificity (sensitivity 92.3%, specificity 91.5%). Radiologists can potentially decrease medical error and improve patient health by varying the interpretation of appendiceal CT on the basis of the clinical probability of appendicitis. This report is an example of how utility analysis can be used to guide radiologists in the interpretation of imaging studies and provide guidance on appropriate targets for the standardization of interpretation.
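
    A sketch of the underlying trade-off: scan operating points along a binormal ROC curve and keep the sensitivity/specificity pair that maximizes expected utility at the given disease probability. The separation d' and the utilities are illustrative:

        import numpy as np
        from statistics import NormalDist

        nd = NormalDist()

        def optimal_operating_point(prevalence, u_tp, u_fp, u_tn, u_fn, d_prime=2.0):
            """Best (sensitivity, specificity) on a binormal ROC under utilities."""
            best = None
            for t in np.linspace(-4.0, 6.0, 2001):
                sens = 1.0 - nd.cdf(t - d_prime)   # P(test positive | disease)
                spec = nd.cdf(t)                   # P(test negative | no disease)
                eu = (prevalence * (sens * u_tp + (1 - sens) * u_fn)
                      + (1 - prevalence) * (spec * u_tn + (1 - spec) * u_fp))
                if best is None or eu > best[0]:
                    best = (eu, sens, spec)
            return best[1:]

        # Screening-like (low prevalence): the optimum favors specificity.
        print(optimal_operating_point(0.05, u_tp=1.0, u_fp=-0.2, u_tn=0.0, u_fn=-1.0))
        # High clinical suspicion: the optimum favors sensitivity.
        print(optimal_operating_point(0.80, u_tp=1.0, u_fp=-0.2, u_tn=0.0, u_fn=-1.0))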

  16. Automatic recognition of coronal type II radio bursts: The ARBIS 2 method and first observations

    NASA Astrophysics Data System (ADS)

    Lobzin, Vasili; Cairns, Iver; Robinson, Peter; Steward, Graham; Patterson, Garth

    Major space weather events such as solar flares and coronal mass ejections are usually accompanied by solar radio bursts, which can potentially be used for real-time space weather forecasts. Type II radio bursts are produced near the local plasma frequency and its harmonic by fast electrons accelerated by a shock wave moving through the corona and solar wind with a typical speed of 1000 km s^-1. The coronal bursts have dynamic spectra with frequency gradually falling with time and durations of several minutes. We present a new method developed to detect type II coronal radio bursts automatically and describe its implementation in an extended Automated Radio Burst Identification System (ARBIS 2). Preliminary tests of the method with spectra obtained in 2002 show that the performance of the current implementation is quite high, ~80%, while the probability of false positives is reasonably low, with one false positive per 100-200 hr for high solar activity and less than one false event per 10000 hr for low solar activity periods. The first automatically detected coronal type II radio bursts are also presented. ARBIS 2 is now operational with IPS Radio and Space Services, providing email alerts and event lists internationally.

  17. [Diagnostic work-up of pulmonary nodules : Management of pulmonary nodules detected with low‑dose CT screening].

    PubMed

    Wormanns, D

    2016-09-01

    Pulmonary nodules are the most frequent pathological finding in low-dose computed tomography (CT) scanning for early detection of lung cancer. Early stages of lung cancer often manifest as pulmonary nodules; however, the very commonly occurring small nodules are predominantly benign. These benign nodules are responsible for the high percentage of false positive test results in screening studies. Appropriate diagnostic algorithms are necessary to reduce false positive screening results and to improve the specificity of lung cancer screening. Such algorithms are based on the basic principles comprehensively described in this article. Firstly, the diameter of a nodule allows a differentiation between large (>8 mm), probably malignant nodules and small (<8 mm), probably benign nodules. Secondly, some morphological features of pulmonary nodules on CT can prove their benign nature. Thirdly, growth of small nodules is the best non-invasive predictor of malignancy and is used as a trigger for further diagnostic work-up. Non-invasive testing using positron emission tomography (PET) and contrast enhancement, as well as invasive diagnostic tests (e.g. various procedures for cytological and histological diagnostics), are briefly described in this article. Different nodule morphologies on CT (e.g. solid and semisolid nodules) are associated with different biological behavior and require different follow-up algorithms. Currently, no obligatory algorithm reflecting the current state of knowledge is available in German-speaking countries for the management of pulmonary nodules. The main features of some international and American recommendations are briefly presented in this article, from which conclusions for daily clinical use are derived.
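
    The size/morphology/growth principles above can be read as a decision rule. The following sketch encodes them in simplified form; it is an illustration of the logic only, not a reproduction of any published guideline, and the category labels are ours.

    ```python
    def triage_nodule(diameter_mm, benign_morphology=False, has_grown=False):
        """Simplified nodule triage following the three principles above:
        benign morphology rules the nodule out, diameter >8 mm suggests
        probable malignancy, and growth of a small nodule triggers work-up."""
        if benign_morphology:
            return "benign morphology - no further work-up"
        if diameter_mm > 8:
            return "probably malignant - further diagnostics (e.g. PET, biopsy)"
        if has_grown:
            return "growth detected - trigger further diagnostic work-up"
        return "probably benign - CT follow-up to assess growth"

    print(triage_nodule(10))
    print(triage_nodule(5, has_grown=False))
    ```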

  18. Finding strong lenses in CFHTLS using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Jacobs, C.; Glazebrook, K.; Collett, T.; More, A.; McCarthy, C.

    2017-10-01

    We train and apply convolutional neural networks, a machine learning technique developed to learn from and classify image data, to Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) imaging for the identification of potential strong lensing systems. An ensemble of four convolutional neural networks was trained on images of simulated galaxy-galaxy lenses. The training sets consisted of a total of 62 406 simulated lenses and 64 673 non-lens negative examples generated with two different methodologies. An ensemble of trained networks was applied to all of the 171 deg² of the CFHTLS wide field image data, identifying 18 861 candidates including 63 known and 139 other potential lens candidates. A second search of 1.4 million early-type galaxies selected from the survey catalogue as potential deflectors, identified 2465 candidates including 117 previously known lens candidates, 29 confirmed lenses/high-quality lens candidates, 266 novel probable or potential lenses and 2097 candidates we classify as false positives. For the catalogue-based search we estimate a completeness of 21-28 per cent with respect to detectable lenses and a purity of 15 per cent, with a false-positive rate of 1 in 671 images tested. We predict a human astronomer reviewing candidates produced by the system would identify 20 probable lenses and 100 possible lenses per hour in a sample selected by the robot. Convolutional neural networks are therefore a promising tool for use in the search for lenses in current and forthcoming surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope.

  19. Inflammatory myofibroblastic tumor of the mesentery associated with high fever and positive Widal test.

    PubMed

    Chouairy, Camil J; Bechara, Elie A; Gebran, Sleiman J; Ghabril, Ramy H

    2008-12-01

    Inflammatory myofibroblastic tumor (IMT) is associated in 15-30% of cases with systemic symptomatology, such as prolonged fever, weight loss, elevated erythrocyte sedimentation rate (ESR), anemia, thrombocytosis, and leukocytosis. We report the case of a 4-year-old Lebanese boy who presented with high-grade fever of long duration, and a single (unpaired) positive Widal agglutination test. Blood culture was negative. A diagnosis of typhoid fever was made. An abdominal (mesenteric) IMT was incidentally discovered, 30 days after the fever had appeared. After surgery, the fever disappeared immediately, and the ESR returned to normal. We strongly favor the possibility of a false positive Widal test, due to polyclonal increase in serum immunoglobulins, which often occurs in IMT. We also think that IMT might be a mimicker of typhoid fever, both clinically and serologically. Physicians, especially pediatricians practicing in endemic areas, should probably be aware of this mimicry.

  20. Entanglement-enhanced Neyman-Pearson target detection using quantum illumination

    NASA Astrophysics Data System (ADS)

    Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.

    2017-08-01

    Quantum illumination (QI) provides entanglement-based target detection in an entanglement-breaking environment, with performance significantly better than that of optimum classical-illumination target detection. QI's performance advantage was established in a Bayesian setting with the target presumed equally likely to be absent or present and error probability employed as the performance metric. Radar theory, however, eschews that Bayesian approach, preferring the Neyman-Pearson performance criterion to avoid the difficulties of accurately assigning prior probabilities to target absence and presence and appropriate costs to false-alarm and miss errors. We have recently reported an architecture, based on sum-frequency generation (SFG) and feedforward (FF) processing, for minimum error-probability QI target detection with arbitrary prior probabilities for target absence and presence. In this paper, we use our results for FF-SFG reception to determine the receiver operating characteristic (detection probability versus false-alarm probability) for optimum QI target detection under the Neyman-Pearson criterion.
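
    The FF-SFG receiver itself is beyond a short example, but the Neyman-Pearson ROC logic referred to above can be illustrated with a classical stand-in. Assuming a unit-variance Gaussian mean-shift detection problem (our assumption, not the paper's quantum model), the ROC has the closed form Pd = Q(Q^-1(Pfa) - d):

    ```python
    from scipy.stats import norm

    def roc_gaussian(pfa, d):
        """Neyman-Pearson ROC for detecting a mean shift d in unit-variance
        Gaussian noise; Q is the standard normal upper-tail function."""
        return norm.sf(norm.isf(pfa) - d)

    for pfa in (1e-3, 1e-2, 1e-1):
        print(f"Pfa={pfa:.0e}  Pd={roc_gaussian(pfa, d=3.0):.3f}")
    ```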

  1. Negative values of quasidistributions and quantum wave and number statistics

    NASA Astrophysics Data System (ADS)

    Peřina, J.; Křepelka, J.

    2018-04-01

    We consider nonclassical wave and number quantum statistics, and perform a decomposition of quasidistributions for nonlinear optical down-conversion processes using Bessel functions. We show that negative values of the quasidistribution do not directly represent probabilities; however, they directly influence measurable number statistics. Negative terms in the decomposition related to the nonclassical behavior with negative amplitudes of probability can be interpreted as positive amplitudes of probability in the negative orthogonal Bessel basis, whereas positive amplitudes of probability in the positive basis describe classical cases. However, probabilities are positive in all cases, including negative values of quasidistributions. Negative and positive contributions of decompositions to quasidistributions are estimated. The approach can be adapted to quantum coherence functions.

  2. Application of data fusion technology based on D-S evidence theory in fire detection

    NASA Astrophysics Data System (ADS)

    Cai, Zhishan; Chen, Musheng

    2015-12-01

    Judgment and identification based on a single fire characteristic parameter is susceptible to environmental disturbances, which limits detection performance by increasing both the false positive rate and the false negative rate. The compound fire detector employs information fusion technology to judge and identify multiple fire characteristic parameters in order to improve the reliability and accuracy of fire detection. The D-S evidence theory is applied to multi-sensor data fusion: first, the data from all sensors are normalized to obtain the basic probability assignment for fire occurrence; then, fusion is conducted using the D-S evidence theory; finally, the judgment results are given. The results show that the method identifies fire signals accurately, increases the accuracy of fire alarms, and is simple and effective.
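
    For readers unfamiliar with the fusion step, the sketch below implements Dempster's rule of combination for two sensors over a minimal frame of discernment; the sensor mass assignments are illustrative values, not data from the paper.

    ```python
    def dempster_combine(m1, m2):
        """Dempster's rule over the frame {fire, no_fire}; 'unknown' stands
        for the whole frame. Conflicting mass is renormalised away."""
        def meet(a, b):
            if a == "unknown":
                return b
            if b == "unknown":
                return a
            return a if a == b else None  # None marks conflicting evidence

        combined = {"fire": 0.0, "no_fire": 0.0, "unknown": 0.0}
        conflict = 0.0
        for a, wa in m1.items():
            for b, wb in m2.items():
                c = meet(a, b)
                if c is None:
                    conflict += wa * wb
                else:
                    combined[c] += wa * wb
        k = 1.0 - conflict  # normalisation constant
        return {h: v / k for h, v in combined.items()}

    smoke = {"fire": 0.6, "no_fire": 0.1, "unknown": 0.3}  # smoke sensor
    temp = {"fire": 0.7, "no_fire": 0.2, "unknown": 0.1}   # temperature sensor
    print(dempster_combine(smoke, temp))  # fused belief favours 'fire'
    ```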

  3. Brain Tumor Segmentation Using Deep Belief Networks and Pathological Knowledge.

    PubMed

    Zhan, Tianming; Chen, Yi; Hong, Xunning; Lu, Zhenyu; Chen, Yunjie

    2017-01-01

    In this paper, we propose an automatic brain tumor segmentation method based on Deep Belief Networks (DBNs) and pathological knowledge. The proposed method targets gliomas (both low and high grade) in multi-sequence magnetic resonance images (MRIs). Firstly, a novel deep architecture is proposed to combine multi-sequence intensity feature extraction with classification to obtain the classification probabilities of each voxel. Then, graph-cut-based optimization is performed on the classification probabilities to strengthen the spatial relationships of voxels. Finally, pathological knowledge of gliomas is applied to remove some false positives. Our method was validated on the Brain Tumor Segmentation Challenge 2012 and 2013 databases (BRATS 2012, 2013). The segmentation results demonstrate that our proposal provides a solution competitive with state-of-the-art methods. Copyright © Bentham Science Publishers; for any queries, please email epub@benthamscience.org.

  4. The (Un)Certainty of Selectivity in Liquid Chromatography Tandem Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Berendsen, Bjorn J. A.; Stolker, Linda A. M.; Nielen, Michel W. F.

    2013-01-01

    We developed a procedure to determine the "identification power" of an LC-MS/MS method operated in the MRM acquisition mode, which is related to its selectivity. The probability of any compound showing the same precursor ion, product ions, and retention time as the compound of interest is used as a measure of selectivity. This is calculated based upon empirical models constructed from three very large compound databases. Based upon the final probability estimation, additional measures to assure unambiguous identification can be taken, like the selection of different or additional product ions. The reported procedure in combination with criteria for relative ion abundances results in a powerful technique to determine the (un)certainty of the selectivity of any LC-MS/MS analysis and thus the risk of false positive results. Furthermore, the procedure is very useful as a tool to validate method selectivity.
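
    As a hedged illustration of the underlying idea: if matches on the precursor ion, the product ions, and the retention time were treated as independent, the probability of a coincidental full match would be the product of the marginal match probabilities. The numbers below are invented placeholders; the paper derives its probabilities from empirical models over large compound databases.

    ```python
    # Placeholder marginal probabilities that a random interfering compound
    # matches each criterion (all four values are assumptions).
    p_precursor = 1e-3  # precursor m/z falls in the selection window
    p_product1 = 5e-2   # first product ion matches
    p_product2 = 5e-2   # second product ion matches
    p_rt = 1e-1         # co-elutes within the retention-time tolerance

    p_chance_match = p_precursor * p_product1 * p_product2 * p_rt
    print(f"P(chance full match) ~ {p_chance_match:.1e}")
    ```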

  5. ChemStable: a web server for rule-embedded naïve Bayesian learning approach to predict compound stability.

    PubMed

    Liu, Zhihong; Zheng, Minghao; Yan, Xin; Gu, Qiong; Gasteiger, Johann; Tijhuis, Johan; Maas, Peter; Li, Jiabo; Xu, Jun

    2014-09-01

    Predicting compound chemical stability is important because unstable compounds can lead to either false positive or false negative conclusions in bioassays. Experimental data (COMDECOM) measured from DMSO/H2O solutions stored at 50 °C for 105 days were used to predict stability by applying rule-embedded naïve Bayesian learning, based upon atom center fragment (ACF) features. To build the naïve Bayesian classifier, we derived ACF features from 9,746 compounds in the COMDECOM dataset. By recursively applying naïve Bayesian learning to the data set, each ACF is assigned an expected stable probability (p(s)) and an unstable probability (p(uns)). 13,340 ACFs, together with their p(s) and p(uns) data, were stored in a knowledge base for use by the Bayesian classifier. For a given compound, its ACFs were derived from its structure connection table with the same protocol used to derive ACFs from the training data. Then, the Bayesian classifier assigned p(s) and p(uns) values to the compound ACFs by a structural pattern recognition algorithm, which was implemented in-house. Compound instability is calculated, with Bayes' theorem, based upon the p(s) and p(uns) values of the compound ACFs. We were able to achieve performance with an AUC value of 84% and a tenfold cross-validation accuracy of 76.5%. To reduce false negatives, a rule-based approach has been embedded in the classifier. The rule-based module allows the program to improve its predictivity by expanding its compound instability knowledge base, thus further reducing the possibility of false negatives. To our knowledge, this is the first in silico prediction service for the prediction of the stabilities of organic compounds.
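
    A minimal sketch of the scoring step, assuming a toy ACF knowledge base (the fragments, probabilities and prior below are hypothetical): each ACF contributes a log-likelihood ratio, and the sum plus the prior log-odds yields a naïve Bayes instability score.

    ```python
    import math

    # Hypothetical knowledge base mapping an ACF to (p_stable, p_unstable).
    ACF_KB = {
        "C(=O)O": (0.40, 0.60),
        "c1ccccc1": (0.80, 0.20),
        "C-N": (0.70, 0.30),
    }

    def instability_log_odds(acfs, prior_unstable=0.2):
        """Naive Bayes score: prior log-odds plus one log-likelihood-ratio
        term per fragment; positive output favours 'unstable'."""
        log_odds = math.log(prior_unstable / (1 - prior_unstable))
        for acf in acfs:
            p_s, p_uns = ACF_KB.get(acf, (0.5, 0.5))  # uninformative default
            log_odds += math.log(p_uns / p_s)
        return log_odds

    print(instability_log_odds(["C(=O)O", "c1ccccc1"]))
    ```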

  6. First European interlaboratory comparison of tetracycline and age determination with red fox teeth following oral rabies vaccination programs.

    PubMed

    Robardet, Emmanuelle; Demerson, Jean-Michel; Andrieu, Sabrina; Cliquet, Florence

    2012-10-01

    The first European interlaboratory comparison of tetracycline and age determination with red fox (Vulpes vulpes) tooth samples was organized by the European Union Reference Laboratory for rabies. Performance and procedures implemented by member states were compared. These techniques are widely used to monitor bait uptake in European oral rabies vaccination campaigns. A panel of five red fox half-mandibles comprising one weak positive juvenile sample, two positive adult samples, one negative juvenile sample, and one negative adult sample were sent, along with a technical questionnaire, to 12 laboratories participating on a voluntary basis. The results of only three laboratories (25%) were 100% correct. False-negative results were more frequently seen in weak positive juvenile samples (58%) but were infrequent in positive adult samples (4%), probably due to differences in the ease of reading the two groups of teeth. Four laboratories (44%) had correct results for age determination on all samples. Ages were incorrectly identified in both adult and juvenile samples, with 11 and 17% of discordant results, respectively. Analysis of the technical questionnaires in parallel with test results suggested that all laboratories cutting mandible sections between the canine and first premolar obtained false results. All the laboratories using longitudinal rather than transverse sections and those not using a mounting medium also produced false results. Section thickness appeared to affect the results; no mistakes were found in laboratories using sections <150 μm thick. Factors having a potential impact on the success of laboratories were discussed, and recommendations proposed. Such interlaboratory trials underline the importance of using standardized procedures for biomarker detection in oral rabies vaccination campaigns. Several changes can be made to improve analysis quality and increase the comparability of bait uptake frequencies among member states.

  7. Image Security

    DTIC Science & Technology

    1999-01-01

    twenty-first century. These papers illustrate topics such as the development of virtual environment applications, different uses of VRML in information system...interfaces, an examination of research in virtual reality environment interfaces, and five approaches to supporting changes in virtual environments...we get false negatives that contribute to the probability of false rejection (Prej). Taking these error probabilities into account, we define a

  8. Bayesian microsaccade detection

    PubMed Central

    Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji

    2017-01-01

    Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483

  9. Sensitivity of perianal tape impressions to diagnose pinworm (Syphacia spp.) infections in rats (Rattus norvegicus) and mice (Mus musculus).

    PubMed

    Hill, William Allen; Randolph, Mildred M; Mandrell, Timothy D

    2009-07-01

    We determined the sensitivity of perianal tape impressions to detect Syphacia spp. in rats and mice. We evaluated 300 rat and 200 mouse perianal impressions over 9 wk. Pinworm-positive perianal tape impressions from animals with worm burdens at necropsy were considered as true positives. Conversely, pinworm-negative perianal tape impressions from animals with worm burdens were considered false negatives. The sensitivity of perianal tape impressions for detecting Syphacia muris infections in rats was 100%, and for detecting Syphacia obvelata in mice was 85.5%. Intermittent shedding of Syphacia obvelata ova is the most probable explanation for the decreased sensitivity rate we observed in mice. We urge caution in use of perianal tape impressions alone for Syphacia spp. screening in sentinel mice and rats.

  10. A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.

    2013-07-01

    There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm's risk by considering its performance over a sample, the probability distribution of threat sources, and the consequences of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values which best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, in this paper we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
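
    A toy version of the risk minimisation, with assumed Gaussian score distributions, an assumed threat prevalence and assumed loss values (none of these constants come from the paper), shows how a threshold can be chosen to minimise expected loss rather than to fix one error rate:

    ```python
    import numpy as np
    from scipy.stats import norm

    P_THREAT = 1e-4  # assumed prevalence of threat-bearing vehicles
    LOSS_FN = 1e6    # assumed consequence of a missed threat
    LOSS_FP = 1e2    # assumed consequence of an unneeded secondary screening

    def risk(threshold):
        """Expected loss if scores are N(0,1) for benign vehicles and
        N(2,1) for threats (assumed distributions)."""
        p_fp = norm.sf(threshold)            # benign vehicle flagged
        p_fn = norm.cdf(threshold, loc=2.0)  # threat missed
        return (1 - P_THREAT) * p_fp * LOSS_FP + P_THREAT * p_fn * LOSS_FN

    ts = np.linspace(-2.0, 6.0, 801)
    best = ts[np.argmin([risk(t) for t in ts])]
    print(f"risk-minimising threshold: {best:.2f}")
    ```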

  11. Normal uniform mixture differential gene expression detection for cDNA microarrays

    PubMed Central

    Dean, Nema; Raftery, Adrian E

    2005-01-01

    Background One of the primary tasks in analysing gene expression data is finding genes that are differentially expressed in different samples. Multiple testing issues due to the thousands of tests run make some of the more popular methods for doing this problematic. Results We propose a simple method, Normal Uniform Differential Gene Expression (NUDGE) detection for finding differentially expressed genes in cDNA microarrays. The method uses a simple univariate normal-uniform mixture model, in combination with new normalization methods for spread as well as mean that extend the lowess normalization of Dudoit, Yang, Callow and Speed (2002) [1]. It takes account of multiple testing, and gives probabilities of differential expression as part of its output. It can be applied to either single-slide or replicated experiments, and it is very fast. Three datasets are analyzed using NUDGE, and the results are compared to those given by other popular methods: unadjusted and Bonferroni-adjusted t tests, Significance Analysis of Microarrays (SAM), and Empirical Bayes for microarrays (EBarrays) with both Gamma-Gamma and Lognormal-Normal models. Conclusion The method gives a high probability of differential expression to genes known/suspected a priori to be differentially expressed and a low probability to the others. In terms of known false positives and false negatives, the method outperforms all multiple-replicate methods except for the Gamma-Gamma EBarrays method to which it offers comparable results with the added advantages of greater simplicity, speed, fewer assumptions and applicability to the single replicate case. An R package called nudge implementing the methods in this paper will be made available soon. PMID:16011807
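
    A sketch of the mixture idea, with assumed parameter values standing in for the fit NUDGE performs: the posterior probability of differential expression follows from the two component densities and the mixing weight.

    ```python
    import numpy as np
    from scipy.stats import norm, uniform

    def posterior_de(z, pi_de, lo, hi, sigma):
        """Posterior P(differentially expressed | z) under the mixture
        z ~ pi*Uniform(lo, hi) + (1 - pi)*Normal(0, sigma^2).
        All parameters here are assumed, not fitted."""
        f_de = uniform.pdf(z, loc=lo, scale=hi - lo)
        f_null = norm.pdf(z, loc=0.0, scale=sigma)
        return pi_de * f_de / (pi_de * f_de + (1 - pi_de) * f_null)

    z = np.array([0.1, 1.5, 3.0])  # normalised log-ratios
    print(posterior_de(z, pi_de=0.05, lo=-5.0, hi=5.0, sigma=0.5))
    ```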

  12. Robust Bayesian Algorithm for Targeted Compound Screening in Forensic Toxicology.

    PubMed

    Woldegebriel, Michael; Gonsalves, John; van Asten, Arian; Vivó-Truyols, Gabriel

    2016-02-16

    As part of forensic toxicological investigation of cases involving unexpected death of an individual, targeted or untargeted xenobiotic screening of post-mortem samples is normally conducted. To this end, liquid chromatography (LC) coupled to high-resolution mass spectrometry (MS) is typically employed. For data analysis, almost all commonly applied algorithms are threshold-based (frequentist). These algorithms examine the value of a certain measurement (e.g., peak height) to decide whether a certain xenobiotic of interest (XOI) is present/absent, yielding a binary output. Frequentist methods pose a problem when several sources of information [e.g., shape of the chromatographic peak, isotopic distribution, estimated mass-to-charge ratio (m/z), adduct, etc.] need to be combined, requiring the approach to make arbitrary decisions at substep levels of data analysis. We hereby introduce a novel Bayesian probabilistic algorithm for toxicological screening. The method tackles the problem with a different strategy. It is not aimed at reaching a final conclusion regarding the presence of the XOI, but it estimates its probability. The algorithm effectively and efficiently combines all possible pieces of evidence from the chromatogram and calculates the posterior probability of the presence/absence of XOI features. This way, the model can accommodate more information by updating the probability if extra evidence is acquired. The final probabilistic result assists the end user to make a final decision with respect to the presence/absence of the xenobiotic. The Bayesian method was validated and found to perform better (in terms of false positives and false negatives) than the vendor-supplied software package.

  13. Auditory threshold shifts after glycerol administration to patients with suspected Menière's disease: a retrospective analysis.

    PubMed

    Basel, Türker; Lütkenhöner, Bernd

    2013-01-01

    Nearly half a century ago, administration of glycerol was shown to temporarily improve the threshold of hearing in patients with suspected Menière's disease (glycerol test). Although a positive test result provides strong evidence of Menière's disease, the test has not gained widespread acceptance. A probable reason is that there is no consensus as to the definition of positive. Moreover, a negative test result is of little diagnostic value because Menière's disease cannot be excluded. By reanalyzing archived data, the authors sought to understand the test in light of signal detection theory. Moreover, they explored the possibility of estimating the probability of a positive test result from the pretest audiogram. The study is based on audiograms from 347 patients (356 ears) who underwent a glycerol test to corroborate a suspected diagnosis of Menière's disease. Subsequent to an initial pure-tone audiogram, glycerol (1.2 mL/kg body weight) was orally administered; follow-up audiograms were obtained after 1, 2, 3, and 4 hr. Transcription of the audiograms into a computer-readable form made them available for automated reanalysis. Averaged difference audiograms provided detailed insight into the frequency dependence and the temporal dynamics of the glycerol-induced threshold reduction. The strongest threshold reduction was observed 4 hr after glycerol intake, although nearly the same effect was already found after 3 hr. Strong overall threshold reductions were associated with a pronounced maximum at approximately 1000 Hz; weaker effects were associated with a plateau between 125 and 1000 Hz and a rapid decrease toward higher frequencies. To date, the criteria suggested for a positive test result vastly differ in both sensitivity (with regard to the detection of a threshold reduction) and specificity (1 minus the false-positive rate). Here, a criterion based on the aggregate threshold reduction in adjacent audiometric frequencies is suggested. This approach not only appears more robust but also allows the false-positive rate to be freely adjusted. A positive test result is particularly likely when the mean low-frequency hearing loss is approximately 60 dB and the mean high-frequency hearing loss does not exceed 50 dB. If the pretest audiogram does not render a positive test result unlikely, a state-of-the-art implementation of the glycerol test is a competitive method for corroborating a suspected diagnosis of Menière's disease.

  14. Limited value of whole blood Xpert(®) MTB/RIF for diagnosing tuberculosis in children.

    PubMed

    Pohl, Christian; Rutaihwa, Liliana K; Haraka, Frederick; Nsubuga, Martin; Aloi, Francesco; Ntinginya, Nyanda E; Mapamba, Daniel; Heinrich, Norbert; Hoelscher, Michael; Marais, Ben J; Jugheli, Levan; Reither, Klaus

    2016-10-01

    We evaluated the ability of the Xpert(®) MTB/RIF assay to detect Mycobacterium tuberculosis in whole blood of children with tuberculosis in tuberculosis endemic settings with high rates of HIV infection. From June 2011 to September 2012 we prospectively enrolled children with symptoms or signs suggestive of tuberculosis at three research centres in Tanzania and Uganda. After clinical assessment, respiratory specimens were collected for microscopy and culture, as well as whole blood for Xpert(®) MTB/RIF. Children were classified according to standardised case definitions. A total of 232 children were evaluated; 14 (6.0%) had culture-confirmed tuberculosis. The Xpert(®) MTB/RIF assay detected M. tuberculosis in 5/232 (2.2%) blood samples with 1 (0.4%) error reading and presumably 1 (0.4%) false-positive result. The sensitivity of the assay in children with culture-confirmed (1/14) versus no tuberculosis (1/117) was 7.1% (95% CI, 1.3-31.5). Three of the five Xpert(®) MTB/RIF positive patients had negative cultures, but were classified as probable tuberculosis cases. Assay sensitivity against a composite reference standard (culture-confirmed, highly probable or probable tuberculosis) was 5.4% (95% CI, 2.1-13.1). Whole blood Xpert(®) MTB/RIF demonstrated very poor sensitivity, although it may enhance the diagnostic yield in select cases, with culture-negative tuberculosis. Copyright © 2016 The British Infection Association. Published by Elsevier Ltd. All rights reserved.

  15. Minimal preparation computed tomography instead of barium enema/colonoscopy for suspected colon cancer in frail elderly patients: an outcome analysis study.

    PubMed

    Kealey, S M; Dodd, J D; MacEneaney, P M; Gibney, R G; Malone, D E

    2004-01-01

    To evaluate the efficacy of minimal preparation computed tomography (MPCT) in diagnosing clinically significant colonic tumours in frail, elderly patients. A prospective study was performed in a group of consecutively referred, frail, elderly patients with symptoms or signs of anaemia, pain, rectal bleeding or weight loss. The MPCT protocol consisted of 1.5 l of Gastrografin 1% diluted with sterile water administered during the 48 h before the procedure, with no bowel preparation or administration of intravenous contrast medium. Eight-millimetre contiguous scans through the abdomen and pelvis were performed. The scans were double-reported by two gastrointestinal radiologists as showing definite (>90% certain), probable (50-90% certain) or possible (<50% certain) neoplasm, or normal. Where observers disagreed, the more pessimistic of the two reports was accepted. The gold standard was clinical outcome at 1 year, with positive end-points defined as (1) histological confirmation of CRC, (2) clinical presentation consistent with CRC without histological confirmation if the patient was too unwell for biopsy/surgery, and (3) death directly attributable to colorectal carcinoma (CRC) with/without post-mortem confirmation. Negative end-points were defined as patients with no clinical, radiological or post-mortem findings of CRC. Patients were followed for 1 year or until one of the above end-points was met. Seventy-two patients were included (mean age 81; range 62-93). One-year follow-up was completed in 94.4% (n=68). Mortality from all causes was 33% (n=24). Five histologically proven tumours were diagnosed with CT and there were two probable false-negatives. Results were analysed twice: first counting all CT-detected lesions as test positive, and second counting "possible" lesions as test negative [results in square brackets] (95% confidence intervals): sensitivity 0.88 (0.47-1.0) [0.75 (0.35-0.97)], specificity 0.47 (0.34-0.6) [0.87 (0.75-0.94)], positive predictive value 0.18 [0.43], negative predictive value 0.97 [0.96], positive likelihood ratio result 1.6 [5.63], negative likelihood ratio result 0.27 [0.29], kappa 0.31 [0.43]. Tumour prevalence was 12%. A graph of conditional probabilities was generated and analysed. A variety of unsuspected pathology was also found in this series of patients. MPCT should be double-reported, at least initially. "Possible" lesions should be ignored. Analysis of the graph of conditional probability applied to a group of frail, elderly patients with a high mortality from all causes (33% in our study) suggests: (1) if MPCT suggests definite or probable carcinoma, regardless of the pre-test probability, the post-test probability is high enough to warrant further action, (2) frail, elderly patients with a low pre-test probability for CRC and a negative MPCT should not have further investigation, (3) frail, elderly patients with a higher pre-test probability of CRC (such as those presenting with rectal bleeding) and a negative MPCT should have either double contrast barium enema (DCBE) or colonoscopy as further investigations or be followed clinically for 3-6 months. MPCT was acceptable to patients and clinicians and may reveal significant extra-colonic pathology.
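
    The bracketed likelihood ratios above convert directly into post-test probabilities via odds; the sketch below uses the study's values (LR+ = 5.63, LR- = 0.29, with "possible" lesions read as negative) and a few assumed pretest probabilities.

    ```python
    def post_test(pretest, lr):
        """Post-test probability from a pretest probability and a
        likelihood ratio: post-odds = pre-odds * LR."""
        odds = pretest / (1 - pretest) * lr
        return odds / (1 + odds)

    LR_POS, LR_NEG = 5.63, 0.29
    for pretest in (0.05, 0.12, 0.30):  # 0.12 was the observed prevalence
        print(pretest,
              round(post_test(pretest, LR_POS), 3),
              round(post_test(pretest, LR_NEG), 3))
    ```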

  16. 47 CFR 1.1623 - Probability calculation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Probability calculation. 1.1623 Section 1.1623 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1623 Probability calculation. (a) All calculations shall be...

  17. Breast mass detection in tomosynthesis projection images using information-theoretic similarity measures

    NASA Astrophysics Data System (ADS)

    Singh, Swatee; Tourassi, Georgia D.; Lo, Joseph Y.

    2007-03-01

    The purpose of this project is to study computer-aided detection (CADe) of breast masses for digital tomosynthesis. It is believed that tomosynthesis will show improvement over conventional mammography in the detection and characterization of breast masses by removing overlapping dense fibroglandular tissue. This study used the 60 human subject cases collected as part of on-going clinical trials at Duke University. Raw projection images were used to identify suspicious regions in the algorithm's high-sensitivity, low-specificity stage using a Difference of Gaussian (DoG) filter. The filtered images were thresholded to yield initial CADe hits that were then shifted and added to yield a 3D distribution of suspicious regions. These were further summed in the depth direction to yield a flattened probability map of suspicious hits for ease of scoring. To reduce false positives, we developed an algorithm based on information theory where similarity metrics were calculated using knowledge databases consisting of tomosynthesis regions of interest (ROIs) obtained from projection images. We evaluated 5 similarity metrics to test the false positive reduction performance of our algorithm, specifically joint entropy, mutual information, Jensen difference divergence, symmetric Kullback-Leibler divergence, and conditional entropy. The best performance was achieved using the joint entropy similarity metric, resulting in an ROC Az of 0.87 ± 0.01. As a whole, the CADe system can detect breast masses in this data set with 79% sensitivity and 6.8 false positives per scan. In comparison, the original radiologists performed with only 65% sensitivity when using mammography alone, and 91% sensitivity when using tomosynthesis alone.
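
    A minimal sketch of the DoG candidate-generation stage described above; the sigma values, threshold and synthetic image are illustrative, not the study's tuned parameters.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog_filter(image, sigma_small=2.0, sigma_large=6.0):
        """Difference-of-Gaussian band-pass: subtracting a coarse blur from
        a fine blur highlights blobs between the two scales."""
        return gaussian_filter(image, sigma_small) - gaussian_filter(image, sigma_large)

    rng = np.random.default_rng(0)
    img = rng.normal(size=(256, 256))
    img[100:120, 100:120] += 2.0    # synthetic 'mass'
    hits = dog_filter(img) > 1.0    # threshold for initial CADe hits
    print(hits.sum(), "suspicious pixels")
    ```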

  18. Convolutional neural network based deep-learning architecture for prostate cancer detection on multiparametric magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Tsehay, Yohannes K.; Lay, Nathan S.; Roth, Holger R.; Wang, Xiaosong; Kwak, Jin Tae; Turkbey, Baris I.; Pinto, Peter A.; Wood, Brad J.; Summers, Ronald M.

    2017-03-01

    Prostate cancer (PCa) is the second most common cause of cancer-related deaths in men. Multiparametric MRI (mpMRI) is the most accurate imaging method for PCa detection; however, it requires the expertise of experienced radiologists, leading to inconsistency across readers of varying experience. To increase inter-reader agreement and sensitivity, we developed a computer-aided detection (CAD) system that can automatically detect lesions on mpMRI that readers can use as a reference. We investigated a convolutional neural network based deep-learning (DCNN) architecture to find an improved solution for PCa detection on mpMRI. We adopted a network architecture from a state-of-the-art edge detector that takes an image as an input and produces an image probability map. Two-fold cross-validation along with a receiver operating characteristic (ROC) analysis and free-response ROC (FROC) were used to determine our deep-learning based prostate CAD's (CADDL) performance. The efficacy was compared to an existing prostate CAD system that is based on hand-crafted features and was evaluated on the same test set. CADDL had an 86% detection rate at a 20% false-positive rate while the top-down learning CAD had an 80% detection rate at the same false-positive rate, which translated to 94% and 85% detection rates at 10 false positives per patient on the FROC. A CNN-based CAD is able to detect cancerous lesions on mpMRI of the prostate with results comparable to an existing prostate CAD, showing potential for further development.

  19. Many tests of significance: new methods for controlling type I errors.

    PubMed

    Keselman, H J; Miller, Charles W; Holland, Burt

    2011-12-01

    There have been many discussions of how Type I errors should be controlled when many hypotheses are tested (e.g., all possible comparisons of means, correlations, proportions, the coefficients in hierarchical models, etc.). By and large, researchers have adopted familywise (FWER) control, though this practice certainly is not universal. Familywise control is intended to deal with the multiplicity issue of computing many tests of significance, yet such control is conservative (that is, less powerful) compared to per test/hypothesis control. The purpose of our article is to introduce the readership, particularly those readers familiar with issues related to controlling Type I errors when many tests of significance are computed, to newer methods that provide protection from the effects of multiple testing, yet are more powerful than familywise controlling methods. Specifically, we introduce a number of procedures that control the k-FWER. These methods (say, 2-FWER instead of 1-FWER, i.e., FWER) are equivalent to specifying that the probability of 2 or more false rejections is controlled at .05, whereas FWER controls the probability of any (i.e., 1 or more) false rejections at .05. 2-FWER implicitly tolerates 1 false rejection and makes no explicit attempt to control the probability of its occurrence, unlike FWER, which tolerates no false rejections at all. More generally, k-FWER tolerates k - 1 false rejections, but controls the probability of k or more false rejections at α = .05. We demonstrate with two published data sets how more hypotheses can be rejected with k-FWER methods compared to FWER control.
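
    One simple k-FWER-controlling procedure is the generalised Bonferroni rule of Lehmann and Romano, which rejects every hypothesis with p <= k*alpha/m; the sketch below contrasts k = 1 and k = 2 on made-up p-values. This particular procedure is our illustration and is not necessarily among those the article introduces.

    ```python
    def k_fwer_bonferroni(pvals, k=1, alpha=0.05):
        """Generalised Bonferroni: rejecting all p <= k*alpha/m controls the
        probability of k or more false rejections at alpha; k = 1 recovers
        ordinary Bonferroni (FWER) control."""
        cutoff = k * alpha / len(pvals)
        return [i for i, p in enumerate(pvals) if p <= cutoff]

    pvals = [0.0004, 0.0011, 0.0019, 0.0095, 0.02, 0.04, 0.2]
    print("1-FWER rejections:", k_fwer_bonferroni(pvals, k=1))
    print("2-FWER rejections:", k_fwer_bonferroni(pvals, k=2))  # one more
    ```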

  20. Fully-automated approach to hippocampus segmentation using a graph-cuts algorithm combined with atlas-based segmentation and morphological opening.

    PubMed

    Kwak, Kichang; Yoon, Uicheul; Lee, Dong-Kyun; Kim, Geon Ha; Seo, Sang Won; Na, Duk L; Shim, Hack-Joon; Lee, Jong-Min

    2013-09-01

    The hippocampus has been known to be an important structure as a biomarker for Alzheimer's disease (AD) and other neurological and psychiatric diseases. However, this requires accurate, robust and reproducible delineation of hippocampal structures. In this study, an automated hippocampal segmentation method based on a graph-cuts algorithm combined with atlas-based segmentation and morphological opening was proposed. First of all, atlas-based segmentation was applied to define the initial hippocampal region as a priori information for graph-cuts. The definition of initial seeds was further elaborated by incorporating estimation of partial volume probabilities at each voxel. Finally, morphological opening was applied to reduce false positives in the result processed by graph-cuts. In experiments with twenty-seven healthy normal subjects, the proposed method showed more reliable results (similarity index = 0.81 ± 0.03) than the conventional atlas-based segmentation method (0.72 ± 0.04). As for segmentation accuracy, measured in terms of the ratios of false positives and false negatives, the proposed method (precision = 0.76 ± 0.04, recall = 0.86 ± 0.05) produced better scores than the conventional methods (0.73 ± 0.05, 0.72 ± 0.06), demonstrating its plausibility for accurate, robust and reliable segmentation of the hippocampus. Copyright © 2013 Elsevier Inc. All rights reserved.
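
    For readers unfamiliar with the final step, the sketch below shows morphological opening removing isolated false-positive voxels from a toy binary segmentation; the mask and the structuring element are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import binary_opening

    mask = np.zeros((40, 40), dtype=bool)
    mask[10:30, 10:30] = True        # main segment from graph-cuts
    mask[2, 2] = mask[35, 5] = True  # isolated false-positive voxels

    cleaned = binary_opening(mask, structure=np.ones((3, 3), dtype=bool))
    print(mask.sum() - cleaned.sum(), "false-positive voxels removed")
    ```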

  1. 21 CFR 1316.10 - Administrative probable cause.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Administrative probable cause. 1316.10 Section 1316.10 Food and Drugs DRUG ENFORCEMENT ADMINISTRATION, DEPARTMENT OF JUSTICE ADMINISTRATIVE FUNCTIONS, PRACTICES, AND PROCEDURES Administrative Inspections § 1316.10 Administrative probable cause. If the judge...

  2. Evaluation of a new nanoparticle-based lateral-flow immunoassay for the exclusion of heparin-induced thrombocytopenia (HIT).

    PubMed

    Sachs, Ulrich J; von Hesberg, Jakob; Santoso, Sentot; Bein, Gregor; Bakchoul, Tamam

    2011-12-01

    Heparin-induced thrombocytopenia (HIT) is an adverse complication of heparin caused by HIT antibodies (abs) that recognise platelet factor 4-heparin (PF4/hep) complexes. Several laboratory tests are available for the confirmation and/or refutation of HIT. A reliable and rapid single-sample test is still lacking. The objective of this study was to evaluate a new lateral-flow immunoassay based on nanoparticle technology. A cohort of 452 surgical and medical patients suspected of having HIT was evaluated. All samples were tested in two IgG-specific ELISAs, in a particle gel immunoassay (PaGIA) and in a newly developed lateral-flow immunoassay (LFI-HIT), as well as in a functional test (HIPA). Clinical pre-test probability was determined using the 4T's score. Platelet-activating antibodies were present in 34/452 patients, all of whom had intermediate to high clinical probability. PF4/hep abs were detected in 79, 87, 86, and 63 sera using the four different immunoassays. The negative predictive values (NPV) were 100% for both ELISA tests and LFI-HIT, but only 99.2% for PaGIA. There were fewer false positives (n=29) in the LFI-HIT than in any other test. Additionally, significantly less time was required to perform LFI-HIT than to perform the other immunoassays. In conclusion, the newly developed lateral-flow assay, LFI-HIT, was capable of identifying all HIT patients in a cohort in a short period of time. Besides an NPV of 100%, the rate of false-positive signals is significantly lower with LFI-HIT than with the other immunoassay(s). These performance characteristics suggest high potential for reducing risk and costs in patients suspected of having HIT.

  3. Development and Validation of the Suprathreshold Stochastic Resonance-Based Image Processing Method for the Detection of Abdomino-pelvic Tumor on PET/CT Scans.

    PubMed

    Saroha, Kartik; Pandey, Anil Kumar; Sharma, Param Dev; Behera, Abhishek; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-01-01

    The detection of abdomino-pelvic tumors embedded in or near radioactive urine containing 18F-FDG activity is a challenging task on PET/CT scans. In this study, we propose and validate a suprathreshold stochastic resonance-based image processing method for the detection of these tumors. The method consists of adding noise to the input image and then thresholding it, which creates one frame of an intermediate image. One hundred such frames were generated and averaged to obtain the final image. The method was implemented using MATLAB R2013b on a personal computer. Noisy images were generated using random Poisson variates corresponding to each pixel of the input image. In order to verify the method, 30 sets of pre-diuretic and corresponding post-diuretic PET/CT scan images (25 tumor images and 5 control images with no tumor) were included. For each pre-diuretic image (the input image), 26 images (at threshold values equal to the mean counts multiplied by a constant factor ranging from 1.0 to 2.6 in increments of 0.1) were created and visually inspected, and the image that most closely matched the gold standard (the corresponding post-diuretic image) was selected as the final output image. These images were further evaluated by two nuclear medicine physicians. In 22 out of 25 images, the tumor was successfully detected. In the five control images, no false positives were reported. Thus, the empirical probability of detecting abdomino-pelvic tumors evaluates to 22/25 = 0.88. The proposed method was able to detect abdomino-pelvic tumors on pre-diuretic PET/CT scans with a high probability of success and no false positives.
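
    A compact sketch of the frame-generation loop as described above: Poisson variates are drawn around each pixel count, each noisy frame is thresholded, and the frames are averaged. The toy image and the specific threshold factor are ours; the 1.0-2.6 factor range follows the abstract.

    ```python
    import numpy as np

    def ssr_enhance(image, threshold, n_frames=100, seed=0):
        """Suprathreshold stochastic resonance sketch: average n_frames
        binarised Poisson-noise realisations of the input image."""
        rng = np.random.default_rng(seed)
        frames = [(rng.poisson(image) > threshold).astype(float)
                  for _ in range(n_frames)]
        return np.mean(frames, axis=0)

    img = np.array([[5.0, 5.0, 5.0],
                    [5.0, 9.0, 5.0],
                    [5.0, 5.0, 5.0]])                   # toy count image
    out = ssr_enhance(img, threshold=1.4 * img.mean())  # factor in 1.0-2.6
    print(np.round(out, 2))                             # hot centre stands out
    ```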

  4. Effect of Divided Attention on the Production of False Memories in the DRM Paradigm: A Study of Dichotic Listening and Shadowing

    ERIC Educational Resources Information Center

    Pimentel, Eduarda; Albuquerque, Pedro B.

    2013-01-01

    The Deese/Roediger-McDermott (DRM) paradigm comprises the study of lists in which words (e.g., bed, pillow, etc.) are all associates of a single nonstudied critical item (e.g., sleep). The probability of falsely recalling or recognising nonstudied critical items is often similar to (or sometimes higher than) the probability of correctly recalling…

  5. Monitoring and modeling to predict Escherichia coli at Presque Isle Beach 2, City of Erie, Erie County, Pennsylvania

    USGS Publications Warehouse

    Zimmerman, Tammy M.

    2006-01-01

    The Lake Erie shoreline in Pennsylvania spans nearly 40 miles and is a valuable recreational resource for Erie County. Nearly 7 miles of the Lake Erie shoreline lies within Presque Isle State Park in Erie, Pa. Concentrations of Escherichia coli (E. coli) bacteria at permitted Presque Isle beaches occasionally exceed the single-sample bathing-water standard, resulting in unsafe swimming conditions and closure of the beaches. E. coli concentrations and other water-quality and environmental data collected at Presque Isle Beach 2 during the 2004 and 2005 recreational seasons were used to develop models using tobit regression analyses to predict E. coli concentrations. All variables statistically related to E. coli concentrations were included in the initial regression analyses, and after several iterations, only those explanatory variables that made the models significantly better at predicting E. coli concentrations were included in the final models. Regression models were developed using data from 2004, 2005, and the combined 2-year dataset. Variables in the 2004 model and the combined 2004-2005 model were log10 turbidity, rain weight, wave height (calculated), and wind direction. Variables in the 2005 model were log10 turbidity and wind direction. Explanatory variables not included in the final models were water temperature, streamflow, wind speed, and current speed; model results indicated these variables did not meet significance criteria at the 95-percent confidence level (probabilities were greater than 0.05). The predicted E. coli concentrations produced by the models were used to develop probabilities that concentrations would exceed the single-sample bathing-water standard for E. coli of 235 colonies per 100 milliliters. Analysis of the exceedance probabilities helped determine a threshold probability for each model, chosen such that the number of correctly predicted exceedances and nonexceedances was maximized and the number of false positives and false negatives was minimized. Future samples with computed exceedance probabilities higher than the selected threshold probability, as determined by the model, will likely exceed the E. coli standard, and a beach advisory or closing may need to be issued; computed exceedance probabilities lower than the threshold probability will likely indicate the standard will not be exceeded. Additional data collected each year can be used to test and possibly improve the model. This study will aid beach managers in more rapidly determining when waters are not safe for recreational use and, subsequently, when to issue beach advisories or closings.

  6. Sensitivity and specificity of oral HPV detection for HPV-positive head and neck cancer.

    PubMed

    Gipson, Brooke J; Robbins, Hilary A; Fakhry, Carole; D'Souza, Gypsyamber

    2018-02-01

    The incidence of HPV-related head and neck squamous cell carcinoma (HPV-HNSCC) is increasing. Oral samples are easy and non-invasive to collect, but the diagnostic accuracy of oral HPV detection methods for classifying HPV-positive HNSCC tumors has not been well explored. In a systematic review, we identified eight studies of HNSCC patients meeting our eligibility criteria of having: (1) HPV detection in oral rinse or oral swab samples, (2) tumor HPV or p16 testing, (3) a publication date within the last 10 years (January 2007-May 2017, as laboratory methods change), and (4) at least 15 HNSCC cases. Data were abstracted from each study and a meta-analysis performed to calculate sensitivity and specificity. Eight articles meeting inclusion criteria were identified. Among people diagnosed with HNSCC, oral HPV detection has good specificity (92%, 95% CI = 82-97%) and moderate sensitivity (72%, 95% CI = 45-89%) for HPV-positive HNSCC tumors. Results were similar when restricted to studies with only oropharyngeal cancer cases, with oral rinse samples, or testing for HPV16 DNA (instead of any oncogenic HPV) in the oral samples. Among those who already have HNSCC, oral HPV detection produces few false positives but may miss one-quarter to one-half of HPV-related cases (false negatives). Given these findings in cancer patients, the utility of oral rinses and swabs as screening tests for HPV-HNSCC among healthy populations is probably limited. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Can Statistical Machine Learning Algorithms Help for Classification of Obstructive Sleep Apnea Severity to Optimal Utilization of Polysomnography Resources?

    PubMed

    Bozkurt, Selen; Bostanci, Asli; Turhan, Murat

    2017-08-11

    The goal of this study is to evaluate the results of machine learning methods for the classification of OSA severity in patients with suspected sleep-disordered breathing as normal, mild, moderate and severe, based on non-polysomnographic variables: 1) clinical data, 2) symptoms and 3) physical examination. In order to produce classification models for OSA severity, five different machine learning methods (Bayesian network, Decision Tree, Random Forest, Neural Networks and Logistic Regression) were trained, while relevant variables and their relationships were derived empirically from observed data. Each model was trained and evaluated using 10-fold cross-validation, and to evaluate the classification performance of all methods, true positive rate (TPR), false positive rate (FPR), Positive Predictive Value (PPV), F measure and Area Under Receiver Operating Characteristics curve (ROC-AUC) were used. Results of 10-fold cross-validated tests with different variable settings promisingly indicated that the OSA severity of suspected OSA patients can be classified using non-polysomnographic features, with a true positive rate of up to 0.71 and a false positive rate as low as 0.15. Moreover, the test results for different variable settings revealed that the accuracy of the classification models was significantly improved when physical examination variables were added to the model. Study results showed that machine learning methods can be used to estimate the probabilities of no, mild, moderate, and severe obstructive sleep apnea, and such approaches may improve accurate initial OSA screening and help refer only suspected moderate or severe OSA patients to sleep laboratories for the expensive tests.

  8. Linear landmark extraction in SAR images with application to augmented integrity aero-navigation: an overview to a novel processing chain

    NASA Astrophysics Data System (ADS)

    Fabbrini, L.; Messina, M.; Greco, M.; Pinelli, G.

    2011-10-01

    In the context of augmented integrity Inertial Navigation Systems (INS), recent technological developments have been focusing on landmark extraction from high-resolution synthetic aperture radar (SAR) images in order to retrieve aircraft position and attitude. The article puts forward a processing chain that can automatically detect linear landmarks in high-resolution SAR images and can also be successfully exploited in the context of augmented integrity INS. The processing chain uses constant false alarm rate (CFAR) edge detectors as the first step of the whole processing procedure. Our studies confirm that the ratio of averages (RoA) edge detector detects object boundaries more effectively than the Student t-test and the Wilcoxon-Mann-Whitney (WMW) test. Nevertheless, all these statistical edge detectors are sensitive to violations of the assumptions which underlie their theory. In addition to presenting a solution to this problem, we put forward a new post-processing algorithm useful for removing the main false alarms, selecting the most probable edge position, reconstructing broken edges and finally vectorizing them. SAR images from the "MSTAR clutter" dataset were used to prove the effectiveness of the proposed algorithms.
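
    A simplified ratio-of-averages edge-strength sketch: the means of two windows on either side of each pixel are compared, and max(r, 1/r) is large on edges but near 1 in homogeneous speckle. The window geometry here is approximated with shifted box filters, and the sizes and synthetic scene are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def roa_edge_strength(image, size=7, axis=1):
        """Approximate RoA detector: ratio of local means taken on either
        side of each pixel along the given axis."""
        half = size // 2
        mean = uniform_filter(image, size=size)
        before = np.roll(mean, half + 1, axis=axis)
        after = np.roll(mean, -(half + 1), axis=axis)
        r = before / np.maximum(after, 1e-12)
        return np.maximum(r, 1.0 / r)

    rng = np.random.default_rng(1)
    img = rng.gamma(shape=4.0, scale=1.0, size=(64, 64))  # speckle-like clutter
    img[:, 32:] *= 3.0                                    # vertical edge
    strength = roa_edge_strength(img)
    print(strength[:, 28:36].mean(axis=0).round(2))       # peaks at the edge
    ```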

  9. Novel wavelet threshold denoising method in axle press-fit zone ultrasonic detection

    NASA Astrophysics Data System (ADS)

    Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai

    2017-02-01

    Axles are an important part of railway locomotives and vehicles. Periodic ultrasonic inspection of axles can effectively detect and monitor axle fatigue cracks. However, in the axle press-fit zone, the complex interface contact condition reduces the signal-to-noise ratio (SNR). Therefore, the probability of false positives and false negatives increases. In this work, a novel wavelet threshold function is created to remove noise and suppress press-fit interface echoes in axle ultrasonic defect detection. The novel wavelet threshold function with two variables is designed to ensure the precision of the optimum searching process. Based on the positive correlation between the correlation coefficient and SNR, and on the experimental observation that the defect echo and the press-fit interface echo have different axle-circumferential correlation characteristics, a discrete optimum search for the two undetermined variables in the novel wavelet threshold function is conducted. The performance of the proposed method is assessed by comparing it with traditional threshold methods on real data. The statistical results for the amplitude and the peak SNR of defect echoes show that the proposed wavelet threshold denoising method not only maintains the amplitude of defect echoes but also achieves a higher peak SNR.

  10. Towards more efficient burn care: Identifying factors associated with good quality of life post-burn.

    PubMed

    Finlay, V; Phillips, M; Allison, G T; Wood, F M; Ching, D; Wicaksono, D; Plowman, S; Hendrie, D; Edgar, D W

    2015-11-01

    As minor burn patients constitute the vast majority of a developed nation case-mix, streamlining care for this group can promote efficiency from a service-wide perspective. This study tested the hypothesis that a predictive nomogram model that estimates the likelihood of good long-term quality of life (QoL) post-burn is a valid way to optimise patient selection and risk management when applying a streamlined model of care. A sample of 224 burn patients managed by the Burn Service of Western Australia who provided both short- and long-term outcomes was used to estimate the probability of achieving a good QoL, defined as 150 out of a possible 160 points on the Burn Specific Health Scale-Brief (BSHS-B), at least six months from injury. A multivariate logistic regression analysis produced a predictive model presented as a nomogram for clinical application. A second, independent cohort of consecutive patients (n=106) was used to validate the predictive merit of the nomogram. Male gender (p=0.02), conservative management (p=0.03), upper limb burn (p=0.04) and high BSHS-B score within one month of burn (p<0.001) were significant predictors of good outcome at six months and beyond. A receiver operating characteristic (ROC) curve analysis demonstrated excellent (90%) accuracy overall. At 80% probability of good outcome, the false positive risk was 14%. The nomogram was validated by running a second ROC analysis of the model in an independent cohort. The analysis confirmed high (86%) overall accuracy of the model; the risk of a false positive was reduced to 10% at a lower (70%) probability. This affirms the stability of the nomogram model in different patient groups over time. An investigation of the effect of missing data on sample selection determined that a greater proportion of younger patients with smaller TBSA burns were excluded due to loss to follow-up. For clinicians managing comparable burn populations, the BSWA burns nomogram is an effective tool to assist the selection of patients to a streamlined care pathway with the aim of improving efficiency of service delivery. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.

  11. Sensory redundancy management: The development of a design methodology for determining threshold values through a statistical analysis of sensor output data

    NASA Technical Reports Server (NTRS)

    Scalzo, F.

    1983-01-01

    Sensor redundancy management (SRM) requires a system which will detect failures and reconfigure avionics accordingly. A probability density function for determining false alarm rates was generated using an algorithmic approach. Microcomputer software was developed which will print out tables of values for the cumulative probability of being in the domain of failure; system reliability; and the false alarm probability, given that a signal is in the domain of failure. The microcomputer software was applied to the sensor output data for various AFTI F-16 flights and sensor parameters. Practical recommendations for further research were made.
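
    A minimal stand-in for the false-alarm tabulation, assuming a Gaussian sensor-output model in place of the density actually estimated from the flight data:

    ```python
    from scipy.stats import norm

    def false_alarm_probability(threshold, mu=0.0, sigma=1.0):
        """Probability that a healthy sensor output exceeds the failure
        threshold under an assumed Gaussian output model."""
        return norm.sf(threshold, loc=mu, scale=sigma)

    for t in (2.0, 2.5, 3.0, 3.5):  # candidate failure-domain thresholds
        print(t, f"{false_alarm_probability(t):.2e}")
    ```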

  12. Veterinary diagnostic imaging: Probability, accuracy and impact.

    PubMed

    Lamb, Christopher R

    2016-09-01

    Diagnostic imaging is essential for diagnosis and management of many common problems in veterinary medicine, but imaging is not 100% accurate and does not always benefit the animal in the way intended. When assessing the need for imaging, the probability that the animal has a morphological lesion, the accuracy of the imaging and the likelihood of a beneficial impact on the animal must all be considered. Few imaging tests are sufficiently accurate that they enable a diagnosis to be ruled in or out; instead, the results of imaging only modify the probability of a diagnosis. Potential problems with excessive use of imaging include false positive diagnoses, detection of incidental findings and over-diagnosis, all of which may contribute to a negative benefit to the animal. Veterinary clinicians must be selective in their use of imaging, use existing clinical information when interpreting images and sensibly apply the results of imaging in the context of the needs of individual animals. There is a need for more clinical research to assess the impact of diagnostic imaging for animals with common conditions to help clinicians make decisions conducive to optimal patient care. Copyright © 2016 Elsevier Ltd. All rights reserved.
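
    The point that imaging results only modify the probability of a diagnosis can be made concrete with Bayes' rule: convert the pre-test probability to odds, multiply by the test's likelihood ratio, and convert back. A minimal sketch with hypothetical accuracy figures (not from the article):

    def post_test_probability(pre_test, sensitivity, specificity, positive=True):
        # LR+ = sens / (1 - spec); LR- = (1 - sens) / spec
        lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
        pre_odds = pre_test / (1 - pre_test)
        post_odds = pre_odds * lr
        return post_odds / (1 + post_odds)

    # A fairly accurate imaging test applied at low pre-test probability still
    # leaves substantial doubt after a positive result:
    print(post_test_probability(0.10, sensitivity=0.90, specificity=0.85))  # ~0.40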

  13. Probabilistic Evaluation of Three-Dimensional Reconstructions from X-Ray Images Spanning a Limited Angle

    PubMed Central

    Frost, Anja; Renners, Eike; Hötter, Michael; Ostermann, Jörn

    2013-01-01

    An important part of computed tomography is the calculation of a three-dimensional reconstruction of an object from series of X-ray images. Unfortunately, some applications do not provide sufficient X-ray images. Then the reconstructed objects no longer truly represent the original, and inside the volumes the accuracy seems to vary unpredictably. In this paper, we introduce a novel method to evaluate any reconstruction, voxel by voxel. The evaluation is based on a sophisticated probabilistic handling of the measured X-rays, as well as the inclusion of a priori knowledge about the materials that make up the object under X-ray examination. For each voxel, the proposed method outputs a numerical value that represents the probability that a predefined material exists at the position of that voxel at the time of the X-ray acquisition. Such a probabilistic quality measure has been lacking so far. In our experiments, falsely reconstructed areas are detected by their low probability, while in accurately reconstructed areas a high probability predominates. Receiver Operating Characteristics not only confirm the reliability of our quality measure but also demonstrate that existing methods are less suitable for evaluating a reconstruction. PMID:23344378

  14. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
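
    A toy version of this pipeline can illustrate how the detection probability is obtained by Monte Carlo: draw the sonar-equation inputs, compute SNR = SL - TL - NL, and pass it through a detector curve. All distributions and the logistic detector characterization below are assumed placeholders, not the paper's fitted values.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    SL = rng.normal(200, 10, n)                 # source level, dB (assumed)
    ranges = rng.uniform(100, 6000, n)          # whale-to-sensor range, m (assumed)
    TL = 20 * np.log10(ranges)                  # spherical-spreading transmission loss
    NL = rng.normal(70, 5, n)                   # noise level, dB (assumed)
    snr = SL - TL - NL

    # Detector characterization: probability of detection as a function of SNR,
    # here a logistic curve centered at 10 dB (assumed).
    p_det = 1.0 / (1.0 + np.exp(-(snr - 10.0) / 2.0))

    detected = rng.random(n) < p_det
    print(f"Estimated average detection probability: {detected.mean():.3f}")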

  15. Bayesian time series analysis of segments of the Rocky Mountain trumpeter swan population

    USGS Publications Warehouse

    Wright, Christopher K.; Sojda, Richard S.; Goodman, Daniel

    2002-01-01

    A Bayesian time series analysis technique, the dynamic linear model, was used to analyze counts of Trumpeter Swans (Cygnus buccinator) summering in Idaho, Montana, and Wyoming from 1931 to 2000. For the Yellowstone National Park segment of white birds (sub-adults and adults combined) the estimated probability of a positive growth rate is 0.01. The estimated probability of achieving the Subcommittee on Rocky Mountain Trumpeter Swans 2002 population goal of 40 white birds for the Yellowstone segment is less than 0.01. Outside of Yellowstone National Park, Wyoming white birds are estimated to have a 0.79 probability of a positive growth rate with a 0.05 probability of achieving the 2002 objective of 120 white birds. In the Centennial Valley in southwest Montana, results indicate a probability of 0.87 that the white bird population is growing at a positive rate, though with considerable uncertainty. The estimated probability of achieving the 2002 Centennial Valley objective of 160 white birds is 0.14 but under an alternative model falls to 0.04. The estimated probability that the Targhee National Forest segment of white birds has a positive growth rate is 0.03. In Idaho outside of the Targhee National Forest, white birds are estimated to have a 0.97 probability of a positive growth rate with a 0.18 probability of attaining the 2002 goal of 150 white birds.

  16. Removing Parallax-Induced False Changes in Change Detection

    DTIC Science & Technology

    2014-03-27

    Excerpt (front matter only; the full abstract is not included in this record). From the list of figures: three hypothetical ROC curves plot the probability of detection (PD) against the probability of false alarm (PFA); as the curves approach PD = 1 and PFA = 0, detector performance is said to improve. A text fragment notes that bands affected by absorption are commonly among those with low SNRs, as the gases and vapor in the atmosphere between the (airborne) sensor and the ground plane tend to attenuate them.

  17. Expert system constant false alarm rate processor

    NASA Astrophysics Data System (ADS)

    Baldygo, William J., Jr.; Wicks, Michael C.

    1993-10-01

    The requirements for high detection probability and low false alarm probability in modern wide area surveillance radars are rarely met due to spatial variations in clutter characteristics. Many filtering and CFAR detection algorithms have been developed to effectively deal with these variations; however, any single algorithm is likely to exhibit excessive false alarms and intolerably low detection probabilities in a dynamically changing environment. A great deal of research has led to advances in the state of the art in Artificial Intelligence (AI) and numerous areas have been identified for application to radar signal processing. The approach suggested here, discussed in a patent application submitted by the authors, is to intelligently select the filtering and CFAR detection algorithms being executed at any given time, based upon the observed characteristics of the interference environment. This approach requires sensing the environment, employing the most suitable algorithms, and applying an appropriate multiple algorithm fusion scheme or consensus algorithm to produce a global detection decision.
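
    For context, one family of detectors such an expert system would select among is cell-averaging CFAR, which estimates local clutter power from training cells around the cell under test and scales it to hold the false alarm probability constant. A minimal sketch with illustrative parameters (not from the patent application):

    import numpy as np

    def ca_cfar(x, n_train=16, n_guard=2, pfa=1e-4):
        """Return boolean detections for each cell of the power signal x."""
        n = len(x)
        hits = np.zeros(n, dtype=bool)
        # Scaling factor achieving the desired Pfa for exponential noise.
        alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
        half = n_train // 2
        for i in range(half + n_guard, n - half - n_guard):
            lead = x[i - n_guard - half : i - n_guard]
            lag = x[i + n_guard + 1 : i + n_guard + 1 + half]
            noise_est = (lead.sum() + lag.sum()) / n_train
            hits[i] = x[i] > alpha * noise_est
        return hits

    rng = np.random.default_rng(2)
    power = rng.exponential(1.0, 1000)   # homogeneous clutter
    power[500] += 30.0                   # injected target
    print(np.flatnonzero(ca_cfar(power)))  # should report cell 500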

  18. Regulatory issues with multiplicity in drug approval: Principles and controversies in a changing landscape.

    PubMed

    Benda, Norbert; Brandt, Andreas

    2018-01-01

    Recently, new draft guidelines on multiplicity issues in clinical trials have been issued by the European Medicines Agency (EMA) and the Food and Drug Administration (FDA). Multiplicity is an issue in clinical trials if the probability of a false-positive decision is inflated because testing of multiple hypotheses is insufficiently accounted for. We outline the regulatory principles related to multiplicity issues in confirmatory clinical trials intended to support a marketing authorization application in the EU, describe the reasons for the increasing complexity of multiple hypothesis testing, and discuss the specific multiplicity issues emerging within the regulatory context that are relevant for drug approval.
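
    The core of the multiplicity problem is easy to quantify: with m independent tests each run at level alpha, the chance of at least one false-positive decision is 1 - (1 - alpha)^m, and a Bonferroni adjustment (testing each at alpha/m) restores approximate control. A short illustration:

    alpha = 0.05
    for m in (1, 2, 5, 10):
        fwer_unadjusted = 1 - (1 - alpha) ** m
        fwer_bonferroni = 1 - (1 - alpha / m) ** m
        print(f"m={m:2d}: unadjusted FWER={fwer_unadjusted:.3f}, "
              f"Bonferroni FWER={fwer_bonferroni:.3f}")
    # m=10: unadjusted FWER ~ 0.401, Bonferroni ~ 0.049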

  19. Added Value of Contrast-Enhanced Spectral Mammography in Postscreening Assessment.

    PubMed

    Tardivel, Anne-Marie; Balleyguier, Corinne; Dunant, Ariane; Delaloge, Suzette; Mazouni, Chafika; Mathieu, Marie-Christine; Dromain, Clarisse

    2016-09-01

    To assess the value for diagnostic and treatment management of contrast-enhanced spectral mammography (CESM), as an adjunct to mammography (MG) and ultrasound (US), in postscreening assessment in a breast cancer unit for patients with newly diagnosed breast cancer or with suspicious findings on conventional imaging. Retrospective review of routine use of bilateral CESM performed between September 2012 and September 2013 in 195 women with suspicious or undetermined findings on MG and/or US. CESM images were blindly reviewed by two radiologists for BI-RADS® assessment and probability of malignancy. Each lesion was definitively confirmed either with histopathology or follow-up. Two hundred and ninety-nine lesions were detected (221 malignant). CESM sensitivity, specificity, positive-predictive value and negative-predictive value were 94% (CI: 89-96%), 74% (CI: 63-83%), 91% (CI: 86-94%) and 81% (CI: 70-89%), respectively, with 18 false positives and 14 false negatives. CESM changed the diagnostic and treatment strategy in 41 (21%) patients, either through detection of additional malignant lesions in 38 patients (19%), leading to more extensive surgery (n = 21) or neo-adjuvant chemotherapy (n = 1), or by avoiding further biopsy in 20 patients with negative CESM. CESM can be performed easily in a clinical assessment after positive breast cancer screening and may significantly change the diagnostic and treatment strategy through breast cancer staging. © 2016 Wiley Periodicals, Inc.

  20. A likelihood ratio anomaly detector for identifying within-perimeter computer network attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grana, Justin; Wolpert, David; Neil, Joshua

    The rapid detection of attackers within firewalls of enterprise computer networks is of paramount importance. Anomaly detectors address this problem by quantifying deviations from baseline statistical models of normal network behavior and signaling an intrusion when the observed data deviates significantly from the baseline model. However, many anomaly detectors do not take into account plausible attacker behavior; as a result, they are prone to a large number of false positives due to unusual but benign activity. Our paper first introduces a stochastic model of attacker behavior which is motivated by real-world attacker traversal. Then, we develop a likelihood ratio detector that compares the probability of observed network behavior under normal conditions against the case when an attacker has possibly compromised a subset of hosts within the network. Since the likelihood ratio detector requires integrating over the time each host becomes compromised, we illustrate how to use Monte Carlo methods to compute the requisite integral. We then present Receiver Operating Characteristic (ROC) curves for various network parameterizations that show, for any rate of true positives, that the rate of false positives for the likelihood ratio detector is no higher than that of a simple anomaly detector and is often lower. Finally, we demonstrate the superiority of the proposed likelihood ratio detector when the network topologies and parameterizations are extracted from real-world networks.

  1. Iterative Usage of Fixed and Random Effect Models for Powerful and Efficient Genome-Wide Association Studies

    PubMed Central

    Liu, Xiaolei; Huang, Meng; Fan, Bin; Buckler, Edward S.; Zhang, Zhiwu

    2016-01-01

    False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises true positives. The modified MLM method, Multiple Loci Linear Mixed Model (MLMM), incorporates multiple markers simultaneously as covariates in a stepwise MLM to partially remove the confounding between testing markers and kinship. To completely eliminate the confounding, we divided MLMM into two parts, a Fixed Effect Model (FEM) and a Random Effect Model (REM), and used them iteratively. FEM contains the testing markers, one at a time, plus multiple associated markers as covariates to control false positives. To avoid the over-fitting problem in FEM, the associated markers are estimated in REM by using them to define kinship. The P values of the testing markers and the associated markers are unified at each iteration. We named the new method Fixed and random model Circulating Probability Unification (FarmCPU). Both real and simulated data analyses demonstrated that FarmCPU improves statistical power compared to current methods. Additional benefits include computing time that is linear in both the number of individuals and the number of markers; a dataset with half a million individuals and half a million markers can now be analyzed within three days. PMID:26828793

  2. A likelihood ratio anomaly detector for identifying within-perimeter computer network attacks

    DOE PAGES

    Grana, Justin; Wolpert, David; Neil, Joshua; ...

    2016-03-11

    The rapid detection of attackers within firewalls of enterprise computer networks is of paramount importance. Anomaly detectors address this problem by quantifying deviations from baseline statistical models of normal network behavior and signaling an intrusion when the observed data deviates significantly from the baseline model. However, many anomaly detectors do not take into account plausible attacker behavior; as a result, they are prone to a large number of false positives due to unusual but benign activity. Our paper first introduces a stochastic model of attacker behavior which is motivated by real-world attacker traversal. Then, we develop a likelihood ratio detector that compares the probability of observed network behavior under normal conditions against the case when an attacker has possibly compromised a subset of hosts within the network. Since the likelihood ratio detector requires integrating over the time each host becomes compromised, we illustrate how to use Monte Carlo methods to compute the requisite integral. We then present Receiver Operating Characteristic (ROC) curves for various network parameterizations that show, for any rate of true positives, that the rate of false positives for the likelihood ratio detector is no higher than that of a simple anomaly detector and is often lower. Finally, we demonstrate the superiority of the proposed likelihood ratio detector when the network topologies and parameterizations are extracted from real-world networks.
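
    A toy model can illustrate why averaging the likelihood over plausible attacker configurations helps: below, per-host event counts are Poisson, an attack elevates the rate on one unknown host, and the likelihood ratio detector averages the single-host likelihood ratios, while a naive detector simply thresholds the maximum count. This stand-in model is not the paper's traversal model; all rates are assumed.

    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(3)
    n_hosts, lam0, lam1, n_trials = 20, 5.0, 9.0, 5000

    def lr_statistic(x):
        # Mixture likelihood ratio: average over which single host is compromised.
        ratio = poisson.pmf(x, lam1) / poisson.pmf(x, lam0)
        return ratio.mean()

    def simulate(attack):
        x = rng.poisson(lam0, n_hosts)
        if attack:
            x[rng.integers(n_hosts)] = rng.poisson(lam1)
        return lr_statistic(x), x.max()

    normal = np.array([simulate(False) for _ in range(n_trials)])
    attacked = np.array([simulate(True) for _ in range(n_trials)])

    for j, name in enumerate(("likelihood ratio", "max count")):
        thr = np.quantile(normal[:, j], 0.95)      # fix a 5% false positive rate
        tpr = (attacked[:, j] > thr).mean()
        print(f"{name}: true positive rate at 5% FPR = {tpr:.2f}")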

  3. [Which initial tests should be performed to evaluate meno-metrorrhagias? A comparison of hysterography, transvaginal sonohysterography and hysteroscopy].

    PubMed

    Descargues, G; Lemercier, E; David, C; Genevois, A; Lemoine, J P; Marpeau, L

    2001-02-01

    To evaluate the feasibility and value of hysterography, sonohysterography and hysteroscopy in the investigation of abnormal uterine bleeding. Method: a longitudinal blind study of thirty-eight patients consulting for abnormal uterine bleeding during pre- and post-menopause. All patients underwent a hysterography and transvaginal sonohysterography, in random order, followed by a hysteroscopy with histological sampling. The results were compared with the histopathological examination, which served as the reference diagnosis. Sensitivity, specificity and positive and negative predictive values (PPV, NPV) were calculated for each investigation, and agreement was assessed with the kappa coefficient. Hysterography offered a PPV of 83% and an NPV of 100%; its interpretation errors were associated with simple mucous hypertrophy read as hyperplasia, and its main limitation is contrast agent allergy. Sonohysterography had a PPV of 89% and an NPV of 100%; its false positives were due to the difficulty of distinguishing clots from polyps, and its limitation is the difficulty of cervical catheterization (13%). For hysteroscopy, the PPV was 81.5% and the NPV 75%; interpretation errors were associated with mucous hypertrophy and hyperplasia. The most useful first-line examination for abnormal uterine bleeding is transvaginal sonography with saline instillation. A complementary Doppler study would probably make it possible to limit false positives.

  4. False Position, Double False Position and Cramer's Rule

    ERIC Educational Resources Information Center

    Boman, Eugene

    2009-01-01

    We state and prove the methods of False Position (Regula Falsa) and Double False Position (Regula Duorum Falsorum). The history of both is traced from ancient Egypt and China through the work of Fibonacci, ending with a connection between Double False Position and Cramer's Rule.
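
    For a linear problem f(x) = ax + b, Double False Position returns the exact root from any two trial guesses x_1 and x_2; the link to Cramer's rule comes from solving the two interpolation equations for the coefficients a and b. A standard statement of the rule (assuming only that f is linear) is:

    \[
      x = \frac{x_1\, f(x_2) - x_2\, f(x_1)}{f(x_2) - f(x_1)}.
    \]

    For nonlinear f, iterating the same formula on a bracketing pair of guesses is the regula falsi root-finding method.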

  5. Accounting for geophysical information in geostatistical characterization of unexploded ordnance (UXO) sites.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saito, Hirotaka; Goovaerts, Pierre; McKenna, Sean Andrew

    2003-06-01

    Efficient and reliable unexploded ordnance (UXO) site characterization is needed for decisions regarding future land use. There are several types of data available at UXO sites, and geophysical signal maps are one of the most valuable sources of information. Incorporation of such information into site characterization requires a flexible and reliable methodology. Geostatistics allows one to account for exhaustive secondary information (i.e., known at every location within the field) in many different ways. Kriging and logistic regression were combined to map the probability of occurrence of at least one geophysical anomaly of interest, such as UXO, from a limited number of indicator data. Logistic regression is used to derive the trend from a geophysical signal map, and kriged residuals are added to the trend to estimate the probabilities of the presence of UXO at unsampled locations (simple kriging with varying local means, or SKlm). Each location is identified for further remedial action if the estimated probability is greater than a given threshold. The technique is illustrated using a hypothetical UXO site generated by a UXO simulator and a corresponding geophysical signal map. Indicator data are collected along two transects located within the site. Classification performances are then assessed by computing proportions of correct classification, false positives, false negatives, and Kappa statistics. Two common approaches, one of which does not take any secondary information into account (ordinary indicator kriging) and a variant of common cokriging (collocated cokriging), were used for comparison purposes. Results indicate that accounting for exhaustive secondary information improves the overall characterization of UXO sites if an appropriate methodology, SKlm in this case, is used.

  6. Bayesian adaptive survey protocols for resource management

    USGS Publications Warehouse

    Halstead, Brian J.; Wylie, Glenn D.; Coates, Peter S.; Casazza, Michael L.

    2011-01-01

    Transparency in resource management decisions requires a proper accounting of uncertainty at multiple stages of the decision-making process. As information becomes available, periodic review and updating of resource management protocols reduces uncertainty and improves management decisions. One of the most basic steps to mitigating anthropogenic effects on populations is determining if a population of a species occurs in an area that will be affected by human activity. Species are rarely detected with certainty, however, and falsely declaring a species absent can cause improper conservation decisions or even extirpation of populations. We propose a method to design survey protocols for imperfectly detected species that accounts for multiple sources of uncertainty in the detection process, is capable of quantitatively incorporating expert opinion into the decision-making process, allows periodic updates to the protocol, and permits resource managers to weigh the severity of consequences if the species is falsely declared absent. We developed our method using the giant gartersnake (Thamnophis gigas), a threatened species precinctive to the Central Valley of California, as a case study. Survey date was negatively related to the probability of detecting the giant gartersnake, and water temperature was positively related to the probability of detecting the giant gartersnake at a sampled location. Reporting sampling effort, timing and duration of surveys, and water temperatures would allow resource managers to evaluate the probability that the giant gartersnake occurs at sampled sites where it is not detected. This information would also allow periodic updates and quantitative evaluation of changes to the giant gartersnake survey protocol. Because it naturally allows multiple sources of information and is predicated upon the idea of updating information, Bayesian analysis is well-suited to solving the problem of developing efficient sampling protocols for species of conservation concern.
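
    In its simplest form, the survey-design problem here reduces to Bayes' rule: given a prior occupancy probability psi and a per-survey detection probability p, the posterior probability that the species is present despite n non-detections shrinks with n. A minimal sketch with hypothetical values (not the giant gartersnake estimates):

    def prob_present_given_no_detections(psi, p, n):
        # P(present | n non-detections) via Bayes' rule with miss prob (1-p)^n
        miss_all = (1 - p) ** n
        return psi * miss_all / (psi * miss_all + (1 - psi))

    psi, p = 0.5, 0.3
    for n in (1, 3, 5, 10):
        print(f"n={n:2d} surveys: P(present | never detected) = "
              f"{prob_present_given_no_detections(psi, p, n):.3f}")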

  7. Optimal Sensor Location Design for Reliable Fault Detection in Presence of False Alarms

    PubMed Central

    Yang, Fan; Xiao, Deyun; Shah, Sirish L.

    2009-01-01

    To improve fault detection reliability, sensor location should be designed according to an optimization criterion with constraints imposed by issues of detectability and identifiability. Reliability requires minimizing the probabilities of missed detections and of false alarms due to random factors in sensor readings, which depend not only on the sensor readings themselves but also on fault propagation. This paper introduces reliability criteria based on the missed/false alarm probability of each sensor and on the system topology or connectivity derived from the directed graph. The algorithm for the optimization problem is presented as a heuristic procedure. Finally, the proposed method is illustrated on a boiler system. PMID:22291524

  8. Detection of nuclear resonance signals: modification of the receiver operating characteristics using feedback.

    PubMed

    Blauch, A J; Schiano, J L; Ginsberg, M D

    2000-06-01

    The performance of a nuclear resonance detection system can be quantified using binary detection theory. Within this framework, signal averaging increases the probability of a correct detection and decreases the probability of a false alarm by reducing the variance of the noise in the average signal. In conjunction with signal averaging, we propose another method based on feedback control concepts that further improves detection performance. By maximizing the nuclear resonance signal amplitude, feedback raises the probability of correct detection. Furthermore, information generated by the feedback algorithm can be used to reduce the probability of false alarm. We discuss the advantages afforded by feedback that cannot be obtained using signal averaging. As an example, we show how this method is applicable to the detection of explosives using nuclear quadrupole resonance. Copyright 2000 Academic Press.
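
    The effect of signal averaging described here can be sketched under a Gaussian noise model: averaging N acquisitions shrinks the noise standard deviation by a factor of sqrt(N), which simultaneously raises the probability of correct detection and lowers the probability of false alarm at a fixed threshold. Amplitude, noise, and threshold values below are assumed for illustration:

    from math import sqrt
    from scipy.stats import norm

    amplitude, sigma, threshold = 1.0, 1.0, 0.8
    for n_avg in (1, 4, 16, 64):
        s = sigma / sqrt(n_avg)                          # noise std after averaging
        p_detect = norm.sf(threshold, loc=amplitude, scale=s)
        p_false_alarm = norm.sf(threshold, loc=0.0, scale=s)
        print(f"N={n_avg:3d}: Pd={p_detect:.3f}, Pfa={p_false_alarm:.2e}")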

  9. Probability of the Physical Association of 104 Blended Companions to Kepler Objects of Interest Using Visible and Near-infrared Adaptive Optics Photometry

    NASA Astrophysics Data System (ADS)

    Atkinson, Dani; Baranec, Christoph; Ziegler, Carl; Law, Nicholas; Riddle, Reed; Morton, Tim

    2017-01-01

    We determine probabilities of physical association for stars in blended Kepler Objects of Interest (KOIs), and find that 14.5% (+3.8%/−3.4%) of companions within ∼4″ are consistent with being physically unassociated with their primary. This produces a better understanding of potential false positives in the Kepler catalog and will guide models of planet formation in binary systems. Physical association is determined through two methods of calculating multi-band photometric parallax using visible and near-infrared adaptive optics observations of 84 KOI systems with 104 contaminating companions within ∼4″. We find no evidence that KOI companions with separations of less than 1″ are more likely to be physically associated than KOI companions generally. We also reinterpret transit depths for 94 planet candidates, and calculate that 2.6% ± 0.4% of transits have R > 15 R⊕, which is consistent with prior modeling work.

  10. Systems Approach to Defeating Maritime Improvised Explosive Devices in U.S. Ports

    DTIC Science & Technology

    2008-12-01

    Excerpt (front matter only; the full abstract is not included in this record). Abbreviations defined include Pfi (probability of false identification), PHPK (probability of hit/probability of kill), PMA (post-mission analysis), and PNNL (Pacific Northwest National Laboratory). Naval Warfare Publication 27-2 (Rev. B), Section 1.8.4.1 (unclassified), is cited for detection analysis. Naval Postgraduate School, Monterey, California. Approved for public release; distribution is unlimited.

  11. Cushing's Syndrome: Where and How to Find It.

    PubMed

    Debono, Miguel; Newell-Price, John D

    2016-01-01

    The diagnosis of Cushing's syndrome is challenging to endocrinologists as patients often present with an insidious history, together with subtle external clinical features. Moreover, complications of endogenous hypercortisolism, such as visceral obesity, diabetes, hypertension and osteoporosis, are conditions commonly found in the population, and discerning whether these are truly a consequence of hypercortisolism is not straightforward. To avoid misdiagnosis, a careful investigative approach is essential. The investigation of Cushing's syndrome is a three-step process. Firstly, after exclusion of exogenous glucocorticoid use, the decision to initiate investigations should be based on whether there is a clinical index of suspicion of the disease. Specific signs of endogenous hypercortisolism raise the a priori probability of a truly positive test. Secondly, if the probability of hypercortisolism is high, one should carry out specific tests as indicated by Endocrine Society guidelines. Populations with non-distinguishing features of Cushing's syndrome should not be screened routinely as biochemical tests have a high false-positive rate if used indiscriminately. Thirdly, once hypercortisolism is confirmed, one should move to establish the cause. This usually entails distinguishing between adrenal or pituitary-related causes and the remoter possibility of the ectopic adrenocorticotropic hormone syndrome. It is crucial that the presence of Cushing's syndrome is established before any attempt at differential diagnosis. © 2016 S. Karger AG, Basel.

  12. A paperless autoimmunity laboratory: myth or reality?

    PubMed

    Lutteri, Laurence; Dierge, Laurine; Pesser, Martine; Watrin, Pascale; Cavalier, Etienne

    2016-08-01

    Testing for antinuclear antibodies is the most frequently prescribed analysis for the diagnosis of rheumatic diseases. Indirect immunofluorescence remains the gold standard method for their detection despite the increasing use of alternative techniques. In order to standardize manual microscopy reading, automated acquisition and interpretation systems have emerged. This publication presents our method of interpretation and characterization of antinuclear antibodies, based on a cascade of analyses, and shares our everyday experience with the G Sight system from Menarini. Positive/negative discrimination on HEp-2000 cells is correct in 85% of cases. Most false-negative results involve nonspecific or low-titer patterns, but a few SSA speckled patterns of low titer showed a probability index below 8. Regarding pattern recognition, some types and mixed patterns are not properly recognized. As for the probability index, which some studies have correlated with the final titer, the weak fluorescence of certain patterns and the random presence of artifacts that distort the index have led us not to pursue it in our daily practice. In conclusion, automated reading systems facilitate the reporting of results and the traceability of patterns but still require the expertise of a laboratory technologist for positive/negative discrimination and for pattern recognition.

  13. The impact of joint responses of devices in an airport security system.

    PubMed

    Nie, Xiaofeng; Batta, Rajan; Drury, Colin G; Lin, Li

    2009-02-01

    In this article, we consider a model for an airport security system in which the declaration of a threat is based on the joint responses of inspection devices. This is in contrast to the typical system in which each check station independently declares a passenger as having a threat or not having a threat. In our framework the declaration of threat/no-threat is based upon the passenger scores at the check stations he/she goes through. To do this we use concepts from classification theory in the field of multivariate statistical analysis and focus on the main objective of minimizing the expected cost of misclassification. The corresponding correct classification and misclassification probabilities can be obtained by using a simulation-based method. After computing the overall false alarm and false clear probabilities, we compare our joint response system with two other independently operated systems. A model that groups passengers in a manner that minimizes the false alarm probability while maintaining the false clear probability within specifications set by a security authority is considered. We also analyze the staffing needs at each check station for such an inspection scheme. An illustrative example is provided along with sensitivity analysis on key model parameters. A discussion is provided on some implementation issues, on the various assumptions made in the analysis, and on potential drawbacks of the approach.

  14. Levels of Office Blood Pressure and Their Operating Characteristics for Detecting Masked Hypertension Based on Ambulatory Blood Pressure Monitoring

    PubMed Central

    Lin, Feng-Chang; Tuttle, Laura A.; Shimbo, Daichi; Diaz, Keith M.; Olsson, Emily; Stankevitz, Kristin; Hinderliter, Alan L.

    2015-01-01

    BACKGROUND Masked hypertension (MH)—nonelevated office blood pressure (BP) with elevated out-of-office BP average—conveys cardiovascular risk similar to or approaching sustained hypertension, making its detection of potential clinical importance. However, it may not be feasible or cost-effective to perform ambulatory BP monitoring (ABPM) on all patients with a nonelevated office BP. There likely exists a level of office BP below which ABPM is not warranted because the probability of MH is low. METHODS We analyzed data from 294 adults aged ≥30 years not on BP-lowering medication with office BP <140/90 mm Hg, all of whom underwent 24-hour ABPM. We calculated sensitivity, false-positive rate, and likelihood ratios (LRs) for the range of office BP cutoffs from 110 to 138 mm Hg systolic and from 68 to 88 mm Hg diastolic for detecting MH. RESULTS The systolic BP cutoff with the highest +LR for detecting MH (1.8) was 120 mm Hg, and the diastolic cutoff with the highest +LR (2.4) was 82 mm Hg. However, the systolic level of 120 mm Hg had a false-positive rate of 42%, and the diastolic level of 82 mm Hg had a sensitivity of only 39%. CONCLUSIONS The cutoff of office BP with the best overall operating characteristics for diagnosing MH is approximately 120/82 mm Hg. However, this cutoff may have an unacceptably high false-positive rate. Clinical risk tools to identify patients with nonelevated office BP for whom ABPM should be considered will likely need to include factors in addition to office BP. PMID:24898379
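
    The likelihood ratios in this abstract follow directly from the definitions: the positive likelihood ratio at a cutoff is the sensitivity divided by the false-positive rate at that cutoff. A small sketch with hypothetical counts (not the study's data):

    def positive_lr(tp, fn, fp, tn):
        sensitivity = tp / (tp + fn)
        false_positive_rate = fp / (fp + tn)
        return sensitivity / false_positive_rate

    # e.g. a cutoff detecting 45 of 60 masked-hypertension cases while
    # flagging 98 of 234 people without MH:
    print(round(positive_lr(tp=45, fn=15, fp=98, tn=136), 2))  # LR+ ~ 1.79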

  15. Towards personalized screening: cumulative risk of breast cancer screening outcomes in women with and without a first-degree relative with a history of breast cancer

    PubMed Central

    Ripping, T.M.; Hubbard, R.A.; Otten, J.D.M.; den Heeten, G.J.; Verbeek, A.L.M.; Broeders, M.J.M.

    2016-01-01

    Several reviews have estimated the balance of benefits and harms of mammographic screening in the general population. The balance may, however, differ between individuals with and without family history. Therefore, our aim is to assess the cumulative risk of screening outcomes; screen-detected breast cancer, interval cancer, and false-positive results, among screened women aged 50–75 and 40–75, with and without a first-degree relative with a history of breast cancer at the start of screening. Data on screening attendance, recall and breast cancer detection were collected for each woman living in Nijmegen (the Netherlands) since 1975. We used a discrete time survival model to calculate the cumulative probability of each major screening outcome over 19 screening rounds. Women with a family history of breast cancer had a higher risk of all screening outcomes. For women screened from age 50–75, the cumulative risk of screen-detected breast cancer, interval cancer and false-positive results were 9.0%, 4.4% and 11.1% for women with a family history and 6.3%, 2.7% and 7.3% for women without a family history, respectively. The results for women 40–75 followed the same pattern for women screened 50–75 for cancer outcomes, but were almost doubled for false-positive results. To conclude, women with a first-degree relative with a history of breast cancer are more likely to experience benefits and harms of screening than women without a family history. To complete the balance and provide risk-based screening recommendations, the breast cancer mortality reduction and overdiagnosis should be estimated for family history subgroups. PMID:26537645

  16. Towards personalized screening: Cumulative risk of breast cancer screening outcomes in women with and without a first-degree relative with a history of breast cancer.

    PubMed

    Ripping, Theodora Maria; Hubbard, Rebecca A; Otten, Johannes D M; den Heeten, Gerard J; Verbeek, André L M; Broeders, Mireille J M

    2016-04-01

    Several reviews have estimated the balance of benefits and harms of mammographic screening in the general population. The balance may, however, differ between individuals with and without family history. Therefore, our aim is to assess the cumulative risk of screening outcomes; screen-detected breast cancer, interval cancer, and false-positive results, among screened women aged 50-75 and 40-75, with and without a first-degree relative with a history of breast cancer at the start of screening. Data on screening attendance, recall and breast cancer detection were collected for each woman living in Nijmegen (The Netherlands) since 1975. We used a discrete time survival model to calculate the cumulative probability of each major screening outcome over 19 screening rounds. Women with a family history of breast cancer had a higher risk of all screening outcomes. For women screened from age 50-75, the cumulative risk of screen-detected breast cancer, interval cancer and false-positive results were 9.0, 4.4 and 11.1% for women with a family history and 6.3, 2.7 and 7.3% for women without a family history, respectively. The results for women 40-75 followed the same pattern for women screened 50-75 for cancer outcomes, but were almost doubled for false-positive results. To conclude, women with a first-degree relative with a history of breast cancer are more likely to experience benefits and harms of screening than women without a family history. To complete the balance and provide risk-based screening recommendations, the breast cancer mortality reduction and overdiagnosis should be estimated for family history subgroups. © 2015 UICC.
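
    Cumulative outcome probabilities of this kind combine per-round probabilities across screening rounds. In the simplest case of a constant per-round probability p over k rounds, the cumulative risk is 1 - (1 - p)^k; the study's discrete time survival model allows round-specific probabilities, so this constant-rate version is only a sketch with an assumed rate:

    def cumulative_risk(p_per_round, k):
        # Probability of at least one event across k independent rounds.
        return 1 - (1 - p_per_round) ** k

    print(f"{cumulative_risk(0.006, 19):.3f}")  # ~0.108 over 19 rounds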

  17. Great SEP events and space weather: 1. Experience of automatically searching for event beginnings; probabilities of false and missed events

    NASA Astrophysics Data System (ADS)

    Applbaum, David; Dorman, Lev; Pustil'Nik, Lev; Sternlieb, Abraham; Zagnetko, Alexander; Zukerman, Igor

    It is well known that during great SEP events, fluxes of energetic particles can be so large that the memory of computers and other electronics in space may be destroyed, and satellites and spacecraft may cease to function. According to the NOAA Space Weather Prediction Center, the following scales constitute dangerous solar radiation storms: S5, extreme (flux of particles with energy ∼10 MeV above 10^5 pfu); S4, severe (flux above 10^4); and S3, strong (flux above 10^3). In these periods it may be necessary to switch off some of the electronics for a few hours. High-energy particles (those with a few GeV/nucleon and higher) arrive at Earth from the Sun much earlier (about 20 minutes after they accelerate and escape into the solar wind) than the main bulk of lower-energy particles (about 60 minutes later), so ground-based observations can provide an early alert. Here we describe the principles and experience of the automatic operation of the "SEP-Search" program. The positive result, showing the exact beginning of an SEP event in the Emilio Segrè Observatory data (cutoff rigidity 10.8 GV), is now determined automatically by a simultaneous increase of 2.5 standard deviations in two sections of the neutron monitor. The "SEP-Search" program next uses 1-min data to check whether or not the observed increase reflects the beginning of a real SEP event; if so, "SEP-Research" automatically starts to work online. We also determine the probabilities of false and missed alerts.
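
    The false-alert side of such an algorithm is straightforward to bound under a Gaussian model of count fluctuations: requiring a simultaneous increase above 2.5 standard deviations in two independent sections multiplies two small tail probabilities. A sketch of the arithmetic (not the operational SEP-Search code):

    from scipy.stats import norm

    p_one_section = norm.sf(2.5)       # ~6.2e-3 per one-minute interval
    p_both = p_one_section ** 2        # ~3.9e-5 for two sections at once
    expected_false_alerts_per_day = p_both * 60 * 24
    print(p_both, expected_false_alerts_per_day)  # ~0.06 false alerts/day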

  18. Analysis of binary responses with outcome-specific misclassification probability in genome-wide association studies.

    PubMed

    Rekaya, Romdhane; Smith, Shannon; Hay, El Hamidi; Farhat, Nourhene; Aggrey, Samuel E

    2016-01-01

    Errors in the binary status of some response traits are frequent in human, animal, and plant applications. These error rates tend to differ between cases and controls because diagnostic and screening tests have different sensitivity and specificity. This increases the inaccuracies of classifying individuals into correct groups, giving rise to both false-positive and false-negative cases. The analysis of these noisy binary responses due to misclassification will undoubtedly reduce the statistical power of genome-wide association studies (GWAS). A threshold model that accommodates varying diagnostic errors between cases and controls was investigated. A simulation study was carried out where several binary data sets (case-control) were generated with varying effects for the most influential single nucleotide polymorphisms (SNPs) and different diagnostic error rates for cases and controls. Each simulated data set consisted of 2000 individuals. Ignoring misclassification resulted in biased estimates of true influential SNP effects and inflated estimates for true noninfluential markers. A substantial reduction in bias and increase in accuracy ranging from 12% to 32% was observed when the misclassification procedure was invoked. In fact, the majority of influential SNPs that were not identified using the noisy data were captured using the proposed method. Additionally, truly misclassified binary records were identified with high probability using the proposed method. The superiority of the proposed method was maintained across different simulation parameters (misclassification rates and odds ratios), attesting to its robustness.

  19. All That Glisters Is Not Gold: Sampling-Process Uncertainty in Disease-Vector Surveys with False-Negative and False-Positive Detections

    PubMed Central

    Abad-Franch, Fernando; Valença-Barbosa, Carolina; Sarquis, Otília; Lima, Marli M.

    2014-01-01

    Background Vector-borne diseases are major public health concerns worldwide. For many of them, vector control is still key to primary prevention, with control actions planned and evaluated using vector occurrence records. Yet vectors can be difficult to detect, and vector occurrence indices will be biased whenever spurious detection/non-detection records arise during surveys. Here, we investigate the process of Chagas disease vector detection, assessing the performance of the surveillance method used in most control programs – active triatomine-bug searches by trained health agents. Methodology/Principal Findings Control agents conducted triplicate vector searches in 414 man-made ecotopes of two rural localities. Ecotope-specific ‘detection histories’ (vectors or their traces detected or not in each individual search) were analyzed using ordinary methods that disregard detection failures and multiple detection-state site-occupancy models that accommodate false-negative and false-positive detections. Mean (±SE) vector-search sensitivity was ∼0.283±0.057. Vector-detection odds increased as bug colonies grew denser, and were lower in houses than in most peridomestic structures, particularly woodpiles. False-positive detections (non-vector fecal streaks misidentified as signs of vector presence) occurred with probability ∼0.011±0.008. The model-averaged estimate of infestation (44.5±6.4%) was ∼2.4–3.9 times higher than naïve indices computed assuming perfect detection after single vector searches (11.4–18.8%); about 106–137 infestation foci went undetected during such standard searches. Conclusions/Significance We illustrate a relatively straightforward approach to addressing vector detection uncertainty under realistic field survey conditions. Standard vector searches had low sensitivity except in certain singular circumstances. Our findings suggest that many infestation foci may go undetected during routine surveys, especially when vector density is low. Undetected foci can cause control failures and induce bias in entomological indices; this may confound disease risk assessment and mislead program managers into flawed decision making. By helping correct bias in naïve indices, the approach we illustrate has potential to critically strengthen vector-borne disease control-surveillance systems. PMID:25233352

  20. Kepler Planet Reliability Metrics: Astrophysical Positional Probabilities for Data Release 25

    NASA Technical Reports Server (NTRS)

    Bryson, Stephen T.; Morton, Timothy D.

    2017-01-01

    This document is very similar to KSCI-19092-003, Planet Reliability Metrics: Astrophysical Positional Probabilities, which describes the previous release of the astrophysical positional probabilities for Data Release 24. The important changes for Data Release 25 are: 1. The computation of the astrophysical positional probabilities uses the Data Release 25 processed pixel data for all Kepler Objects of Interest. 2. Computed probabilities now have associated uncertainties, whose computation is described in §4.1.3. 3. The scene modeling described in §4.1.2 uses background stars detected via ground-based high-resolution imaging, described in §5.1, that are not in the Kepler Input Catalog or UKIRT catalog. These newly detected stars are presented in Appendix B. Otherwise the text describing the algorithms and examples is largely unchanged from KSCI-19092-003.

  1. No Conclusive Evidence for Transits of Proxima b in MOST Photometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kipping, David M.; Chen, Jingjing; Sandford, Emily

    The analysis of Proxima Centauri’s radial velocities recently led Anglada-Escudé et al. to claim the presence of a low-mass planet orbiting the Sun’s nearest star once every 11.2 days. Although the a priori probability that Proxima b transits its parent star is just 1.5%, the potential impact of such a discovery would be considerable. Independent of recent radial velocity efforts, we observed Proxima Centauri for 12.5 days in 2014 and 31 days in 2015 with the Microvariability and Oscillations of STars (MOST) space telescope. We report here that we cannot make a compelling case that Proxima b transits in our precise photometric time series. Imposing an informative prior on the period and phase, we do detect a candidate signal with the expected depth. However, perturbing the phase prior across 100 evenly spaced intervals reveals one strong false positive and one weaker instance. We estimate a false-positive rate of at least a few percent and a much higher false-negative rate of 20%–40%, likely caused by the very high flare rate of Proxima Centauri. Comparing our candidate signal to HATSouth ground-based photometry reveals that the signal is somewhat, but not conclusively, disfavored (1σ–2σ), leading us to argue that the signal is most likely spurious. We expect that infrared photometric follow-up could more conclusively test the existence of this candidate signal, owing to the suppression of flare activity and the impressive infrared brightness of the parent star.

  2. No Conclusive Evidence for Transits of Proxima b in MOST Photometry

    NASA Astrophysics Data System (ADS)

    Kipping, David M.; Cameron, Chris; Hartman, Joel D.; Davenport, James R. A.; Matthews, Jaymie M.; Sasselov, Dimitar; Rowe, Jason; Siverd, Robert J.; Chen, Jingjing; Sandford, Emily; Bakos, Gáspár Á.; Jordán, Andrés; Bayliss, Daniel; Henning, Thomas; Mancini, Luigi; Penev, Kaloyan; Csubry, Zoltan; Bhatti, Waqas; Da Silva Bento, Joao; Guenther, David B.; Kuschnig, Rainer; Moffat, Anthony F. J.; Rucinski, Slavek M.; Weiss, Werner W.

    2017-03-01

    The analysis of Proxima Centauri’s radial velocities recently led Anglada-Escudé et al. to claim the presence of a low-mass planet orbiting the Sun’s nearest star once every 11.2 days. Although the a priori probability that Proxima b transits its parent star is just 1.5%, the potential impact of such a discovery would be considerable. Independent of recent radial velocity efforts, we observed Proxima Centauri for 12.5 days in 2014 and 31 days in 2015 with the Microvariability and Oscillations of STars (MOST) space telescope. We report here that we cannot make a compelling case that Proxima b transits in our precise photometric time series. Imposing an informative prior on the period and phase, we do detect a candidate signal with the expected depth. However, perturbing the phase prior across 100 evenly spaced intervals reveals one strong false positive and one weaker instance. We estimate a false-positive rate of at least a few percent and a much higher false-negative rate of 20%-40%, likely caused by the very high flare rate of Proxima Centauri. Comparing our candidate signal to HATSouth ground-based photometry reveals that the signal is somewhat, but not conclusively, disfavored (1σ-2σ), leading us to argue that the signal is most likely spurious. We expect that infrared photometric follow-up could more conclusively test the existence of this candidate signal, owing to the suppression of flare activity and the impressive infrared brightness of the parent star.

  3. Substantial underreporting of anastomotic leakage after anterior resection for rectal cancer in the Swedish Colorectal Cancer Registry.

    PubMed

    Rutegård, Martin; Kverneng Hultberg, Daniel; Angenete, Eva; Lydrup, Marie-Louise

    2017-12-01

    The causes and effects of anastomotic leakage after anterior resection are difficult to study in small samples and have thus been evaluated using large population-based national registries. To assess the accuracy of such research, registries should be validated continuously. Patients who underwent anterior resection for rectal cancer during 2007-2013 in 15 different hospitals in three healthcare regions in Sweden were included in the study. Registry data and information from patient records were retrieved. Registered anastomotic leakage within 30 postoperative days was evaluated, using all available registry data and using only the main variable anastomotic insufficiency. With the consensus definition of anastomotic leakage developed by the International Study Group on Rectal Cancer as reference, validity measures were calculated. Some 1507 patients were included in the study. The negative and positive predictive values for registered anastomotic leakage were 96 and 88%, respectively, while the κ-value amounted to 0.76. The false-negative rate was 29%, whereas the false-positive rate reached 1.3% (the vast majority consisting of actual leaks, but occurring after postoperative day 30). Using the main variable anastomotic insufficiency only, the false-negative rate rose to 41%. There is considerable underreporting of anastomotic leakage after anterior resection for rectal cancer in the Swedish Colorectal Cancer Registry. It is probable that this causes an underestimation of the true effects of leakage on patient outcomes, and further quality control is needed.
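
    Validity measures of this kind all derive from the 2×2 table of registry status against the reference standard. The sketch below uses hypothetical cell counts chosen to be roughly consistent with the reported summary statistics (n = 1507, PPV 88%, NPV 96%, false-negative rate 29%, κ = 0.76); the study's actual counts are not given in the abstract.

    def validity(tp, fp, fn, tn):
        n = tp + fp + fn + tn
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        false_negative_rate = fn / (fn + tp)
        # Cohen's kappa: observed agreement vs chance agreement.
        p_obs = (tp + tn) / n
        p_chance = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
        kappa = (p_obs - p_chance) / (1 - p_chance)
        return ppv, npv, false_negative_rate, kappa

    print(validity(tp=120, fp=16, fn=50, tn=1321))  # ~ (0.88, 0.96, 0.29, 0.76)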

  4. Influencing clinicians and healthcare managers: can ROC be more persuasive?

    NASA Astrophysics Data System (ADS)

    Taylor-Phillips, S.; Wallis, M. G.; Duncan, A.; Gale, A. G.

    2010-02-01

    Receiver Operating Characteristic analysis provides a reliable and cost-effective performance measurement tool, without requiring full clinical trials. However, when ROC analysis shows that performance is statistically superior in one condition than in another, it is difficult to relate this result to effects in practice, or even to determine whether it is clinically significant. In this paper we present two concurrent analyses, using ROC methods alongside single-threshold recall rate data, and suggest that reporting both provides complementary information. Four mammographers read 160 difficult cases (41% malignant) twice, with and without prior mammograms. Lesion location and probability of malignancy were reported for each case and analyzed using JAFROC. Concurrently, each participant chose recall or return to screen for each case. JAFROC analysis showed that the presence of prior mammograms improved performance (p<.05). Single-threshold data showed a trend towards a 26% increase in the number of false positive recalls without prior mammograms (p=.056). If this trend were present throughout the NHS Breast Screening Programme, then discarding prior mammograms would correspond to an increase in recall rate from 4.6% to 5.3%, with 12,414 extra women recalled annually for assessment. Whilst ROC methods account for all possible recall thresholds and have higher power, providing a single-threshold example of false positive, false negative, and recall rates when reporting results could be more influential for clinicians. This paper discusses whether this is a useful additional way of presenting data, or whether it is misleading and inaccurate.

  5. Analysis of femtosecond pump-probe photoelectron-photoion coincidence measurements applying Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Rumetshofer, M.; Heim, P.; Thaler, B.; Ernst, W. E.; Koch, M.; von der Linden, W.

    2018-06-01

    Ultrafast dynamical processes in photoexcited molecules can be observed with pump-probe measurements, in which information about the dynamics is obtained from the transient signal associated with the excited state. Background signals provoked by pump and/or probe pulses alone often obscure these excited-state signals. Simple subtraction of pump-only and/or probe-only measurements from the pump-probe measurement, as commonly applied, results in a degradation of the signal-to-noise ratio and, in the case of coincidence detection, the danger of overrated background subtraction. Coincidence measurements additionally suffer from false coincidences, requiring long data-acquisition times to keep erroneous signals at an acceptable level. Here we present a probabilistic approach based on Bayesian probability theory that overcomes these problems. For a pump-probe experiment with photoelectron-photoion coincidence detection, we reconstruct the interesting excited-state spectrum from pump-probe and pump-only measurements. This approach allows us to treat background and false coincidences consistently and on the same footing. We demonstrate that the Bayesian formalism has the following advantages over simple signal subtraction: (i) the signal-to-noise ratio is significantly increased, (ii) the pump-only contribution is not overestimated, (iii) false coincidences are excluded, (iv) prior knowledge, such as positivity, is consistently incorporated, (v) confidence intervals are provided for the reconstructed spectrum, and (vi) it is applicable to any experimental situation and noise statistics. Most importantly, by accounting for false coincidences, the Bayesian approach allows us to run experiments at higher ionization rates, resulting in a significant reduction of data acquisition times. The probabilistic approach is thoroughly scrutinized by challenging mock data. The application to pump-probe coincidence measurements on acetone molecules enables quantitative interpretations about the molecular decay dynamics and fragmentation behavior. All results underline the superiority of a consistent probabilistic approach over ad hoc estimations.
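
    The advantage over simple subtraction can be seen in a stripped-down counting model: instead of reporting the difference of the pump-probe and pump-only counts, place a posterior on the background rate from the pump-only run and marginalize it out, with positivity of the signal built in. The counts and the flat/Gamma priors below are assumptions for illustration, not the paper's full coincidence model.

    import numpy as np
    from scipy.stats import poisson, gamma

    n_pump_probe, n_pump_only = 120, 90   # observed counts (assumed)

    s_grid = np.linspace(0.0, 100.0, 2001)   # positivity built in: s >= 0
    b_grid = np.linspace(0.1, 200.0, 2000)

    # Posterior over the background rate b from the pump-only measurement
    # (flat prior): a Gamma(n+1, 1) density.
    p_b = gamma.pdf(b_grid, a=n_pump_only + 1)
    p_b /= p_b.sum()

    # Likelihood of the pump-probe count for each (s, b), marginalized over b.
    like = poisson.pmf(n_pump_probe, s_grid[:, None] + b_grid[None, :])
    posterior_s = (like * p_b).sum(axis=1)
    posterior_s /= posterior_s.sum() * (s_grid[1] - s_grid[0])

    mean_s = np.trapz(s_grid * posterior_s, s_grid)
    print(f"posterior mean signal rate: {mean_s:.1f}")  # vs naive 120 - 90 = 30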

  6. Evaluation of usefulness of fine-needle aspiration cytology in the diagnosis of tumours of the accessory parotid gland: a preliminary analysis of a case series in Japan.

    PubMed

    Iguchi, Hiroyoshi; Wada, Tadashi; Matsushita, Naoki; Oishi, Masahiro; Teranishi, Yuichi; Yamane, Hideo

    2014-07-01

    The accuracy and sensitivity of fine-needle aspiration cytology (FNAC) in this analysis were not satisfactory, and the false-negative rate seemed to be higher than for parotid tumours. The possibility of low-grade malignancy should be considered in the surgical treatment of accessory parotid gland (APG) tumours, even if the preoperative results of FNAC suggest that the tumour is benign. Little is known about the usefulness of FNAC in the preoperative evaluation of APG tumours, probably due to the paucity of APG tumour cases. We examined the usefulness of FNAC in the detection of malignant APG tumours. We conducted a retrospective analysis of 3 cases from our hospital, along with 18 previously reported Japanese cases. We compared the preoperative FNAC results with postoperative histopathological diagnoses of APG tumours and evaluated the accuracy, sensitivity, specificity and false-negative rates of FNAC in detecting malignant APG tumours. There were four false-negative cases (19.0%), three of mucoepidermoid carcinomas and one of malignant lymphoma. One false-positive result was noted in the case of a myoepithelioma, which was cytologically diagnosed as suspected adenoid cystic carcinoma. The accuracy, sensitivity and specificity of FNAC in detecting malignant tumours were 76.2%, 60.0% and 90.9%, respectively.

  7. Ticks and tick-borne diseases in Oklahoma.

    PubMed

    Moody, E K; Barker, R W; White, J L; Crutcher, J M

    1998-11-01

    Tick-borne diseases are common in Oklahoma, especially the eastern part of the state where tick prevalence is highest. Three species of hard ticks are present in Oklahoma that are known vectors of human disease--the American dog tick (Rocky Mountain spotted fever; RMSF), the lone star tick (ehrlichiosis) and the black-legged tick (Lyme disease). Oklahoma consistently ranks among the top states in numbers of reported RMSF cases, and ehrlichiosis may be as prevalent as RMSF. Although Lyme disease is frequently reported in Oklahoma, over-diagnosing of this disease due to false-positive test results is common; positive or equivocal screening tests should be confirmed by Western immunoblot. At present, it is unclear whether the disease seen here is Lyme disease or another Lyme-like disease. If true Lyme disease is present in the state, it is probably rare. Physicians should be aware of the most recent recommendations for diagnosis, therapy and prevention of tick-borne diseases.

  8. Microscopic or occult hematuria, when reflex testing is not good laboratory practice.

    PubMed

    Froom, Paul; Barak, Mira

    2010-01-01

    Consensus opinion suggests that hematuria found by dipstick and not confirmed on microscopic examination (<2 erythrocytes per high power field) signifies a false-positive reagent strip test result. Standard practice is to repeat the dipstick test several days later and if still positive to confirm by microscopic examination. If discordant results are obtained, experts recommend reflex testing for urinary myoglobin and hemoglobin concentrations. The question is whether or not this approach represents good laboratory practice. These recommendations are not evidence based. We conclude that the reference range for red blood cells on the reagent strip should be increased to 25×10^6 cells/L for young men, and 50×10^6 cells/L for the rest of the adult population, ranges consistent with flow cytometry reports. Confirmation reflex testing using tests that have inferior sensitivity, precision and probably accuracy is not recommended.

  9. Occupancy models for data with false positive and false negative errors and heterogeneity across sites and surveys

    Treesearch

    Paige F.B. Ferguson; Michael J. Conroy; Jeffrey Hepinstall-Cymerman; Nigel Yoccoz

    2015-01-01

    False positive detections, such as species misidentifications, occur in ecological data, although many models do not account for them. Consequently, these models are expected to generate biased inference. The main challenge in an analysis of data with false positives is to distinguish false positive and false negative...

  10. Feasibility of automated visual field examination in children between 5 and 8 years of age.

    PubMed Central

    Safran, A. B.; Laffi, G. L.; Bullinger, A.; Viviani, P.; de Weisse, C.; Désangles, D.; Tschopp, C.; Mermoud, C.

    1996-01-01

    AIMS--To investigate how young children develop the ability to undergo a visual field evaluation using regular automated perimetry. METHODS--The study included 42 normal girls aged 5, 6, 7, and 8 years. Twelve locations at 15 degrees eccentricity were tested in one eye, using an Octopus 2000R perimeter with a two-level strategy. False positive and false negative catch trials were presented. The examination was performed three times in succession. Before the examination procedure, a specially designed programme was conducted for progressive familiarisation. RESULTS--During the familiarisation procedure, it was found that all of the 5-year-old children, seven of the 6-year-old children, and three of the 7-year-old children were unable to follow immediately and correctly the instructions given during the familiarisation phase; these children took from 30 seconds to 3 minutes to comply with the examiner's requests. With the exception of one 5-year-old child, all tested subjects completed the planned procedure. The mean proportion of false negative answers in catch trials was 1.6%. The mean proportion of false positive answers was 12.2%. The quadratic dependency on age suggested by the averages was not significant (F(3,116) = 0.88; p = 0.45). Stimulus detection improved with age: the probability of perceiving a dim stimulus increased significantly (F(3,116) = 12.68; p < 0.0001). CONCLUSION--Children did remarkably well regarding both the duration of the examination and the reliability of the answers. A preliminary familiarisation phase with a specially designed adaptation programme was found to be mandatory for children aged 7 or under. To our knowledge, this is the first time that such an investigation has been performed. PMID:8759261

  11. Cost Analysis of Cot-Side Screening Methods for Neonatal Hypoglycaemia.

    PubMed

    Glasgow, Matthew J; Harding, Jane E; Edlin, Richard

    2018-06-12

    Babies at risk of neonatal hypoglycaemia are often screened using cot-side glucometers, but non-enzymatic glucometers are inaccurate, potentially resulting in over-treatment and under-treatment, and low values require laboratory confirmation. More accurate enzymatic glucometers are available but at apparently higher costs. Our objective was to compare the cost of screening for neonatal hypoglycaemia using point-of-care enzymatic and non-enzymatic glucometers. We used a decision tree to model costs, including consumables and staff time. Sensitivity analyses assessed the impact of staff time, staff costs, probability that low results are confirmed via laboratory testing, false-positive and false-negative rates of non-enzymatic glucometers, and the blood glucose concentration threshold. In the primary analysis, screening using an enzymatic glucometer cost NZD 86.94 (USD 63.47) per baby, while screening using a non-enzymatic glucometer cost NZD 97.08 (USD 70.87). Sensitivity analyses showed that using an enzymatic glucometer is cost saving with wide variations in staff time and costs, irrespective of the false-positive level of non-enzymatic glucometers, and where ≥78% of low values are laboratory confirmed. Where non-enzymatic glucometers may be less costly (e.g., false-negative rate exceeds 15%), instances of hypoglycaemia will be missed. Reducing the blood glucose concentration threshold to 1.94 mmol/L reduced the incidence of hypoglycaemia from 52% to 13%, and the cost of screening using a non-enzymatic glucometer to NZD 47.71 (USD 34.83). In view of their lower cost in most circumstances and greater accuracy, enzymatic glucometers should be routinely utilised for point-of-care screening for neonatal hypoglycaemia. © 2018 S. Karger AG, Basel.
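
    As an illustration of the decision-tree logic described above, a minimal Python sketch; every parameter value here is an invented placeholder, not an input from the study:

      # Illustrative expected-cost model for cot-side glucose screening.
      # All numbers below are made-up placeholders, not the study's inputs.
      def expected_cost_per_baby(strip_cost, n_tests, p_low, p_lab_confirm,
                                 lab_test_cost, p_false_positive, treatment_cost):
          screening = strip_cost * n_tests
          # Low readings are sent for laboratory confirmation with some probability.
          confirmation = p_low * p_lab_confirm * lab_test_cost * n_tests
          # False positives trigger unnecessary treatment.
          over_treatment = p_false_positive * treatment_cost
          return screening + confirmation + over_treatment

      enzymatic = expected_cost_per_baby(3.0, 6, 0.3, 0.8, 15.0, 0.01, 200.0)
      non_enzymatic = expected_cost_per_baby(1.0, 6, 0.3, 0.8, 15.0, 0.10, 200.0)
      print(f"enzymatic: {enzymatic:.2f}  non-enzymatic: {non_enzymatic:.2f}")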

  12. A two-stage cognitive theory of the positive symptoms of psychosis. Highlighting the role of lowered decision thresholds.

    PubMed

    Moritz, Steffen; Pfuhl, Gerit; Lüdtke, Thies; Menon, Mahesh; Balzan, Ryan P; Andreou, Christina

    2017-09-01

    We outline a two-stage heuristic account for the pathogenesis of the positive symptoms of psychosis. A narrative review on the empirical evidence of the liberal acceptance (LA) account of positive symptoms is presented. At the heart of our theory is the idea that psychosis is characterized by a lowered decision threshold, which results in the premature acceptance of hypotheses that a nonpsychotic individual would reject. Once the hypothesis is judged as valid, counterevidence is not sought anymore due to a bias against disconfirmatory evidence as well as confirmation biases, consolidating the false hypothesis. As a result of LA, confidence in errors is enhanced relative to controls. Subjective probabilities are initially low for hypotheses in individuals with delusions, and delusional ideas at stage 1 (belief formation) are often fragile. In the course of the second stage (belief maintenance), fleeting delusional ideas evolve into fixed false beliefs, particularly if the delusional idea is congruent with the emotional state and provides "meaning". LA may also contribute to hallucinations through a misattribution of (partially) normal sensory phenomena. Interventions such as metacognitive training that aim to "plant the seeds of doubt" decrease positive symptoms by encouraging individuals to seek more information and to attenuate confidence. The effect of antipsychotic medication is explained by its doubt-inducing properties. The model needs to be confirmed by longitudinal designs that allow an examination of causal relationships. Evidence is currently weak for hallucinations. The theory may account for positive symptoms in a subgroup of patients. Future directions are outlined. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. Fusion of local and global detection systems to detect tuberculosis in chest radiographs.

    PubMed

    Hogeweg, Laurens; Mol, Christian; de Jong, Pim A; Dawson, Rodney; Ayles, Helen; van Ginneken, Bram

    2010-01-01

    Automatic detection of tuberculosis (TB) on chest radiographs is a difficult problem because of the diverse presentation of the disease. A combination of detection systems for abnormalities and normal anatomy is used to improve detection performance. A textural abnormality detection system operating at the pixel level is combined with a clavicle detection system to suppress false positive responses. The output of a shape abnormality detection system operating at the image level is combined in a subsequent step to further improve performance by reducing false negatives. Strategies for combining systems based on serial and parallel configurations were evaluated using the minimum, maximum, product, and mean probability combination rules. The performance of TB detection increased, as measured using the area under the ROC curve, from 0.67 for the textural abnormality detection system alone to 0.86 when the three systems were combined. The best result was achieved using the sum and product rule in a parallel combination of outputs.
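
    A minimal sketch of the parallel combination rules named above (minimum, maximum, product and mean), applied to hypothetical per-system probabilities:

      import numpy as np

      def combine(probs, rule):
          """Fuse per-system abnormality probabilities for one image."""
          p = np.asarray(probs, dtype=float)
          return {"min": p.min, "max": p.max, "product": p.prod, "mean": p.mean}[rule]()

      outputs = [0.7, 0.4, 0.9]   # hypothetical texture, clavicle and shape system outputs
      for rule in ("min", "max", "product", "mean"):
          print(rule, round(combine(outputs, rule), 3))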

  14. Quantum Biometrics with Retinal Photon Counting

    NASA Astrophysics Data System (ADS)

    Loulakis, M.; Blatsios, G.; Vrettou, C. S.; Kominis, I. K.

    2017-10-01

    It is known that the eye's scotopic photodetectors, rhodopsin molecules, and their associated phototransduction mechanism leading to light perception, are efficient single-photon counters. We here use the photon-counting principles of human rod vision to propose a secure quantum biometric identification based on the quantum-statistical properties of retinal photon detection. The photon path along the human eye until its detection by rod cells is modeled as a filter having a specific transmission coefficient. Precisely determining its value from the photodetection statistics registered by the conscious observer is a quantum parameter estimation problem that leads to a quantum secure identification method. The probabilities for false-positive and false-negative identification of this biometric technique can readily approach 10⁻¹⁰ and 10⁻⁴, respectively. The security of the biometric method can be further quantified by the physics of quantum measurements. An impostor must be able to perform quantum thermometry and quantum magnetometry with energy resolution better than 10⁻⁹ℏ, in order to foil the device by noninvasively monitoring the biometric activity of a user.

  15. Hydrogen breath test in schoolchildren.

    PubMed Central

    Douwes, A C; Schaap, C; van der Klei-van Moorsel, J M

    1985-01-01

    The frequency of negative hydrogen breath tests due to colonic bacterial flora which are unable to produce hydrogen was determined after oral lactulose challenge in 98 healthy Dutch schoolchildren. There was a negative result in 9.2%. The probability of a false normal lactose breath test (1:77) was calculated from these results together with those from a separate group of children with lactose malabsorption (also determined by hydrogen breath test). A study of siblings and mothers of subjects with a negative breath test did not show familial clustering of this condition. Faecal incubation tests with various sugars showed an increase in breath hydrogen greater than 100 parts per million in those with a positive breath test while subjects with a negative breath test also had a negative faecal incubation test. The frequency of a false negative hydrogen breath test was higher than previously reported, but this does not affect the superiority of this method of testing over the conventional blood glucose determination. PMID:4004310
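
    A back-of-envelope check of the 1:77 figure, under our own simplifying assumption (not stated in the abstract) that the overall false-normal probability is the product of the non-hydrogen-producer rate and the prevalence of lactose malabsorption among those tested:

      p_non_producer = 0.092     # flora unable to produce hydrogen (from the study)
      p_malabsorption = 0.14     # assumed prevalence among tested children
      p_false_normal = p_non_producer * p_malabsorption
      print(f"about 1 in {1 / p_false_normal:.0f}")   # ~1 in 78, close to the reported 1:77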

  16. Mapping of Synaptic-Neuronal Impairment on the Brain Surface through Fluctuation Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musha, Toshimitsu; Kurachi, Takayoshi; Suzuki, Naohoro

    2005-08-25

    The year-by-year increase in the demented population is becoming a serious social problem that urgently needs to be addressed. The most effective way to block this increase is early detection by means of an inexpensive, non-invasive, sensitive, reliable and easy-to-operate diagnostic method. We have developed a method satisfying these requirements by using scalp potential fluctuations. We have collected 21-channel EEG and SPECT data of 25 very mild Alzheimer's disease (AD) patients (MMSE = 26 ± 1.8), moderately severe AD patients (MMSE = 15.3 ± 6.4) and age-matched normal controls. As AD progresses, local synaptic-neuronal activity becomes abnormal, either more unstable or more inactive than in the normal state. Such abnormality is detected in terms of the normalized power variance (NPV) of a scalp potential recorded with a scalp electrode. The z-score is defined by z = ((NPV of a subject) - (mean NPV of normal subjects))/(standard deviation of NPV of normal subjects). Correlation of a measured z-score map with the mean z-score map for AD patients characterizes the likelihood of AD, in terms of which AD is discriminated from normal with 75% true-positive and 25% false-negative probability. By introducing two thresholds, we achieve 90% true-positive and 10% false-negative discrimination.
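
    A minimal sketch of the z-score mapping described above, with synthetic data standing in for the NPV measurements (the AD template here is random, for illustration only):

      import numpy as np

      rng = np.random.default_rng(0)
      npv_controls = rng.normal(1.0, 0.2, size=(50, 21))   # 50 controls x 21 channels
      npv_subject = rng.normal(1.3, 0.2, size=21)          # one test subject

      mu = npv_controls.mean(axis=0)
      sigma = npv_controls.std(axis=0, ddof=1)
      z = (npv_subject - mu) / sigma                       # per-channel z-score map

      # Likelihood of AD is scored by correlating the subject's z-score map
      # with the mean AD z-score map; a random template stands in for it here.
      template = rng.normal(1.5, 0.3, size=21)
      print(f"correlation with AD template: {np.corrcoef(z, template)[0, 1]:.2f}")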

  17. A preliminary investigation into the use of biosensors to screen stomach contents for selected poisons and drugs.

    PubMed

    Redshaw, Natalie; Dickson, Stuart J; Ambrose, Vikki; Horswell, Jacqui

    2007-10-25

    The bioluminescence response of two genetically modified (lux-marked) bacteria to potentially toxic compounds (PTCs) in stomach contents was monitored using an in vitro assay. Cells of Escherichia coli HB101 and Salmonella typhimurium, both carrying the lux light-producing gene on a plasmid (pUDC607), were added to stomach contents containing various concentrations of organic and inorganic compounds. There was some variability in the response of the two biosensors, but both were sensitive to the herbicides glyphosate, 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T), to pentachlorophenol (PCP), and to the inorganic poisons arsenic and mercury at a concentration range likely to be found in stomach contents samples submitted for toxicological analysis. This study demonstrates that biosensor bioassays could be a useful preliminary screening tool in forensic toxicology and that such toxicological screening should include more than one test organism to maximise the number of PTCs detected. The probability of false positive results from samples containing compounds that may interfere with the assay, such as over-the-counter (OTC) drugs and caffeine in tea and coffee, was also investigated. Of the substances tested, only coffee has the potential to cause false positive results.

  18. Tandem mass spectrometry of human tryptic blood peptides calculated by a statistical algorithm and captured by a relational database with exploration by a general statistical analysis system.

    PubMed

    Bowden, Peter; Beavis, Ron; Marshall, John

    2009-11-02

    A goodness of fit test may be used to assign tandem mass spectra of peptides to amino acid sequences and to directly calculate the expected probability of mis-identification. The product of the peptide expectation values directly yields the probability that the parent protein has been mis-identified. A relational database can capture the mass spectral data and the best-fit results, and permit subsequent calculations by a general statistical analysis system. The many files of the HUPO blood protein data correlated by X!TANDEM against the proteins of ENSEMBL were collected into a relational database. A redundant set of 247,077 proteins and peptides was correlated by X!TANDEM, and that was collapsed to a set of 34,956 peptides from 13,379 distinct proteins. About 6,875 distinct proteins were represented by only a single distinct peptide, 2,866 proteins showed 2 distinct peptides, and 3,454 proteins showed at least three distinct peptides by X!TANDEM. More than 99% of the peptides were associated with proteins that had cumulative expectation values, i.e. probability of false positive identification, of one in one hundred or less. The distribution of peptides per protein from X!TANDEM was significantly different from that expected from random assignment of peptides.
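
    The protein-level error estimate described above is simply the product of the peptide-level expectation values; a short sketch with illustrative numbers:

      import math

      peptide_e_values = [1e-3, 5e-2, 2e-4]      # illustrative per-peptide expectation values
      protein_p_false = math.prod(peptide_e_values)
      print(f"P(protein mis-identified) ~ {protein_p_false:.0e}")   # 1e-08 here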

  19. External validation of EPIWIN biodegradation models.

    PubMed

    Posthumus, R; Traas, T P; Peijnenburg, W J G M; Hulzebos, E M

    2005-01-01

    The BIOWIN biodegradation models were evaluated for their suitability for regulatory purposes. BIOWIN includes the linear and non-linear BIODEG and MITI models for estimating the probability of rapid aerobic biodegradation and an expert survey model for primary and ultimate biodegradation estimation. Experimental biodegradation data for 110 newly notified substances were compared with the estimations of the different models. The models were applied separately and in combinations to determine which model(s) showed the best performance. The results of this study were compared with the results of other validation studies and other biodegradation models. The BIOWIN models predict not-readily biodegradable substances with high accuracy, in contrast to readily biodegradable ones. In view of the high environmental concern about persistent chemicals, and given the large number of not-readily biodegradable chemicals compared to readily biodegradable ones, a model is preferred that gives a minimum of false positives without a correspondingly high percentage of false negatives. A combination of the BIOWIN models (BIOWIN2 or BIOWIN6) showed the highest predictive value for not-readily biodegradable substances. However, the highest score for overall predictivity with the lowest percentage of false predictions was achieved by applying BIOWIN3 (pass level 2.75) and BIOWIN6.

  20. Self-focusing quantum states

    NASA Astrophysics Data System (ADS)

    Villanueva, Anthony Allan D.

    2018-02-01

    We discuss a class of solutions of the time-dependent Schrödinger equation such that the position uncertainty temporarily decreases. This self-focusing or contractive behavior is a consequence of the anti-correlation of the position and momentum observables. Since the associated position density satisfies a continuity equation, upon contraction the probability current at a given fixed point may flow in the opposite direction of the group velocity of the wave packet. For definiteness, we consider a free particle incident from the left of the origin, and establish a condition for the initial position-momentum correlation such that a negative probability current at the origin is possible. This implies a decrease in the particle's detection probability in the region x > 0, and we calculate for how long this occurs. Analogous results are obtained for a particle subject to a uniform gravitational force if we consider the particle approaching the turning point. We show that position-momentum anti-correlation may cause a negative probability current at the turning point, leading to a temporary decrease in the particle's detection probability in the classically forbidden region.

  1. Improving computer-aided detection assistance in breast cancer screening by removal of obviously false-positive findings.

    PubMed

    Mordang, Jan-Jurre; Gubern-Mérida, Albert; Bria, Alessandro; Tortorella, Francesco; den Heeten, Gerard; Karssemeijer, Nico

    2017-04-01

    Computer-aided detection (CADe) systems for mammography screening still mark many false positives. This can cause radiologists to lose confidence in CADe, especially when many false positives are obviously not suspicious to them. In this study, we focus on obvious false positives generated by microcalcification detection algorithms. We aim at reducing the number of obvious false-positive findings by adding an additional step in the detection method. In this step, a multiclass machine learning method is implemented in which dedicated classifiers learn to recognize the patterns of obvious false-positive subtypes that occur most frequently. The method is compared to a conventional two-class approach, where all false-positive subtypes are grouped together in one class, and to the baseline CADe system without the new false-positive removal step. The methods are evaluated on an independent dataset containing 1,542 screening examinations of which 80 examinations contain malignant microcalcifications. Analysis showed that the multiclass approach yielded a significantly higher sensitivity compared to the other two methods (P < 0.0002). At one obvious false positive per 100 images, the baseline CADe system detected 61% of the malignant examinations, while the systems with the two-class and multiclass false-positive reduction step detected 73% and 83%, respectively. Our study showed that by adding the proposed method to a CADe system, the number of obvious false positives can decrease significantly (P < 0.0002). © 2017 American Association of Physicists in Medicine.

  2. Laser radar system for obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Bers, Karlheinz; Schulz, Karl R.; Armbruster, Walter

    2005-09-01

    The threat of hostile surveillance and weapon systems requires military aircraft to fly under extreme conditions such as low altitude, high speed, poor visibility and incomplete terrain information. The probability of collision with natural and man-made obstacles during such contour missions is high if detection capability is restricted to conventional vision aids. Forward-looking scanning laser radars, which are built by EADS and are presently being flight-tested and evaluated at German proving grounds, provide a possible solution, having a large field of view, high angular and range resolution, a high pulse repetition rate, and sufficient pulse energy to register returns from objects at distances of military relevance with a high hit-and-detect probability. The development of advanced 3D scene-analysis algorithms has increased the recognition probability and reduced the false alarm rate by using more readily recognizable objects such as terrain, poles, pylons and trees to generate a parametric description of the terrain surface as well as the class, position, orientation, size and shape of all objects in the scene. The sensor system and the implemented algorithms can be used for other applications such as terrain following, autonomous obstacle avoidance, and automatic target recognition. This paper describes different 3D-imaging ladar sensors with a unified system architecture but different components matched to different military applications. Emphasis is placed on an obstacle warning system with a high probability of detection of thin wires, real-time processing of the measured range image data, and obstacle classification and visualization.

  3. Evaluation of Correlation between Pretest Probability for Clostridium difficile Infection and Clostridium difficile Enzyme Immunoassay Results

    PubMed Central

    Reske, Kimberly A.; Hink, Tiffany; Dubberke, Erik R.

    2016-01-01

    ABSTRACT The objective of this study was to evaluate the clinical characteristics and outcomes of hospitalized patients tested for Clostridium difficile and determine the correlation between pretest probability for C. difficile infection (CDI) and assay results. Patients with testing ordered for C. difficile were enrolled and assigned a high, medium, or low pretest probability of CDI based on clinical evaluation, laboratory, and imaging results. Stool was tested for C. difficile by toxin enzyme immunoassay (EIA) and toxigenic culture (TC). Chi-square analyses and the log rank test were utilized. Among the 111 patients enrolled, stool samples from nine were TC positive and four were EIA positive. Sixty-one (55%) patients had clinically significant diarrhea, 19 (17%) patients did not, and clinically significant diarrhea could not be determined for 31 (28%) patients. Seventy-two (65%) patients were assessed as having a low pretest probability of having CDI, 34 (31%) as having a medium probability, and 5 (5%) as having a high probability. None of the patients with low pretest probabilities had a positive EIA, but four were TC positive. None of the seven patients with a positive TC but a negative index EIA developed CDI within 30 days after the index test or died within 90 days after the index toxin EIA date. Pretest probability for CDI should be considered prior to ordering C. difficile testing and must be taken into account when interpreting test results. CDI is a clinical diagnosis supported by laboratory data, and the detection of toxigenic C. difficile in stool does not necessarily confirm the diagnosis of CDI. PMID:27927930

  4. Impact of signal scattering and parametric uncertainties on receiver operating characteristics

    NASA Astrophysics Data System (ADS)

    Wilson, D. Keith; Breton, Daniel J.; Hart, Carl R.; Pettit, Chris L.

    2017-05-01

    The receiver operating characteristic (ROC curve), which is a plot of the probability of detection as a function of the probability of false alarm, plays a key role in the classical analysis of detector performance. However, meaningful characterization of the ROC curve is challenging when practically important complications such as variations in source emissions, environmental impacts on the signal propagation, uncertainties in the sensor response, and multiple sources of interference are considered. In this paper, a relatively simple but realistic model for scattered signals is employed to explore how parametric uncertainties impact the ROC curve. In particular, we show that parametric uncertainties in the mean signal and noise power substantially raise the tails of the distributions; since receiver operation with a very low probability of false alarm and a high probability of detection is normally desired, these tails lead to severely degraded performance. Because full a priori knowledge of such parametric uncertainties is rarely available in practice, analyses must typically be based on a finite sample of environmental states, which only partially characterize the range of parameter variations. We show how this effect can lead to misleading assessments of system performance. For the cases considered, approximately 64 or more statistically independent samples of the uncertain parameters are needed to accurately predict the probabilities of detection and false alarm. A connection is also described between selection of suitable distributions for the uncertain parameters, and Bayesian adaptive methods for inferring the parameters.
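
    A Monte Carlo sketch of the effect described above, using a plain Gaussian detection model (our simplification, not the paper's scattering model): letting the mean signal level and noise power vary from trial to trial fattens the distribution tails, so holding the false-alarm probability at 1e-3 forces a higher threshold and a lower detection probability:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 500_000

      # Scenario A: mean signal and noise power known exactly.
      noise_a = rng.normal(0.0, 1.0, n)
      sig_a = rng.normal(2.0, 1.0, n)

      # Scenario B: mean signal and noise power uncertain (lognormal across trials).
      noise_b = rng.normal(0.0, rng.lognormal(0.0, 0.5, n))
      sig_b = rng.normal(rng.lognormal(np.log(2.0), 0.5, n), 1.0)

      for label, noise, sig in (("known", noise_a, sig_a), ("uncertain", noise_b, sig_b)):
          thresh = np.quantile(noise, 1 - 1e-3)        # set P(false alarm) = 1e-3
          print(f"{label}: Pd = {(sig > thresh).mean():.3f} at Pfa = 1e-3")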

  5. Incorporating sequence information into the scoring function: a hidden Markov model for improved peptide identification.

    PubMed

    Khatun, Jainab; Hamlett, Eric; Giddings, Morgan C

    2008-03-01

    The identification of peptides by tandem mass spectrometry (MS/MS) is a central method of proteomics research, but due to the complexity of MS/MS data and the large databases searched, the accuracy of peptide identification algorithms remains limited. To improve the accuracy of identification we applied a machine-learning approach using a hidden Markov model (HMM) to capture the complex and often subtle links between a peptide sequence and its MS/MS spectrum. Our model, HMM_Score, represents ion types as HMM states and calculates the maximum joint probability for a peptide/spectrum pair using emission probabilities from three factors: the amino acids adjacent to each fragmentation site, the mass dependence of ion types and the intensity dependence of ion types. The Viterbi algorithm is used to calculate the most probable assignment between ion types in a spectrum and a peptide sequence, then a correction factor is added to account for the propensity of the model to favor longer peptides. An expectation value is calculated based on the model score to assess the significance of each peptide/spectrum match. We trained and tested HMM_Score on three data sets generated by two different mass spectrometer types. For a reference data set recently reported in the literature and validated using seven identification algorithms, HMM_Score produced 43% more positive identification results at a 1% false positive rate than the best of two other commonly used algorithms, Mascot and X!Tandem. HMM_Score is a highly accurate platform for peptide identification that works well for a variety of mass spectrometer and biological sample types. The program is freely available on ProteomeCommons via an OpenSource license. See http://bioinfo.unc.edu/downloads/ for the download link.
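
    For readers unfamiliar with the decoding step, a generic Viterbi sketch on a toy two-state model (this is not HMM_Score itself; all matrices are invented):

      import numpy as np

      def viterbi(obs, start, trans, emit):
          """Most probable state path for an observation sequence (log domain)."""
          T, n = len(obs), trans.shape[0]
          logp = np.zeros((T, n))
          back = np.zeros((T, n), dtype=int)
          logp[0] = np.log(start) + np.log(emit[:, obs[0]])
          for t in range(1, T):
              scores = logp[t - 1][:, None] + np.log(trans)   # prev state x next state
              back[t] = scores.argmax(axis=0)
              logp[t] = scores.max(axis=0) + np.log(emit[:, obs[t]])
          path = [int(logp[-1].argmax())]
          for t in range(T - 1, 0, -1):
              path.append(int(back[t, path[-1]]))
          return path[::-1], float(logp[-1].max())

      start = np.array([0.6, 0.4])                  # e.g. two ion-type states
      trans = np.array([[0.7, 0.3], [0.4, 0.6]])
      emit = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
      print(viterbi([0, 1, 2, 2], start, trans, emit))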

  6. Directed Design of Experiments (DOE) for Determining Probability of Detection (POD) Capability of NDE Systems (DOEPOD)

    NASA Technical Reports Server (NTRS)

    Generazio, Ed

    2007-01-01

    This viewgraph presentation reviews some of the issues that specialists in nondestructive evaluation (NDE) face in determining the statistics of the probability of detection. There is discussion of the use of the binomial distribution and the probability of hit. The presentation then reviews the concepts of Directed Design of Experiments for Validating Probability of Detection of Inspection Systems (DOEPOD). Several cases are reviewed and discussed. The concept of false calls is also reviewed.
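
    The binomial reasoning behind such POD demonstrations can be shown in a few lines. A common NDE rule of thumb (not necessarily the DOEPOD procedure itself) is that 29 hits in 29 trials demonstrates 90% POD at 95% confidence, because a flaw type detected with only 90% probability would survive 29 straight hits less than 5% of the time:

      def confidence(n_hits, pod=0.90):
          """Confidence that true POD >= pod after n_hits hits in n_hits trials."""
          return 1.0 - pod ** n_hits

      for n in (10, 20, 29, 45):
          print(n, f"{confidence(n):.3f}")   # 29 trials -> 0.953, the classic 29/29 point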

  7. Statistical approaches to account for false-positive errors in environmental DNA samples.

    PubMed

    Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid

    2016-05-01

    Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies. © 2015 John Wiley & Sons Ltd.
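
    A simulation sketch of the bias the authors describe: even a small per-survey false-positive rate inflates a naive occupancy estimate (all rates below are illustrative):

      import numpy as np

      rng = np.random.default_rng(2)
      n_sites, n_surveys = 500, 5
      psi, p_det, p_fp = 0.3, 0.5, 0.03     # occupancy, detection, false-positive rates

      occupied = rng.random(n_sites) < psi
      detections = np.where(occupied[:, None],
                            rng.random((n_sites, n_surveys)) < p_det,   # true detections
                            rng.random((n_sites, n_surveys)) < p_fp)   # false positives
      naive = detections.any(axis=1).mean()
      # Expected naive value: psi*(1-(1-p_det)**5) + (1-psi)*(1-(1-p_fp)**5) ~ 0.39
      print(f"true psi = {psi}, naive estimate = {naive:.2f}")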

  8. FAA Weather Surveillance Requirements in the Context on NEXRAD.

    DTIC Science & Technology

    1982-11-19

    [Abstract not available; the indexed excerpt contains only fragments of the report's abbreviation list and site tables: NUOC, NEXRAD User's Operations Concept; NWS, National Weather Service; Pd, Probability of Detection; Pfa, Probability of False Alarm; and part of a candidate airport list including Minneapolis-St. Paul Intl., Jacksonville International, Newark International, Bradley International, Salem Regional and Roanoke.]

  9. Easy fix for clinical laboratories for the false-positive defect with the Abbott AxSym total beta-hCG test.

    PubMed

    Cole, Laurence A; Khanlian, Sarah A

    2004-05-01

    False-positive hCG results can lead to erroneous diagnoses and needless chemotherapy and surgery. In the last 2 years, eight publications described cases involving false-positive hCG tests; all eight involved the AxSym test. We investigated the source of this abundance of cases and a simple fix that may be used by clinical laboratories. False-positive hCG was primarily identified by absence of hCG in urine and varying or negative hCG results in alternative tests. Seventeen false-positive serum samples in the AxSym test were evaluated undiluted and at twofold dilution with diluent containing excess goat serum or immunoglobulin. We identified 58 patients with false-positive hCG, 47 of 58 due to the Abbott AxSym total hCGbeta test (81%). Sixteen of 17 of these "false-positive" results (mean 100 mIU/ml) became undetectable when tested again after twofold dilution. A simple twofold dilution with this diluent containing excess goat serum or immunoglobulin completely protected 16 of 17 samples from patients having false-positive results. It is recommended that laboratories using this test use twofold dilution as a minimum to prevent false-positive results.

  10. CT Colonography with Computer-aided Detection: Recognizing the Causes of False-Positive Reader Results

    PubMed Central

    Dachman, Abraham H.; Wroblewski, Kristen; Vannier, Michael W.; Horne, John M.

    2014-01-01

    Computed tomography (CT) colonography is a screening modality used to detect colonic polyps before they progress to colorectal cancer. Computer-aided detection (CAD) is designed to decrease errors of detection by finding and displaying polyp candidates for evaluation by the reader. CT colonography CAD false-positive results are common and have numerous causes. The relative frequency of CAD false-positive results and their effect on reader performance on the basis of a 19-reader, 100-case trial shows that the vast majority of CAD false-positive results were dismissed by readers. Many CAD false-positive results are easily disregarded, including those that result from coarse mucosa, reconstruction, peristalsis, motion, streak artifacts, diverticulum, rectal tubes, and lipomas. CAD false-positive results caused by haustral folds, extracolonic candidates, diminutive lesions (<6 mm), anal papillae, internal hemorrhoids, varices, extrinsic compression, and flexural pseudotumors are almost always recognized and disregarded. The ileocecal valve and tagged stool are common sources of CAD false-positive results associated with reader false-positive results. Nondismissable CAD soft-tissue polyp candidates larger than 6 mm are another common cause of reader false-positive results that may lead to further evaluation with follow-up CT colonography or optical colonoscopy. Strategies for correctly evaluating CAD polyp candidates are important to avoid pitfalls from common sources of CAD false-positive results. ©RSNA, 2014 PMID:25384290

  11. The transition probability and the probability for the left-most particle's position of the q-totally asymmetric zero range process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korhonen, Marko; Lee, Eunghyun

    2014-01-15

    We treat the N-particle zero range process whose jumping rates satisfy a certain condition. This condition is required to use the Bethe ansatz, and the resulting model is the q-boson model by Sasamoto and Wadati [“Exact results for one-dimensional totally asymmetric diffusion models,” J. Phys. A 31, 6057–6071 (1998)] or the q-totally asymmetric zero range process (TAZRP) by Borodin and Corwin [“Macdonald processes,” Probab. Theory Relat. Fields (to be published)]. We find the explicit formula of the transition probability of the q-TAZRP via the Bethe ansatz. By using the transition probability we find the probability distribution of the left-most particle's position at time t. To find the probability for the left-most particle's position we find a new identity corresponding to the identity for the asymmetric simple exclusion process by Tracy and Widom [“Integral formulas for the asymmetric simple exclusion process,” Commun. Math. Phys. 279, 815–844 (2008)]. For the initial state in which all particles occupy a single site, the probability distribution of the left-most particle's position at time t is represented by a contour integral of a determinant.

  12. Inter-satellite links for satellite autonomous integrity monitoring

    NASA Astrophysics Data System (ADS)

    Rodríguez-Pérez, Irma; García-Serrano, Cristina; Catalán Catalán, Carlos; García, Alvaro Mozo; Tavella, Patrizia; Galleani, Lorenzo; Amarillo, Francisco

    2011-01-01

    A new integrity monitoring mechanism, to be implemented on board a GNSS satellite and taking advantage of inter-satellite links, is introduced. It is based on accurate range and Doppler measurements affected neither by atmospheric delays nor by local ground degradation (multipath and interference). By a linear combination of the inter-satellite link observables, appropriate observables for both satellite orbit and clock monitoring are obtained, and the proposed algorithms make it possible to reduce the time-to-alarm and the probability of undetected satellite anomalies. Several test cases have been run to assess the performance of the new orbit and clock monitoring algorithms in a complete scenario (satellite-to-satellite and satellite-to-ground links) and in a satellite-only scenario. The results of this experimentation campaign demonstrate that the orbit monitoring algorithm is able to detect orbital feared events while the position error at the worst user location is still within acceptable limits. For instance, an unplanned manoeuvre in the along-track direction is detected (with a probability of false alarm equal to 5 × 10⁻⁹) when the position error at the worst user location is 18 cm. The experimentation also reveals that the clock monitoring algorithm is able to detect phase jumps, frequency jumps and instability degradation in the clocks, but the latency and performance of detection strongly depend on the noise added by the clock measurement system.

  13. Phaeochromocytoma: diagnostic challenges for biochemical screening and diagnosis.

    PubMed

    Barron, Jeffrey

    2010-08-01

    The aim of this article is to provide knowledge of the origin of catecholamines and their metabolites so that there can be an informed approach to the methods for biochemical screening for a possible phaeochromocytoma. The article includes a review of catecholamine and metadrenaline metabolism, together with the methods used in biochemical screening. In the adrenal medulla and in a phaeochromocytoma, catecholamines continuously leak from chromaffin granules into the cytoplasm and are converted to metadrenalines. For a phaeochromocytoma to become biochemically detectable, normetadrenaline secretion needs to rise fourfold, whereas noradrenaline secretion needs to rise 15-fold. The prevalence of a sporadic phaeochromocytoma is low; therefore false-positive results exceed true-positive results. Assay sensitivity is high because it is important not to miss a possible phaeochromocytoma. The use of urine or plasma fractionated metadrenalines as the first-line test has been recommended due to improved sensitivity. A negative result excludes a phaeochromocytoma. Only after a sporadic phaeochromocytoma has been diagnosed biochemically is it cost-effective to request imaging. Sensitivities and specificities of the assays differ according to pre-test probabilities of the presence of a phaeochromocytoma, with hereditary cases and incidentalomas having a higher pre-test probability than sporadic phaeochromocytoma. In conclusion, in screening for a possible phaeochromocytoma, biochemical investigations should be completed first to exclude or establish the diagnosis. The preferred biochemical screening test is fractionated metadrenalines, including methoxytyramine so as not to miss dopamine-secreting tumours.
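
    The point that false positives exceed true positives at low prevalence follows directly from Bayes' theorem; a sketch with assumed numbers (the sensitivity, specificity and prevalence below are illustrative, not the article's figures):

      prevalence = 0.005     # assumed rate of sporadic phaeochromocytoma among those screened
      sensitivity = 0.98     # assumed for fractionated metadrenalines
      specificity = 0.90

      tp = prevalence * sensitivity
      fp = (1 - prevalence) * (1 - specificity)
      ppv = tp / (tp + fp)
      print(f"TP {tp:.4f} vs FP {fp:.4f}; PPV = {ppv:.2f}")   # FP ~20x TP; PPV ~0.05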

  14. LipidFrag: Improving reliability of in silico fragmentation of lipids and application to the Caenorhabditis elegans lipidome

    PubMed Central

    Neumann, Steffen; Schmitt-Kopplin, Philippe

    2017-01-01

    Lipid identification is a major bottleneck in high-throughput lipidomics studies. However, tools for the analysis of lipid tandem MS spectra are rather limited. While the comparison against spectra in reference libraries is one of the preferred methods, these libraries are far from being complete. In order to improve identification rates, the in silico fragmentation tool MetFrag was combined with Lipid Maps and lipid-class specific classifiers which calculate probabilities for lipid class assignments. The resulting LipidFrag workflow was trained and evaluated on different commercially available lipid standard materials, measured with data dependent UPLC-Q-ToF-MS/MS acquisition. The automatic analysis was compared against manual MS/MS spectra interpretation. With the lipid class specific models, identification of the true positives was improved especially for cases where candidate lipids from different lipid classes had similar MetFrag scores by removing up to 56% of false positive results. This LipidFrag approach was then applied to MS/MS spectra of lipid extracts of the nematode Caenorhabditis elegans. Fragments explained by LipidFrag match known fragmentation pathways, e.g., neutral losses of lipid headgroups and fatty acid side chain fragments. Based on prediction models trained on standard lipid materials, high probabilities for correct annotations were achieved, which makes LipidFrag a good choice for automated lipid data analysis and reliability testing of lipid identifications. PMID:28278196

  15. The development and appraisal of a tool designed to find patients harmed by falsely labelled, falsified (counterfeit) medicines.

    PubMed

    Anđelković, Marija; Björnsson, Einar; De Bono, Virgilio; Dikić, Nenad; Devue, Katleen; Ferlin, Daniel; Hanževački, Miroslav; Jónsdóttir, Freyja; Shakaryan, Mkrtich; Walser, Sabine

    2017-06-20

    Falsely labelled, falsified (counterfeit) medicines (FFCm's) are produced or distributed illegally and can harm patients. Although the occurrence of FFCm's is increasing in Europe, harm is rarely reported. The European Directorate for the Quality of Medicines & Health-Care (EDQM) has therefore coordinated the development and validation of a screening tool. The tool consists of a questionnaire referring to a watch-list of FFCm's identified in Europe, including symptoms of their use and individual risk factors, and a scoring form. To refine the questionnaire and reference method, a pilot-study was performed in 105 self-reported users of watch-list medicines. Subsequently, the tool was validated under "real-life conditions" in 371 patients in 5 ambulatory and in-patient care sites ("sub-studies"). The physicians participating in the study scored the patients and classified their risk of harm as "unlikely" or "probable" (cut-off level: presence of ≥2 of 5 risk factors). They assessed all medical records retrospectively (independent reference method) to validate the risk classification and documented their perception of the tool's value. In 3 ambulatory care sites (180 patients), the tool correctly classified 5 patients as harmed by FFCm's. The positive and negative likelihood ratios (LR+/LR-) and the discrimination power were calculated for two cut-off levels: a) 1 site (50 patients): presence of two risk factors (at 10% estimated health care system contamination with FFCm's): LR + 4.9/LR-0, post-test probability: 35%; b) 2 sites (130 patients): presence of three risk factors (at 5% estimated prevalence of use of non-prescribed medicines (FFCm's) by certain risk groups): LR + 9.7/LR-0, post-test probability: 33%. In 2 in-patient care sites (191 patients), no patient was confirmed as harmed by FFCm's. The physicians perceived the tool as valuable for finding harm, and as an information source regarding risk factors. This "decision aid" is a systematic tool which helps find in medical practice patients harmed by FFCm's. This study supports its value in ambulatory care in regions with health care system contamination and in certain risk groups. The establishment of systematic communication between authorities and the medical community concerning FFCm's, current patterns of use and case reports may sustain positive public health impacts.

  16. Utilization of serology for the diagnosis of suspected Lyme borreliosis in Denmark: survey of patients seen in general practice.

    PubMed

    Dessau, Ram B; Bangsborg, Jette M; Ejlertsen, Tove; Skarphedinsson, Sigurdur; Schønheyder, Henrik C

    2010-11-01

    Serological testing for Lyme borreliosis (LB) is frequently requested by general practitioners for patients with a wide variety of symptoms. A survey was performed in order to characterize test utilization and clinical features of patients investigated for serum antibodies to Borrelia burgdorferi sensu lato. During one calendar year a questionnaire was sent to the general practitioners who had ordered LB serology from patients in three Danish counties (population 1.5 million inhabitants). Testing was done with a commercial ELISA assay with purified flagella antigen from a Danish strain of B. afzelii. A total of 4,664 patients were tested. The IgM and IgG seropositivity rates were 9.2% and 3.3%, respectively. Questionnaires from 2,643 (57%) patients were available for analysis. Erythema migrans (EM) was suspected in 38% of patients, Lyme arthritis/disseminated disease in 23% and early neuroborreliosis in 13%. Age 0-15 years and suspected EM were significant predictors of IgM seropositivity, whereas suspected acrodermatitis was a predictor of IgG seropositivity. LB was suspected in 646 patients with arthritis, but only 2.3% were IgG seropositive. This is comparable to the level of seropositivity in the background population, indicating that Lyme arthritis is a rare entity in Denmark, and the low pretest probability should alert general practitioners to the possibility of false positive LB serology. Significant predictors for treating the patient were a reported tick bite and suspected EM. A detailed description of the utilization of serology for Lyme borreliosis, with rates of seropositivity according to clinical symptoms, is presented. Low rates of seropositivity in certain patient groups indicate a low pretest probability, and there is a notable risk of false positive results. Of all patients tested, 38% were suspected of EM, although this is not a recommended indication due to the low sensitivity of serological testing.

  17. Utilization of serology for the diagnosis of suspected Lyme borreliosis in Denmark: Survey of patients seen in general practice

    PubMed Central

    2010-01-01

    Background Serological testing for Lyme borreliosis (LB) is frequently requested by general practitioners for patients with a wide variety of symptoms. Methods A survey was performed in order to characterize test utilization and clinical features of patients investigated for serum antibodies to Borrelia burgdorferi sensu lato. During one calendar year a questionnaire was sent to the general practitioners who had ordered LB serology from patients in three Danish counties (population 1.5 million inhabitants). Testing was done with a commercial ELISA assay with purified flagella antigen from a Danish strain of B. afzelii. Results A total of 4,664 patients were tested. The IgM and IgG seropositivity rates were 9.2% and 3.3%, respectively. Questionnaires from 2,643 (57%) patients were available for analysis. Erythema migrans (EM) was suspected in 38% of patients, Lyme arthritis/disseminated disease in 23% and early neuroborreliosis in 13%. Age 0-15 years and suspected EM were significant predictors of IgM seropositivity, whereas suspected acrodermatitis was a predictor of IgG seropositivity. LB was suspected in 646 patients with arthritis, but only 2.3% were IgG seropositive. This is comparable to the level of seropositivity in the background population, indicating that Lyme arthritis is a rare entity in Denmark, and the low pretest probability should alert general practitioners to the possibility of false positive LB serology. Significant predictors for treating the patient were a reported tick bite and suspected EM. Conclusions A detailed description of the utilization of serology for Lyme borreliosis, with rates of seropositivity according to clinical symptoms, is presented. Low rates of seropositivity in certain patient groups indicate a low pretest probability, and there is a notable risk of false positive results. Of all patients tested, 38% were suspected of EM, although this is not a recommended indication due to the low sensitivity of serological testing. PMID:21040576

  18. Detection of white matter lesions in cerebral small vessel disease

    NASA Astrophysics Data System (ADS)

    Riad, Medhat M.; Platel, Bram; de Leeuw, Frank-Erik; Karssemeijer, Nico

    2013-02-01

    White matter lesions (WML) are diffuse white matter abnormalities commonly found in older subjects and are important indicators of stroke, multiple sclerosis, dementia and other disorders. We present an automated WML detection method and evaluate it on a dataset of small vessel disease (SVD) patients. In early SVD, small WMLs are expected to be of importance for the prediction of disease progression. Commonly used WML segmentation methods tend to ignore small WMLs and are mostly validated on the basis of total lesion load or a Dice coefficient for all detected WMLs. Therefore, in this paper, we present a method that is designed to detect individual lesions, large or small, and we validate the detection performance of our system with FROC (free-response ROC) analysis. For the automated detection, we use supervised classification making use of multimodal voxel based features from different magnetic resonance imaging (MRI) sequences, including intensities, tissue probabilities, voxel locations and distances, neighborhood textures and others. After preprocessing, including co-registration, brain extraction, bias correction, intensity normalization, and nonlinear registration, ventricle segmentation is performed and features are calculated for each brain voxel. A gentle-boost classifier is trained using these features from 50 manually annotated subjects to give each voxel a probability of being a lesion voxel. We perform ROC analysis to illustrate the benefits of using additional features to the commonly used voxel intensities; significantly increasing the area under the curve (Az) from 0.81 to 0.96 (p<0.05). We perform the FROC analysis by testing our classifier on 50 previously unseen subjects and compare the results with manual annotations performed by two experts. Using the first annotator results as our reference, the second annotator performs at a sensitivity of 0.90 with an average of 41 false positives per subject while our automated method reached the same level of sensitivity at approximately 180 false positives per subject.

  19. Bone allograft banking in South Australia.

    PubMed

    Campbell, D G; Oakeshott, R D

    1995-12-01

    The South Australian Bone Bank has expanded to meet an increased demand for allograft bone. During a 5 year period from 1988 to 1992, 2361 allografts were harvested from 2146 living donors and 30 cadaveric donors. The allografts were screened by contemporary banking techniques, which include a social history, donor serum tests for HIV-1, HIV-2, hepatitis B and C, syphilis serology, graft microbiology and histology. Grafts were irradiated with 25 kGy. The majority of grafts were used for arthroplasty or spinal surgery, and 99 were used for tumour reconstruction. Of the donated grafts, 336 were rejected by the bank. One donor was HIV-positive and two had false positive screens. There were seven donors with positive serology for hepatitis B, eight for hepatitis C and nine for syphilis. Twenty-seven grafts had positive cultures. Bone transplantation is the most frequent non-haematogenous allograft in South Australia and probably nationally. The low incidence of infectious viral disease in the donor population, combined with an aggressive discard policy, has ensured the relative safety of the grafts. The frequency of graft rejection was similar to that of other bone banks, but the incidence of HIV was lower.

  20. A Closer Look at Self-Reported Suicide Attempts: False Positives and False Negatives

    ERIC Educational Resources Information Center

    Ploderl, Martin; Kralovec, Karl; Yazdi, Kurosch; Fartacek, Reinhold

    2011-01-01

    The validity of self-reported suicide attempt information is undermined by false positives (e.g., incidences without intent to die), or by unreported suicide attempts, referred to as false negatives. In a sample of 1,385 Austrian adults, we explored the occurrence of false positives and false negatives with detailed, probing questions. Removing…

  1. A Clinically Meaningful Interpretation of the Prospective Investigation of Pulmonary Embolism Diagnosis (PIOPED) II and III Data.

    PubMed

    Cronin, Paul; Dwamena, Ben A

    2018-05-01

    This study aimed to calculate the multiple-level likelihood ratios (LRs) and posttest probabilities for a positive, indeterminate, or negative test result for multidetector computed tomography pulmonary angiography (MDCTPA) ± computed tomography venography (CTV) and magnetic resonance pulmonary angiography (MRPA) ± magnetic resonance venography (MRV) for each clinical probability level (two-, three-, and four-level) for the nine most commonly used clinical prediction rules (CPRs) (Wells, Geneva, Miniati, and Charlotte). The study design is a review of observational studies with critical review of multiple cohort studies. The settings are acute care, emergency room care, and ambulatory care (inpatients and outpatients). Data were used to estimate pulmonary embolism (PE) pretest probability for each of the most commonly used CPRs at each probability level. Multiple-level LRs (positive, indeterminate, negative test) were generated and used to calculate posttest probabilities for MDCTPA, MDCTPA + CTV, MRPA, and MRPA + MRV from sensitivity and specificity results from Prospective Investigation of Pulmonary Embolism Diagnosis (PIOPED) II and PIOPED III for each clinical probability level for each CPR. Nomograms were also created. The LRs for a positive test result were higher for MRPA compared to MDCTPA without venography (76 vs 20) and with venography (42 vs 18). LRs for a negative test result were lower for MDCTPA compared to MRPA without venography (0.18 vs 0.22) and with venography (0.12 vs 0.15). In the three-level Wells score, the pretest clinical probability of PE for a low, moderate, and high clinical probability score is 5.7, 23, and 49. The posttest probability for an initially low clinical probability PE for a positive, indeterminate, and negative test result, respectively, for MDCTPA is 54, 5 and 1; for MDCTPA + CTV is 52, 2, and 0.7; for MRPA is 82, 6, and 1; and for MRPA + MRV is 72, 3, and 1; for an initially moderate clinical probability PE for MDCTPA is 86, 22, and 5; for MDCTPA + CTV is 85, 10, and 4; for MRPA is 96, 25, and 6; and for MRPA + MRV is 93, 14, and 4; and for an initially high clinical probability of PE for MDCTPA is 95, 47, and 15; for MDCTPA + CTV is 95, 27, and 10; for MRPA is 99, 52, and 17; and for MRPA + MRV is 98, 34, and 13. For a positive test result, LRs were considerably higher for MRPA compared to MDCTPA. However, both a positive MRPA and MDCTPA have LRs >10 and therefore can confirm the presence of PE. Performing venography reduced the LR for a positive and negative test for both MDCTPA and MRPA. The nomograms give posttest probabilities for a positive, indeterminate, or negative test result for MDCTPA and MRPA (with and without venography) for each clinical probability level for each of the CPR. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
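
    The nomogram arithmetic is the odds form of Bayes' theorem: convert the pretest probability to odds, multiply by the likelihood ratio, and convert back. A sketch using figures quoted above (the three-level Wells pretest probabilities and the positive LR of 20 for MDCTPA):

      def posttest_probability(pretest_prob, likelihood_ratio):
          pre_odds = pretest_prob / (1 - pretest_prob)
          post_odds = pre_odds * likelihood_ratio
          return post_odds / (1 + post_odds)

      for pretest in (0.057, 0.23, 0.49):       # low / moderate / high Wells probability
          print(f"{pretest:.1%} -> {posttest_probability(pretest, 20):.0%}")
      # roughly 55%, 86% and 95%, matching the reported positive-MDCTPA values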

  2. Intersubject Differences in False Nonmatch Rates for a Fingerprint-Based Authentication System

    NASA Astrophysics Data System (ADS)

    Breebaart, Jeroen; Akkermans, Ton; Kelkboom, Emile

    2009-12-01

    The intersubject dependencies of false nonmatch rates were investigated for a minutiae-based biometric authentication process using single enrollment and verification measurements. A large number of genuine comparison scores were subjected to statistical inference tests that indicated that the number of false nonmatches depends on the subject and finger under test. This result was also observed if subjects associated with failures to enroll were excluded from the test set. The majority of the population (about 90%) showed a false nonmatch rate that was considerably smaller than the average false nonmatch rate of the complete population. The remaining 10% could be characterized as "goats" due to their relatively high probability of a false nonmatch. The image quality reported by the template extraction module only weakly correlated with the genuine comparison scores. When multiple verification attempts were investigated, only a limited benefit was observed for "goats", since the conditional probability of a false nonmatch given earlier nonsuccessful attempts increased with the number of attempts. These observations suggest that (1) there is a need for improved identification of "goats" during enrollment (e.g., using dedicated signal-driven analysis and classification methods and/or the use of multiple enrollment images) and (2) there should be alternative means for identity verification in the biometric system under test in case of two subsequent false nonmatches.

  3. Interpreting results of cluster surveys in emergency settings: is the LQAS test the best option?

    PubMed

    Bilukha, Oleg O; Blanton, Curtis

    2008-12-09

    Cluster surveys are commonly used in humanitarian emergencies to measure health and nutrition indicators. Deitchler et al. have proposed to use Lot Quality Assurance Sampling (LQAS) hypothesis testing in cluster surveys to classify the prevalence of global acute malnutrition as exceeding or not exceeding the pre-established thresholds. Field practitioners and decision-makers must clearly understand the meaning and implications of using this test in interpreting survey results to make programmatic decisions. We demonstrate that the LQAS test--as proposed by Deitchler et al.--is prone to producing false-positive results and thus is likely to suggest interventions in situations where interventions may not be needed. As an alternative, to provide more useful information for decision-making, we suggest reporting the probability of an indicator's exceeding the threshold as a direct measure of "risk". Such probability can be easily determined in field settings by using a simple spreadsheet calculator. The "risk" of exceeding the threshold can then be considered in the context of other aggravating and protective factors to make informed programmatic decisions.
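
    One simple way to compute the suggested "risk" in the field is a normal approximation to the survey estimate; the spreadsheet calculator mentioned above presumably does something similar, though its exact method is not given here, and the inputs below are illustrative:

      from math import erf, sqrt

      def prob_exceeds(estimate, se, threshold):
          """P(true prevalence > threshold), normal approximation."""
          z = (threshold - estimate) / se
          return 1 - 0.5 * (1 + erf(z / sqrt(2)))   # 1 - Phi(z)

      # GAM estimated at 12% with a design-adjusted SE of 2%, threshold 10%:
      print(f"{prob_exceeds(0.12, 0.02, 0.10):.2f}")   # ~0.84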

  4. Interpreting results of cluster surveys in emergency settings: is the LQAS test the best option?

    PubMed Central

    Bilukha, Oleg O; Blanton, Curtis

    2008-01-01

    Cluster surveys are commonly used in humanitarian emergencies to measure health and nutrition indicators. Deitchler et al. have proposed to use Lot Quality Assurance Sampling (LQAS) hypothesis testing in cluster surveys to classify the prevalence of global acute malnutrition as exceeding or not exceeding the pre-established thresholds. Field practitioners and decision-makers must clearly understand the meaning and implications of using this test in interpreting survey results to make programmatic decisions. We demonstrate that the LQAS test--as proposed by Deitchler et al.--is prone to producing false-positive results and thus is likely to suggest interventions in situations where interventions may not be needed. As an alternative, to provide more useful information for decision-making, we suggest reporting the probability of an indicator's exceeding the threshold as a direct measure of "risk". Such probability can be easily determined in field settings by using a simple spreadsheet calculator. The "risk" of exceeding the threshold can then be considered in the context of other aggravating and protective factors to make informed programmatic decisions. PMID:19068120

  5. Optimal Attack Strategies Subject to Detection Constraints Against Cyber-Physical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yuan; Kar, Soummya; Moura, Jose M. F.

    This paper studies an attacker against a cyber-physical system (CPS) whose goal is to move the state of a CPS to a target state while ensuring that his or her probability of being detected does not exceed a given bound. The attacker's probability of being detected is related to the nonnegative bias induced by his or her attack on the CPS's detection statistic. We formulate a linear quadratic cost function that captures the attacker's control goal and establish constraints on the induced bias that reflect the attacker's detection-avoidance objectives. When the attacker is constrained to be detected at the false-alarm rate of the detector, we show that the optimal attack strategy reduces to a linear feedback of the attacker's state estimate. In the case that the attacker's bias is upper bounded by a positive constant, we provide two algorithms, an optimal algorithm and a suboptimal, less computationally intensive algorithm, to find suitable attack sequences. Lastly, we illustrate our attack strategies in numerical examples based on a remotely controlled helicopter under attack.

  6. Optimal Attack Strategies Subject to Detection Constraints Against Cyber-Physical Systems

    DOE PAGES

    Chen, Yuan; Kar, Soummya; Moura, Jose M. F.

    2017-03-31

    This paper studies an attacker against a cyber-physical system (CPS) whose goal is to move the state of a CPS to a target state while ensuring that his or her probability of being detected does not exceed a given bound. The attacker's probability of being detected is related to the nonnegative bias induced by his or her attack on the CPS's detection statistic. We formulate a linear quadratic cost function that captures the attacker's control goal and establish constraints on the induced bias that reflect the attacker's detection-avoidance objectives. When the attacker is constrained to be detected at the false-alarm rate of the detector, we show that the optimal attack strategy reduces to a linear feedback of the attacker's state estimate. In the case that the attacker's bias is upper bounded by a positive constant, we provide two algorithms, an optimal algorithm and a suboptimal, less computationally intensive algorithm, to find suitable attack sequences. Lastly, we illustrate our attack strategies in numerical examples based on a remotely controlled helicopter under attack.
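
    For context, a common detection statistic in this literature is a chi-squared test on a state estimator's innovation (residual); the generic sketch below, which is not the paper's own formulation, shows how an alarm threshold is tied to a target false-alarm rate, and why an attacker constrained to that rate must keep the bias it induces in the statistic small.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def chi2_threshold(dim, false_alarm_rate):
        """Alarm threshold for a chi-squared residual detector chosen so
        that, under no attack, alarms occur at the given false-alarm rate."""
        return chi2.ppf(1.0 - false_alarm_rate, df=dim)

    def detect(residual, cov, threshold):
        """Alarm if the normalised residual energy exceeds the threshold."""
        stat = float(residual @ np.linalg.solve(cov, residual))
        return stat > threshold

    # Example: 2-D residual, 1% false-alarm rate (illustrative numbers).
    thr = chi2_threshold(2, 0.01)
    print(thr, detect(np.array([0.5, -0.2]), np.eye(2), thr))
    ```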

  7. Evaluation of Correlation between Pretest Probability for Clostridium difficile Infection and Clostridium difficile Enzyme Immunoassay Results.

    PubMed

    Kwon, Jennie H; Reske, Kimberly A; Hink, Tiffany; Burnham, C A; Dubberke, Erik R

    2017-02-01

    The objective of this study was to evaluate the clinical characteristics and outcomes of hospitalized patients tested for Clostridium difficile and determine the correlation between pretest probability for C. difficile infection (CDI) and assay results. Patients with testing ordered for C. difficile were enrolled and assigned a high, medium, or low pretest probability of CDI based on clinical evaluation, laboratory, and imaging results. Stool was tested for C. difficile by toxin enzyme immunoassay (EIA) and toxigenic culture (TC). Chi-square analyses and the log rank test were utilized. Among the 111 patients enrolled, stool samples from nine were TC positive and four were EIA positive. Sixty-one (55%) patients had clinically significant diarrhea, 19 (17%) patients did not, and clinically significant diarrhea could not be determined for 31 (28%) patients. Seventy-two (65%) patients were assessed as having a low pretest probability of having CDI, 34 (31%) as having a medium probability, and 5 (5%) as having a high probability. None of the patients with low pretest probabilities had a positive EIA, but four were TC positive. None of the seven patients with a positive TC but a negative index EIA developed CDI within 30 days after the index test or died within 90 days after the index toxin EIA date. Pretest probability for CDI should be considered prior to ordering C. difficile testing and must be taken into account when interpreting test results. CDI is a clinical diagnosis supported by laboratory data, and the detection of toxigenic C. difficile in stool does not necessarily confirm the diagnosis of CDI. Copyright © 2017 American Society for Microbiology.

  8. [False positive serum des-gamma-carboxy prothrombin after resection of hepatocellular carcinoma].

    PubMed

    Hiramatsu, Kumiko; Tanaka, Yasuhito; Takagi, Kazumi; Iida, Takayasu; Takasaka, Yoshimitsu; Mizokami, Masashi

    2007-04-01

    Measurements of serum concentrations of des-gamma-carboxy prothrombin (PIVKA-II) are widely used for diagnosing hepatocellular carcinoma (HCC). Recently, when we evaluated the correlation of PIVKA-II between two commercially available PIVKA-II immunoassay kits (Lumipulse f vs. Picolumi) before introducing one in our hospital, falsely high values of PIVKA-II were observed in the Lumipulse assay. Four (4%) of 100 serum samples showed falsely high values, and all of them were obtained from patients less than 2 months after curative resection of HCC. When an additional 7 patients with HCC resection were examined, serum samples from 5 of them showed the same trend. To elucidate the nonspecific reaction in the Lumipulse assay, which utilizes an alkaline phosphatase (ALP) enzymatic reaction, inhibition assays with various absorbents such as inactive ALP and IgM antibodies were performed. An excess of inactive ALP reduced the high values of PIVKA-II. Note that anti-bleeding sheets (a fibrinogen-combined drug), which include bovine thrombin, had been attached directly to the liver of all patients with HCC resection in this study. As the sheets are also contaminated with ALP and probably induce IgM antibodies to ALP, these IgM antibodies may cross-react with the anti-PIVKA-II antibodies directly. Taken together, it was suggested that antibodies produced against ALP derived from the anti-bleeding sheets led to falsely high values of PIVKA-II in patients after HCC resection.

  9. Deep belief networks for false alarm rejection in forward-looking ground-penetrating radar

    NASA Astrophysics Data System (ADS)

    Becker, John; Havens, Timothy C.; Pinar, Anthony; Schulz, Timothy J.

    2015-05-01

    Explosive hazards are one of the most deadly threats in modern conflicts. The U.S. Army is interested in a reliable way to detect these hazards at range. A promising way of accomplishing this task is using a forward-looking ground-penetrating radar (FLGPR) system. Recently, the Army has been testing a system that utilizes both L-band and X-band radar arrays on a vehicle mounted platform. Using data from this system, we sought to improve the performance of a constant false-alarm-rate (CFAR) prescreener through the use of a deep belief network (DBN). DBNs have also been shown to perform exceptionally well at generalized anomaly detection. They combine unsupervised pre-training with supervised fine-tuning to generate low-dimensional representations of high-dimensional input data. We seek to take advantage of these two properties by training a DBN on the features of the CFAR prescreener's false alarms (FAs) and then use that DBN to separate FAs from true positives. Our analysis shows that this method improves the detection statistics significantly. By training the DBN on a combination of image features, we were able to significantly increase the probability of detection while maintaining a nominal number of false alarms per square meter. Our research shows that DBNs are a good candidate for improving detection rates in FLGPR systems.
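
    As background for the CFAR prescreener mentioned above, here is a minimal cell-averaging CFAR sketch on a one-dimensional power profile; the guard/training sizes and the false-alarm rate are illustrative only, and the threshold scaling assumes exponentially distributed noise power rather than anything specific to the FLGPR system.

    ```python
    import numpy as np

    def ca_cfar(power, guard=2, train=8, pfa=1e-3):
        """Cell-averaging CFAR: estimate the noise level from training
        cells around each cell under test (skipping guard cells) and
        declare a detection when the cell exceeds a scaled threshold."""
        n = 2 * train                          # number of training cells
        alpha = n * (pfa ** (-1.0 / n) - 1.0)  # scaling for exponential noise
        hits = []
        for i in range(train + guard, len(power) - train - guard):
            left = power[i - train - guard:i - guard]
            right = power[i + guard + 1:i + guard + 1 + train]
            noise = np.mean(np.concatenate([left, right]))
            if power[i] > alpha * noise:
                hits.append(i)
        return hits

    rng = np.random.default_rng(0)
    profile = rng.exponential(1.0, 200)
    profile[100] += 30.0          # an injected target
    print(ca_cfar(profile))       # expected to include index 100
    ```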

  10. "False Positive" Claims of Near-Death Experiences and "False Negative" Denials of Near-Death Experiences

    ERIC Educational Resources Information Center

    Greyson, Bruce

    2005-01-01

    Some persons who claim to have had near-death experiences (NDEs) fail research criteria for having had NDEs ("false positives"); others who deny having had NDEs do meet research criteria for having had NDEs ("false negatives"). The author evaluated false positive claims and false negative denials in an organization that promotes near-death…

  11. Biostatistics Series Module 5: Determining Sample Size

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of the power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
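
    To make the determinants concrete, here is a minimal normal-approximation sketch for a two-group comparison of means; it is a generic textbook formula, not software from the module itself, and the example numbers are invented.

    ```python
    import math
    from scipy.stats import norm

    def two_group_sample_size(delta, sd, alpha=0.05, power=0.80):
        """Per-group n for comparing two means (normal approximation):
        n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta) ** 2."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

    # Example: detect a difference of 5 units with sd 10 at 80% power and
    # two-sided alpha 0.05 -> about 63 subjects per group.
    print(two_group_sample_size(delta=5, sd=10))
    ```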

  12. A High Performance Computing Approach to Tree Cover Delineation in 1-m NAIP Imagery Using a Probabilistic Learning Framework

    NASA Technical Reports Server (NTRS)

    Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Votava, Petr; Roy, Anshuman; Mukhopadhyay, Supratik; Nemani, Ramakrishna

    2015-01-01

    Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets, which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.

  13. Relative Accuracy of Nucleic Acid Amplification Tests and Culture in Detecting Chlamydia in Asymptomatic Men

    PubMed Central

    Cheng, Hong; Macaluso, Maurizio; Vermund, Sten H.; Hook, Edward W.

    2001-01-01

    Published estimates of the sensitivity and specificity of PCR and ligase chain reaction (LCR) for detecting Chlamydia trachomatis are potentially biased because of study design limitations (confirmation of test results was limited to subjects who were PCR or LCR positive but culture negative). Relative measures of test accuracy are less prone to bias in incomplete study designs. We estimated the relative sensitivity (RSN) and relative false-positive rate (RFP) for PCR and LCR versus cell culture among 1,138 asymptomatic men and evaluated the potential bias of RSN and RFP estimates. PCR and LCR testing in urine were compared to culture of urethral specimens. Discordant results (PCR or LCR positive, but culture negative) were confirmed by using a sequence including the other DNA amplification test, direct fluorescent antibody testing, and a DNA amplification test to detect chlamydial major outer membrane protein. The RSN estimates for PCR and LCR were 1.45 (95% confidence interval [CI] = 1.3 to 1.7) and 1.49 (95% CI = 1.3 to 1.7), respectively, indicating that both methods are more sensitive than culture. Very few false-positive results were found, indicating that the specificity levels of PCR, LCR, and culture are high. The potential biases in the RSN and RFP estimates were <5% and <20%, respectively. The estimation of bias is based on the most likely, and probably conservative, parameter settings. If the sensitivity of culture is between 60 and 65%, then the true sensitivity of PCR and LCR is between 90 and 97%. Our findings indicate that PCR and LCR are significantly more sensitive than culture, while the three tests have similar specificities. PMID:11682509
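
    The final inference here is a simple multiplication: a relative sensitivity measured against an imperfect reference scales the reference's absolute sensitivity. A small sketch using the reported RSN values and the assumed culture sensitivities (the output brackets the 90-97% range quoted in the abstract):

    ```python
    def absolute_sensitivity(relative_sensitivity, reference_sensitivity):
        """Absolute sensitivity implied by a relative sensitivity (RSN)
        measured against an imperfect reference test, capped at 1."""
        return min(1.0, relative_sensitivity * reference_sensitivity)

    # Reported RSNs of 1.45 (PCR) and 1.49 (LCR), with culture sensitivity
    # assumed to lie between 60% and 65%:
    for rsn in (1.45, 1.49):
        print(rsn, [round(absolute_sensitivity(rsn, cs), 2)
                    for cs in (0.60, 0.65)])
    ```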

  14. Validity of combined cytology and human papillomavirus (HPV) genotyping with adjuvant DNA-cytometry in routine cervical screening: results from 31031 women from the Bonn-region in West Germany.

    PubMed

    Bollmann, Reinhard; Bankfalvi, Agnes; Griefingholt, Harald; Trosic, Ante; Speich, Norbert; Schmitt, Christoph; Bollmann, Magdolna

    2005-05-01

    Our aim was to improve the accuracy of routine cervical screening by a risk-adapted multimodal protocol with special focus on the possible reduction and prognostic assessment of false positive results. A cohort of 31031 women from the Bonn region in West Germany, median age 36 years, were screened by cytology (conventional or liquid-based), followed by PCR-based HPV detection with genotyping and adjuvant DNA image cytometry, if indicated, in a sequential manner. The true prevalence of high-grade cervical intraepithelial neoplasia and carcinoma (≥CIN2) was 0.32% in the population, as projected from cervical biopsies of 123 women (0.4%), of whom 100 showed ≥CIN2. The sensitivity of the cytology screening program at the PapIIID/HSIL threshold for detecting histologically confirmed ≥CIN2 cases was 81%, with specificity, positive predictive value (PPV) and negative predictive value (NPV) of 99, 20.9 and 99.9%, respectively. Of 38 women receiving the complete screening protocol, all 31 ≥CIN2 cases were correctly detected by cytology alone, 30 by a positive high-risk HPV genotype and 30 by an aneuploid DNA profile. The combination of the three methods resulted in an up to 6.9% increase in PPV for ≥CIN2 at a practically unchanged detection rate, with the additional benefit of being able to predict the probable outcome of CIN1 lesions detected as false positives with any single test. Multimodal cervical screening might permit identification of those women with low-grade squamous intraepithelial lesions likely to progress at an earlier and curable stage of disease and lengthen the screening interval in those with transient minor lesions caused by productive HPV infection.

  15. A High Performance Computing Approach to Tree Cover Delineation in 1-m NAIP Imagery using a Probabilistic Learning Framework

    NASA Astrophysics Data System (ADS)

    Basu, S.; Ganguly, S.; Michaelis, A.; Votava, P.; Roy, A.; Mukhopadhyay, S.; Nemani, R. R.

    2015-12-01

    Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.

  16. Risk of breast cancer after false-positive results in mammographic screening.

    PubMed

    Román, Marta; Castells, Xavier; Hofvind, Solveig; von Euler-Chelpin, My

    2016-06-01

    Women with false-positive results are commonly referred back to routine screening. Questions remain regarding their long-term risk of breast cancer. We assessed the risk of screen-detected breast cancer in women with false-positive results. We conducted a joint analysis using individual-level data from the population-based screening programs in Copenhagen and Funen in Denmark, in Norway, and in Spain. Overall, 150,383 screened women from Denmark (1991-2008), 612,138 from Norway (1996-2010), and 1,172,572 from Spain (1990-2006) were included. Poisson regression was used to estimate the relative risk (RR) of screen-detected cancer for women with false-positive versus negative results. We analyzed information from 1,935,093 women aged 50-69 years who underwent 6,094,515 screening exams. During an average 5.8 years of follow-up, 230,609 (11.9%) women received a false-positive result and 27,849 (1.4%) were diagnosed with screen-detected cancer. The adjusted RR of screen-detected cancer after a false-positive result was 2.01 (95% CI: 1.93-2.09). Women who tested false-positive at the first screen had an RR of 1.86 (95% CI: 1.77-1.96), whereas those who tested false-positive at the third screening had an RR of 2.42 (95% CI: 2.21-2.64). The RR of breast cancer at the screening test after the false-positive result was 3.95 (95% CI: 3.71-4.21), whereas it decreased to 1.25 (95% CI: 1.17-1.34) three or more screens after the false-positive result. Women with false-positive results had a twofold risk of screen-detected breast cancer compared to women with negative tests. The risk remained significantly higher three or more screens after the false-positive result. This increased risk should be considered when discussing stratified screening strategies. © 2016 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.

  17. US women's attitudes to false positive mammography results and detection of ductal carcinoma in situ: cross sectional survey

    PubMed Central

    Schwartz, Lisa M; Woloshin, Steven; Sox, Harold C; Fischhoff, Baruch; Welch, H Gilbert

    2000-01-01

    Objective To determine women's attitudes to and knowledge of both false positive mammography results and the detection of ductal carcinoma in situ after screening mammography. Design Cross sectional survey. Setting United States. Participants 479 women aged 18-97 years who did not report a history of breast cancer. Main outcome measures Attitudes to and knowledge of false positive results and the detection of ductal carcinoma in situ after screening mammography. Results Women were aware that false positive results do occur. Their median estimate of the false positive rate for 10 years of annual screening was 20% (25th percentile estimate, 10%; 75th percentile estimate, 45%). The women were highly tolerant of false positives: 63% thought that 500 or more false positives per life saved was reasonable and 37% would tolerate 10 000 or more. Women who had had a false positive result (n=76) expressed the same high tolerance: 39% would tolerate 10 000 or more false positives. 62% of women did not want to take false positive results into account when deciding about screening. Only 8% of women thought that mammography could harm a woman without breast cancer, and 94% doubted the possibility of non-progressive breast cancers. Few had heard about ductal carcinoma in situ, a cancer that may not progress, but when informed, 60% of women wanted to take into account the possibility of it being detected when deciding about screening. Conclusions Women are aware of false positives and seem to view them as an acceptable consequence of screening mammography. In contrast, most women are unaware that screening can detect cancers that may never progress but feel that such information would be relevant. Education should perhaps focus less on false positives and more on the less familiar outcome of detection of ductal carcinoma in situ. PMID:10856064

  18. A Lyme borreliosis diagnosis probability score - no relation with antibiotic treatment response.

    PubMed

    Briciu, Violeta T; Flonta, Mirela; Leucuţa, Daniel; Cârstina, Dumitru; Ţăţulescu, Doina F; Lupşe, Mihaela

    2017-05-01

    (1) To describe epidemiological and clinical data of patients presenting with suspected Lyme borreliosis (LB); (2) to evaluate a previously published score that classifies patients by the probability of having LB, following up the patients' clinical outcome after antibiotic therapy. Inclusion criteria: patients with clinical manifestations compatible with LB and Borrelia (B.) burgdorferi positive serology, hospitalized in a Romanian hospital between January 2011 and October 2012. Exclusion criteria: erythema migrans (EM) or suspicion of Lyme neuroborreliosis (LNB) with lumbar puncture performed for diagnosis. A questionnaire was completed for each patient regarding associated diseases, tick bites or EM history, and clinical signs/symptoms at admission, at the end of treatment, and 3 months later. Two-tier testing (TTT) used an ELISA followed by a Western blot kit. The patients were classified into groups using the LB probability score and were evaluated by a multidisciplinary team. Antibiotic therapy followed guidelines' recommendations. Sixty-four patients were included, presenting diverse associated comorbidities. Fifty-seven patients presented positive TTT, and seven presented either a positive ELISA or a positive Western blot test. No differences in outcome were found between the groups of patients classified as very probable, probable, and little probable LB. Instead, a better post-treatment outcome was described in patients with positive TTT. The patients investigated for suspected LB present diverse clinical manifestations and comorbidities that complicate differential diagnosis. The LB diagnosis probability score used in our patients did not correlate with the antibiotic treatment response, suggesting that the probability score does not bring any benefit in diagnosis.

  19. Inherent limitations of probabilistic models for protein-DNA binding specificity

    PubMed Central

    Ruan, Shuxiang

    2017-01-01

    The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
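
    To illustrate the probabilistic model being critiqued, here is a toy position weight matrix with independent positions; the matrix values are invented for illustration and do not describe any particular transcription factor.

    ```python
    import numpy as np

    # Toy position weight matrix: rows A, C, G, T; columns are positions.
    # Each column gives the probability of each base at that position, and
    # positions are assumed to contribute independently, as in the standard model.
    pwm = np.array([
        [0.7, 0.1, 0.1, 0.6],   # A
        [0.1, 0.7, 0.1, 0.1],   # C
        [0.1, 0.1, 0.7, 0.2],   # G
        [0.1, 0.1, 0.1, 0.1],   # T
    ])
    base_index = {"A": 0, "C": 1, "G": 2, "T": 3}

    def site_probability(site):
        """Probability of a site under the PWM: product over positions."""
        return float(np.prod([pwm[base_index[b], i]
                              for i, b in enumerate(site)]))

    print(site_probability("ACGA"))   # highest-probability site: 0.7**3 * 0.6
    ```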

  20. Launch Collision Probability

    NASA Technical Reports Server (NTRS)

    Bollenbacher, Gary; Guptill, James D.

    1999-01-01

    This report analyzes the probability of a launch vehicle colliding with one of the nearly 10,000 tracked objects orbiting the Earth, given that an object on a near-collision course with the launch vehicle has been identified. Knowledge of the probability of collision throughout the launch window can be used to avoid launching at times when the probability of collision is unacceptably high. The analysis in this report assumes that the positions of the orbiting objects and the launch vehicle can be predicted as a function of time and therefore that any tracked object which comes close to the launch vehicle can be identified. The analysis further assumes that the position uncertainty of the launch vehicle and the approaching space object can be described with position covariance matrices. With these and some additional simplifying assumptions, a closed-form solution is developed using two approaches. The solution shows that the probability of collision is a function of position uncertainties, the size of the two potentially colliding objects, and the nominal separation distance at the point of closest approach. The impact of the simplifying assumptions on the accuracy of the final result is assessed and the application of the results to the Cassini mission, launched in October 1997, is described. Other factors that affect the probability of collision are also discussed. Finally, the report offers alternative approaches that can be used to evaluate the probability of collision.
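
    As a rough numerical counterpart to the closed-form analysis described here (a generic sketch under the same kind of assumptions, not the report's own solution), the collision probability can be approximated by integrating a bivariate normal position density over the combined hard-body circle in the encounter plane; all numbers below are hypothetical.

    ```python
    import numpy as np

    def collision_probability(miss, sigma_x, sigma_y, radius, n=801):
        """Integrate a bivariate normal density for the relative position
        (mean at the nominal miss distance, along x) over a circle of the
        combined object radius centred at the origin."""
        xs = np.linspace(-radius, radius, n)
        ys = np.linspace(-radius, radius, n)
        X, Y = np.meshgrid(xs, ys)
        inside = X**2 + Y**2 <= radius**2
        pdf = (np.exp(-0.5 * (((X - miss) / sigma_x) ** 2
                              + (Y / sigma_y) ** 2))
               / (2 * np.pi * sigma_x * sigma_y))
        cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
        return float(np.sum(pdf[inside]) * cell)

    # Example: 200 m nominal miss, 100 m uncertainties, 10 m combined radius.
    print(collision_probability(200.0, 100.0, 100.0, 10.0))
    ```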

  1. A pdf-Free Change Detection Test Based on Density Difference Estimation.

    PubMed

    Bu, Li; Alippi, Cesare; Zhao, Dongbin

    2018-02-01

    The ability to detect online changes in stationarity or time variance in a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability density function-free change detection test, which is based on the least squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured by adopting a reservoir sampling mechanism. Thresholds requested to detect a change are automatically derived once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness in detection of the proposed method both in terms of detection promptness and accuracy.
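
    The reservoir sampling mechanism mentioned above can be summarized in a few lines; this is the classic Algorithm R, shown as a generic sketch rather than the paper's exact configuration procedure.

    ```python
    import random

    def reservoir_sample(stream, k, rng=random):
        """Keep a uniform random sample of k items from a stream of
        unknown length, in one pass: item i replaces a random reservoir
        slot with probability k / (i + 1)."""
        reservoir = []
        for i, x in enumerate(stream):
            if i < k:
                reservoir.append(x)
            else:
                j = rng.randint(0, i)
                if j < k:
                    reservoir[j] = x
        return reservoir

    print(reservoir_sample(range(10_000), 5))
    ```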

  2. Multistate and multihypothesis discrimination with open quantum systems

    NASA Astrophysics Data System (ADS)

    Kiilerich, Alexander Holm; Mølmer, Klaus

    2018-05-01

    We show how an upper bound for the ability to discriminate any number N of candidates for the Hamiltonian governing the evolution of an open quantum system may be calculated by numerically efficient means. Our method applies an effective master-equation analysis to evaluate the pairwise overlaps between candidate full states of the system and its environment pertaining to the Hamiltonians. These overlaps are then used to construct an N -dimensional representation of the states. The optimal positive-operator valued measure (POVM) and the corresponding probability of assigning a false hypothesis may subsequently be evaluated by phrasing optimal discrimination of multiple nonorthogonal quantum states as a semidefinite programming problem. We provide three realistic examples of multihypothesis testing with open quantum systems.

  3. Cumulative detection probabilities and range accuracy of a pulsed Geiger-mode avalanche photodiode laser ranging system

    NASA Astrophysics Data System (ADS)

    Luo, Hanjun; Ouyang, Zhengbiao; Liu, Qiang; Chen, Zhiliang; Lu, Hualan

    2017-10-01

    Cumulative pulse detection with an appropriate number of accumulated pulses and an appropriate threshold can improve the detection performance of a pulsed laser ranging system with a GM-APD. In this paper, based on Poisson statistics and the multi-pulse cumulative process, the cumulative detection probabilities and the factors that influence them are investigated. With the normalized probability distribution of each time bin, a theoretical model of the range accuracy and precision is established, and the factors limiting the range accuracy and precision are discussed. The results show that cumulative pulse detection can produce a higher target detection probability and a lower false alarm probability. However, for a heavy noise level and extremely weak echo intensity, the false alarm suppression performance of cumulative pulse detection deteriorates quickly. The range accuracy and precision are another important measure of detection performance; the echo intensity and pulse width are their main influence factors, and higher range accuracy and precision are acquired with stronger echo intensity and narrower echo pulse width. For a 5-ns echo pulse width, when the echo intensity is larger than 10, a range accuracy and precision better than 7.5 cm can be achieved.
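
    A minimal model of the cumulative detection idea under the Poisson statistics the paper assumes: each pulse fires the GM-APD with probability 1 - exp(-(n_s + n_b)) primary electrons in the range gate, and a detection is declared when at least k of N gate hits accumulate. The photoelectron numbers and threshold below are illustrative only, not the paper's parameters.

    ```python
    from math import comb, exp

    def per_pulse_detection(n_signal, n_noise):
        """Probability the GM-APD fires in the range gate on one pulse:
        P = 1 - exp(-(n_signal + n_noise)) under Poisson statistics."""
        return 1.0 - exp(-(n_signal + n_noise))

    def cumulative_probability(p, n_pulses, k_threshold):
        """Probability of at least k firings in n pulses (binomial)."""
        return sum(comb(n_pulses, k) * p**k * (1 - p)**(n_pulses - k)
                   for k in range(k_threshold, n_pulses + 1))

    # Example: 10 pulses, threshold 5; detection vs false alarm.
    pd = cumulative_probability(per_pulse_detection(3.0, 0.1), 10, 5)
    pfa = cumulative_probability(per_pulse_detection(0.0, 0.1), 10, 5)
    print(round(pd, 4), pfa)
    ```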

  4. Characterisation of false-positive observations in botanical surveys

    PubMed Central

    2017-01-01

    Errors in botanical surveying are a common problem. The presence of a species is easily overlooked, leading to false absences, while misidentifications and other mistakes lead to false-positive observations. While it is common knowledge that these errors occur, there are few data that can be used to quantify and describe them. Here we characterise false-positive errors for a controlled set of surveys conducted as part of a field identification test of botanical skill. Surveys were conducted at sites with a verified list of vascular plant species. The candidates were asked to list all the species they could identify in a defined botanically rich area. They were told beforehand that their final score would be the sum of the correct species they listed, but that false-positive errors would count against their overall grade. The number of errors varied considerably between people; some people create a high proportion of false-positive errors, and these are scattered across all skill levels. Therefore, a person's ability to correctly identify a large number of species is not a safeguard against the generation of false-positive errors. There was no phylogenetic pattern to falsely observed species; however, rare species are more likely to be false positives, as are species from species-rich genera. Raising the threshold for the acceptance of an observation reduced false-positive observations dramatically, but at the expense of more false-negative errors. False-positive errors are more common in field surveying of plants than many people may appreciate. Greater stringency is required before accepting species as present at a site, particularly for rare species. Combining multiple surveys resolves the problem, but requires a considerable increase in effort to achieve the same sensitivity as a single survey. Therefore, other methods should be used to raise the threshold for the acceptance of a species. For example, digital data input systems that can verify, feedback and inform the user are likely to reduce false-positive errors significantly. PMID:28533972

  5. 14 CFR 417.224 - Probability of failure analysis.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...

  6. 14 CFR 417.224 - Probability of failure analysis.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...

  7. 14 CFR 417.224 - Probability of failure analysis.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...

  8. 14 CFR 417.224 - Probability of failure analysis.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...

  9. 14 CFR 417.224 - Probability of failure analysis.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...

  10. 49 CFR 190.205 - Warning letters.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 3 2012-10-01 2012-10-01 false Warning letters. 190.205 Section 190.205... PROCEDURES Enforcement § 190.205 Warning letters. Upon determining that a probable violation of 49 U.S.C..., OPS, may issue a Warning Letter notifying the owner or operator of the probable violation and advising...

  11. 49 CFR 190.205 - Warning letters.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Warning letters. 190.205 Section 190.205... PROCEDURES Enforcement § 190.205 Warning letters. Upon determining that a probable violation of 49 U.S.C..., OPS, may issue a Warning Letter notifying the owner or operator of the probable violation and advising...

  12. 49 CFR 190.205 - Warning letters.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 3 2011-10-01 2011-10-01 false Warning letters. 190.205 Section 190.205... PROCEDURES Enforcement § 190.205 Warning letters. Upon determining that a probable violation of 49 U.S.C..., OPS, may issue a Warning Letter notifying the owner or operator of the probable violation and advising...

  13. Using electronic data to predict the probability of true bacteremia from positive blood cultures.

    PubMed

    Wang, S J; Kuperman, G J; Ohno-Machado, L; Onderdonk, A; Sandige, H; Bates, D W

    2000-01-01

    As part of a project to help physicians make more appropriate treatment decisions, we implemented a clinical prediction rule that computes the probability of true bacteremia for positive blood cultures and displays this information when culture results are viewed online. Prior to implementing the rule, we performed a revalidation study to verify the accuracy of the previously published logistic regression model. We randomly selected 114 cases of positive blood cultures from a recent one-year period and performed a paper chart review with the help of infectious disease experts to determine whether the cultures were true positives or contaminants. Based on the results of this revalidation study, we updated the probabilities reported by the model and made additional enhancements to improve the accuracy of the rule. Next, we implemented the rule in our hospital's laboratory computer system so that the probability information was displayed with all positive blood culture results. During a 6-month period, we displayed the prediction rule information for approximately half of the 2184 positive blood cultures at our hospital, selected at random. During the study, we surveyed 54 housestaff to obtain their opinions about the usefulness of this intervention. Fifty percent (27/54) indicated that the information had influenced their belief about the probability of bacteremia in their patients, and in 28% (15/54) of cases it changed their treatment decision. Almost all (98%; 53/54) indicated that they wanted to continue receiving this information. We conclude that the probability information provided by this clinical prediction rule is considered useful by physicians when making treatment decisions.
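
    A prediction rule of this kind is typically a logistic regression. The sketch below shows the general form only; the predictors and coefficients are invented for illustration, since the published model's actual variables and weights are not reproduced here.

    ```python
    from math import exp

    # Hypothetical coefficients for illustration only.
    INTERCEPT = -2.0
    WEIGHTS = {"fever": 0.9,
               "organism_is_common_contaminant": -1.5,
               "multiple_positive_bottles": 1.8}

    def true_bacteremia_probability(features):
        """Logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
        z = INTERCEPT + sum(WEIGHTS[k] * v for k, v in features.items())
        return 1.0 / (1.0 + exp(-z))

    print(true_bacteremia_probability(
        {"fever": 1, "organism_is_common_contaminant": 0,
         "multiple_positive_bottles": 1}))
    ```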

  14. A Case–control and a family-based association study revealing an association between CYP2E1 polymorphisms and nasopharyngeal carcinoma risk in Cantonese

    PubMed Central

    Jia, Wei-Hua; Pan, Qing-Hua; Qin, Hai-De; Xu, Ya-Fei; Shen, Guo-Ping; Chen, Lina; Chen, Li-Zhen; Feng, Qi-Sheng; Hong, Ming-Huang; Zeng, Yi-Xin; Shugart, Yin Yao

    2009-01-01

    Nasopharyngeal carcinoma (NPC) is rare in most parts of the world but is more prevalent in Southern China, especially in Guangdong. Cytochrome P450 2E1 (CYP2E1) has been recognized as one of the critically important enzymes involved in oxidizing carcinogens and is probably associated with NPC carcinogenesis. To systematically investigate the association between genetic variants in CYP2E1 and NPC risk in Cantonese, two independent studies, a family-based association study and a case–control study, were conducted using the haplotype-tagging single-nucleotide polymorphism approach. A total of 2499 individuals from 546 nuclear families were initially genotyped for the family-based association study. Single-nucleotide polymorphisms (SNPs) rs9418990, rs915908, rs8192780, rs1536826, rs3827688 and one haplotype h2 (CGTGTTAA) were revealed to be significantly associated with the NPC phenotype (P = 0.045–0.003 and P = 0.003, respectively). To follow up the initial study, a case–control study including 755 cases and 755 controls was conducted. Similar results were observed in the case–control study in individuals <46 years of age with a history of cigarette smoking, with odds ratios (ORs) of specific genotypes ranging from 1.88 to 2.99 for SNPs rs9418990, rs3813865, rs915906, rs2249695, rs8192780, rs1536826 and rs3827688, and with ORs for haplotype h2 of 1.65 (P = 0.026) and for h5 (CCCGTTAA) of 2.58 (P = 0.007). The values of the false-positive report probability were <0.015 for six SNPs, suggesting that the reported associations are unlikely to be false. This study provides robust evidence for associations between genetic variants of CYP2E1 and NPC risk. PMID:19805575
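
    The false-positive report probability (FPRP) used here combines the observed significance level, the study power, and a prior probability that the association is real. A minimal sketch of the standard formula (commonly attributed to Wacholder and colleagues) follows; the example numbers are hypothetical, not the study's.

    ```python
    def false_positive_report_probability(alpha, power, prior):
        """FPRP: probability that a statistically significant finding is a
        false positive, given the significance level, the power to detect
        the effect, and the prior probability of a true association."""
        return (alpha * (1 - prior)) / (alpha * (1 - prior) + power * prior)

    # Example: alpha 0.05, 80% power, 10% prior probability of a true effect.
    print(round(false_positive_report_probability(0.05, 0.80, 0.10), 3))
    ```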

  15. Kepler Certified False Positive Table

    NASA Technical Reports Server (NTRS)

    Bryson, Stephen T.; Batalha, Natalie Marie; Colon, Knicole Dawn; Coughlin, Jeffrey Langer; Haas, Michael R.; Henze, Chris; Huber, Daniel; Morton, Tim; Rowe, Jason Frank; Mullally, Susan Elizabeth; et al.

    2017-01-01

    This document describes the Kepler Certified False Positive table hosted at the Exoplanet Archive, herein referred to as the CFP table. This table is the result of detailed examination by the Kepler False Positive Working Group (FPWG) of declared false positives in the Kepler Object of Interest (KOI) tables (see, for example, Batalha et al. (2012); Burke et al. (2014); Rowe et al. (2015); Mullally et al. (2015); Coughlin et al. (2015b)) at the Exoplanet Archive. A KOI is considered a false positive if it is not due to a planet orbiting the KOI's target star. The CFP table contains all KOIs in the Exoplanet Archive cumulative KOI table. The purpose of the CFP table is to provide a list of certified false positive KOIs. A KOI is certified as a false positive when, in the judgement of the FPWG, there is no plausible planetary interpretation of the observational evidence, which we summarize by saying that the evidence for a false positive is compelling. This certification process involves detailed examination using all available data for each KOI, establishing a high-reliability ground truth set. The CFP table can be used to estimate the reliability of, for example, the KOI tables, which are created using only Kepler photometric data, so the disposition of individual KOIs may differ in the KOI and CFP tables. Follow-up observers may find the CFP table useful to avoid observing false positives.

  16. A Computer-Aided Diagnosis System for Breast Cancer Combining Mammography and Proteomics

    DTIC Science & Technology

    2007-05-01

    findings in both Data sets C and M. The likelihood ratio is the probability of the features under the malignant case divided by the probability of...likelihood ratio value as a classification decision variable, the probabilities of detection and false alarm are calculated as follows: Pdfusion...lowered the fused classifier's performance to near chance levels. A genetic algorithm searched over the likelihood-ratio threshold values for each

  17. Detecting false positives in multielement designs: implications for brief assessments.

    PubMed

    Bartlett, Sara M; Rapp, John T; Henrickson, Marissa L

    2011-11-01

    The authors assessed the extent to which multielement designs produced false positives using continuous duration recording (CDR) and interval recording with 10-s and 1-min interval sizes. Specifically, they created 6,000 graphs with multielement designs that varied in the number of data paths, and the number of data points per data path, using a random number generator. In Experiment 1, the authors visually analyzed the graphs for the occurrence of false positives. Results indicated that graphs depicting only two sessions for each condition (e.g., a control condition plotted with multiple test conditions) produced the highest percentage of false positives for CDR and interval recording with 10-s and 1-min intervals. Conversely, graphs with four or five sessions for each condition produced the lowest percentage of false positives for each method. In Experiment 2, they applied two new rules, which were intended to decrease false positives, to each graph that depicted a false positive in Experiment 1. Results showed that application of new rules decreased false positives to less than 5% for all of the graphs except for those with two data paths and two data points per data path. Implications for brief assessments are discussed.

  18. Response time as a discriminator between true- and false-positive responses in suprathreshold perimetry.

    PubMed

    Artes, Paul H; McLeod, David; Henson, David B

    2002-01-01

    To report on differences between the latency distributions of responses to stimuli and to false-positive catch trials in suprathreshold perimetry. To describe an algorithm for defining response time windows and to report on its performance in discriminating between true- and false-positive responses on the basis of response time (RT). A sample of 435 largely inexperienced patients underwent suprathreshold visual field examination on a perimeter that was modified to record RTs. Data were analyzed from 60,500 responses to suprathreshold stimuli and from 523 false-positive responses to catch trials. False-positive responses had much more variable latencies than responses to suprathreshold stimuli. An algorithm defining RT windows on the basis of z-transformed individual latency samples correctly identified more than 70% of false-positive responses to catch trials, whereas fewer than 3% of responses to suprathreshold stimuli were classified as false-positive responses. Latency analysis can be used to detect a substantial proportion of false-positive responses in suprathreshold perimetry. Rejection of such responses may increase the reliability of visual field screening by reducing variability and bias in a small but clinically important proportion of patients.

  19. Detecting and avoiding likely false-positive findings - a practical guide.

    PubMed

    Forstmeier, Wolfgang; Wagenmakers, Eric-Jan; Parker, Timothy H

    2017-11-01

    Recently there has been a growing concern that many published research findings do not hold up in attempts to replicate them. We argue that this problem may originate from a culture of 'you can publish if you found a significant effect'. This culture creates a systematic bias against the null hypothesis which renders meta-analyses questionable and may even lead to a situation where hypotheses become difficult to falsify. In order to pinpoint the sources of error and possible solutions, we review current scientific practices with regard to their effect on the probability of drawing a false-positive conclusion. We explain why the proportion of published false-positive findings is expected to increase with (i) decreasing sample size, (ii) increasing pursuit of novelty, (iii) various forms of multiple testing and researcher flexibility, and (iv) incorrect P-values, especially due to unaccounted pseudoreplication, i.e. the non-independence of data points (clustered data). We provide examples showing how statistical pitfalls and psychological traps lead to conclusions that are biased and unreliable, and we show how these mistakes can be avoided. Ultimately, we hope to contribute to a culture of 'you can publish if your study is rigorous'. To this end, we highlight promising strategies towards making science more objective. Specifically, we enthusiastically encourage scientists to preregister their studies (including a priori hypotheses and complete analysis plans), to blind observers to treatment groups during data collection and analysis, and unconditionally to report all results. Also, we advocate reallocating some efforts away from seeking novelty and discovery and towards replicating important research findings of one's own and of others for the benefit of the scientific community as a whole. We believe these efforts will be aided by a shift in evaluation criteria away from the current system which values metrics of 'impact' almost exclusively and towards a system which explicitly values indices of scientific rigour. © 2016 The Authors. Biological Reviews published by John Wiley & Sons Ltd on behalf of Cambridge Philosophical Society.

  20. Prescription drugs associated with false-positive results when using faecal immunochemical tests for colorectal cancer screening.

    PubMed

    Ibáñez-Sanz, Gemma; Garcia, Montse; Rodríguez-Moranta, Francisco; Binefa, Gemma; Gómez-Matas, Javier; Domènech, Xènia; Vidal, Carmen; Soriano, Antonio; Moreno, Víctor

    2016-10-01

    The most common side effect in population screening programmes is a false-positive result, which leads to unnecessary risks and costs. To identify factors associated with false-positive results in a colorectal cancer screening programme with the faecal immunochemical test (FIT). Cross-sectional study of 472 participants with a positive FIT who underwent colonoscopy for confirmation of diagnosis between 2013 and 2014. A false-positive result was defined as having a positive FIT (≥20 μg haemoglobin per gram of faeces) and a follow-up colonoscopy without intermediate/high-risk lesions or cancer. Women showed a two-fold increased likelihood of a false-positive result compared with men (adjusted OR, 2.3; 95%CI, 1.5-3.4), but no female-specific factor was identified. The other variables associated with a false-positive result were successive screening (adjusted OR, 1.5; 95%CI, 1.0-2.2), anal disorders (adjusted OR, 3.1; 95%CI, 2.1-4.5) and the use of proton pump inhibitors (adjusted OR, 1.8; 95%CI, 1.1-2.9). Successive screening and proton pump inhibitor use were associated with false-positive results in men. None of the other drugs were related to a false-positive FIT. Concurrent use of proton pump inhibitors at the time of FIT might increase the likelihood of a false-positive result. Further investigation is needed to determine whether discontinuing them could decrease the false-positive rate. Copyright © 2016 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  1. Bayesian analysis of two diagnostic methods for paediatric ringworm infections in a teaching hospital.

    PubMed

    Rath, S; Panda, M; Sahu, M C; Padhy, R N

    2015-09-01

    Two conventional methods for the diagnosis of tinea capitis (paediatric ringworm), the microscopic and culture tests, were evaluated quantitatively with Bayes' rule. This analysis helps to quantify the pervasive errors in each diagnostic method, particularly the microscopic method, because long-term treatment with a particular antifungal chemotherapy is involved in eradicating the infection. Secondly, the analysis of the clinical data helps to quantify the fallibility of the microscopic test method when the culture test method is taken as the gold standard. Test results of 51 paediatric patients fell into 4 categories: 21 samples were true positives (both tests positive) and 13 were true negatives; the remaining samples comprised 14 false positives (microscopic test positive with culture test negative) and 3 false negatives (microscopic test negative with culture test positive). The prevalence of tinea infection was 47.01% in the population of 51 children. The microscopic test was 87.5% efficient at arriving at a positive result when the culture test was positive, and 76.4% efficient at arriving at a negative result when the culture test was negative. However, the post-test probability indicates that a sample with both microscopic and culture tests would correctly distinguish a sick from a healthy child with a chance of 71.5%. Since the sensitivity of the analysis is 87.5%, microscopic test positivity is easier to detect in the presence of infection. In conclusion, Trichophyton rubrum was the most prevalent species; the sensitivity and specificity of treating the infection by antifungal therapy before confirmation by the culture method remain 0.8751 and 0.7642, respectively. A definitive diagnosis of fungal infection could be achieved by modern molecular methods (matrix-assisted laser desorption ionisation-time of flight mass spectrometry, fluorescence in situ hybridization, enzyme-linked immunosorbent assay [ELISA], restriction fragment length polymorphism, or DNA/RNA probes of known fungal taxa) in advanced laboratories. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
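
    A minimal sketch of the Bayes' rule computation underlying this analysis, using the reported sensitivity, specificity, and prevalence. Note that it yields the single-test positive predictive value, which need not equal the 71.5% figure quoted for the combination of both tests.

    ```python
    def positive_predictive_value(sensitivity, specificity, prevalence):
        """Post-test probability of infection given a positive microscopic
        test, by Bayes' rule with the culture test as gold standard."""
        tp = sensitivity * prevalence
        fp = (1 - specificity) * (1 - prevalence)
        return tp / (tp + fp)

    # Values reported in the abstract: Se 0.8751, Sp 0.7642, prevalence 47.01%.
    print(round(positive_predictive_value(0.8751, 0.7642, 0.4701), 3))
    ```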

  2. False negative rates in Drosophila cell-based RNAi screens: a case study

    PubMed Central

    2011-01-01

    Background High-throughput screening using RNAi is a powerful gene discovery method but is often complicated by false positive and false negative results. Whereas false positive results associated with RNAi reagents have been a matter of extensive study, the issue of false negatives has received less attention. Results We performed a meta-analysis of several genome-wide, cell-based Drosophila RNAi screens, together with a more focused RNAi screen, and conclude that the rate of false negative results is at least 8%. Further, we demonstrate how knowledge of the cell transcriptome can be used to resolve ambiguous results and how the number of false negative results can be reduced by using multiple, independently-tested RNAi reagents per gene. Conclusions RNAi reagents that target the same gene do not always yield consistent results due to false positives and weak or ineffective reagents. False positive results can be partially minimized by filtering with transcriptome data. RNAi libraries with multiple reagents per gene also reduce false positive and false negative outcomes when inconsistent results are disambiguated carefully. PMID:21251254
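
    One implication of an 8% per-reagent false negative rate: if reagent failures were independent (an idealization, since reagents targeting the same gene may share failure modes), screening with multiple reagents per gene shrinks the chance of missing a true hit geometrically. A toy sketch:

    ```python
    def miss_probability(per_reagent_fn_rate, n_reagents):
        """Chance that every one of n reagents misses a true hit,
        assuming independent failures (an idealization)."""
        return per_reagent_fn_rate ** n_reagents

    for k in (1, 2, 3):
        print(k, round(miss_probability(0.08, k), 4))
    ```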

  3. Faecal coliform bacteria in Febros river (northwest Portugal): temporal variation, correlation with water parameters, and species identification.

    PubMed

    Cabral, João Paulo; Marques, Cristina

    2006-07-01

    Febros river water was sampled weekly, during 35 successive weeks, and analyzed for microbiological (total coliforms, faecal coliforms, faecal streptococci and enterococci) and chemical-physical (ammonia and temperature) parameters. All microbiological parameters were highly correlated with each other and with ammonia, suggesting that the simultaneous determination of all variables currently in use in the evaluation of the microbiological quality of waters is probably redundant, and could be simplified, and that ammonia should be tested as a sentinel parameter of the microbiological pollution load of Febros river. From the strains isolated from positive tubes of the faecal coliforms test (multiple tube fermentation technique) and retested in this assay, Escherichia coli, Klebsiella oxytoca and Klebsiella pneumoniae subsp. pneumoniae strains were positive, indicating that the faecal coliforms test is not totally specific for Escherichia coli, and can detect other bacteria. Considering that these Klebsiella spp. are not necessarily of faecal origin, it was concluded that the faecal coliforms test can overestimate true faecal pollution. From the strains isolated from positive tubes of the faecal coliforms procedure, only Escherichia coli strains were clearly positive in the beta-D-glucuronidase test. All other species were negative or very weakly positive, suggesting that the assay of the beta-D-glucuronidase activity is less prone to false positives than the faecal coliforms test in the quantification of Escherichia coli in environmental waters.

  4. Evaluation of trace analyte identification in complex matrices by low-resolution gas chromatography--Mass spectrometry through signal simulation.

    PubMed

    Bettencourt da Silva, Ricardo J N

    2016-04-01

    The identification of trace levels of compounds in complex matrices by conventional low-resolution gas chromatography hyphenated with mass spectrometry is based on the comparison of retention times and abundance ratios of characteristic mass spectrum fragments of analyte peaks from calibrators with sample peaks. Statistically sound criteria for the comparison of these parameters were developed based on the normal distribution of retention times and the simulation of possible non-normal distribution of correlated abundance ratios. The confidence level used to set the statistical maximum and minimum limits of parameters defines the true positive rates of identifications. The false positive rate of identification was estimated from worst-case signal noise models. The estimated true and false positive identification rates from one retention time and two correlated ratios of three fragment abundances were combined using simple Bayes' statistics to estimate the probability of the compound identification being correct, designated the examination uncertainty. Models of the variation of examination uncertainty with analyte quantity allowed the estimation of the Limit of Examination as the lowest quantity that produced "Extremely strong" evidence of compound presence. User-friendly MS-Excel files are made available to allow the easy application of the developed approach in routine and research laboratories. The developed approach was successfully applied to the identification of chlorpyrifos-methyl and malathion in QuEChERS method extracts of vegetables with high water content, for which the estimated Limits of Examination are 0.14 mg kg(-1) and 0.23 mg kg(-1), respectively. Copyright © 2015 Elsevier B.V. All rights reserved.
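
    A toy version of the evidence combination described above: each criterion (retention time, two abundance ratios) contributes a likelihood ratio of its true positive rate to its false positive rate, and simple Bayes' statistics multiplies these onto the prior odds. The rates and prior below are placeholders, and treating the criteria as independent is a simplification of the paper's correlated-ratio simulation.

    ```python
    # Toy sketch of combining identification criteria with simple Bayes'
    # statistics; rates and prior are placeholders, not the paper's values.
    def posterior_probability(prior: float, tp_rates, fp_rates) -> float:
        odds = prior / (1 - prior)
        for tp, fp in zip(tp_rates, fp_rates):
            odds *= tp / fp            # likelihood ratio of each criterion
        return odds / (1 + odds)

    prior = 0.5                        # prior probability the analyte is present
    tp_rates = [0.99, 0.95, 0.95]      # retention time + two abundance ratios
    fp_rates = [0.05, 0.10, 0.10]      # worst-case false positive rates
    p = posterior_probability(prior, tp_rates, fp_rates)
    print(f"probability identification is correct = {p:.4f}")
    ```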

  5. Public health consequences of a false-positive laboratory test result for Brucella--Florida, Georgia, and Michigan, 2005.

    PubMed

    2008-06-06

    Human brucellosis, a nationally notifiable disease, is uncommon in the United States. Most human cases have occurred in returned travelers or immigrants from regions where brucellosis is endemic, or were acquired domestically from eating illegally imported, unpasteurized fresh cheeses. In January 2005, a woman aged 35 years who lived in Nassau County, Florida, received a diagnosis of brucellosis, based on results of a Brucella immunoglobulin M (IgM) enzyme immunoassay (EIA) performed in a commercial laboratory using analyte specific reagents (ASRs); this diagnosis prompted an investigation of dairy products in two other states. Subsequent confirmatory antibody testing by Brucella microagglutination test (BMAT) performed at CDC on the patient's serum was negative. The case did not meet the CDC/Council of State and Territorial Epidemiologists' (CSTE) definition for a probable or confirmed brucellosis case, and the initial EIA result was determined to be a false positive. This report summarizes the case history, laboratory findings, and public health investigations. CDC recommends that Brucella serology testing only be performed using tests cleared or approved by the Food and Drug Administration (FDA) or validated under the Clinical Laboratory Improvement Amendments (CLIA) and shown to reliably detect the presence of Brucella infection. Results from these tests should be considered supportive evidence for recent infection only and interpreted in the context of a clinically compatible illness and exposure history. EIA is not considered a confirmatory Brucella antibody test; positive screening test results should be confirmed by Brucella-specific agglutination (i.e., BMAT or standard tube agglutination test) methods.

  6. Evaluation of single-nucleotide polymorphisms as internal controls in prenatal diagnosis of fetal blood groups.

    PubMed

    Doescher, Andrea; Petershofen, Eduard K; Wagner, Franz F; Schunter, Markus; Müller, Thomas H

    2013-02-01

    Determination of fetal blood groups in maternal plasma samples critically depends on adequate amplification of fetal DNA. We evaluated the routine inclusion of 52 single-nucleotide polymorphisms (SNPs) as internal reference in our polymerase chain reaction (PCR) settings to obtain a positive internal control for fetal DNA. DNA from 223 plasma samples of pregnant women was screened for RHD Exons 3, 4, 5, and 7 in a multiplex PCR including 52 SNPs divided into four primer pools. Amplicons were analyzed by single-base extension and the GeneScan method in a genetic analyzer. Results of D screening were compared to standard RHD genotyping of amniotic fluid or real-time PCR of fetal DNA from maternal plasma. The vast majority of all samples (97.8%) demonstrated differences in maternal and fetal SNP patterns when tested with four primer pools. These differences were not observed in the remaining 2.2% of the samples, most probably due to failure to extract adequate amounts of fetal DNA. Comparison of the fetal genotypes with independent results did not reveal a single false-negative case among samples (n = 42) with positive internal control and negative fetal RHD typing. Coamplification of 52 SNPs with RHD-specific sequences for fetal blood group determination introduces a valid positive control for the amplification of fetal DNA to avoid false-negative results. This new approach does not require a paternal blood sample. It may also be applicable to other assays for fetal genotyping in maternal blood samples. © 2012 American Association of Blood Banks.

  7. [Roaming through methodology. XXXII. False test results].

    PubMed

    van der Weijden, T; van den Akker, M

    2001-05-12

    The number of requests for diagnostic tests is rising. This leads to a higher chance of false test results. The false-negative proportion of a test is the proportion of negative test results among the diseased subjects. The false-positive proportion is the proportion of positive test results among the healthy subjects. The calculation of the false-positive proportion is often incorrect. For example, instead of 1 minus the specificity it is calculated as 1 minus the positive predictive value. This can lead to incorrect decision-making with respect to the application of the test. Physicians must apply diagnostic tests in such a way that the risk of false test results is minimal. The patient should be aware that a perfectly conclusive diagnostic test is rare in medical practice, and should more often be informed of the implications of false-positive and false-negative test results.
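
    The miscalculation flagged above is easy to demonstrate numerically: the false-positive proportion is 1 minus the specificity, a property of the test alone, whereas 1 minus the positive predictive value also depends on prevalence. The numbers below are illustrative only.

    ```python
    # Demonstration of the error described above: the false-positive
    # proportion is 1 - specificity, NOT 1 - positive predictive value.
    # Test characteristics and prevalence are illustrative.
    sens, spec, prev = 0.90, 0.95, 0.02

    fp_proportion = 1 - spec              # correct: P(test+ | healthy)
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    wrong = 1 - ppv                       # the incorrect substitute

    print(f"1 - specificity = {fp_proportion:.1%}")   # 5.0%
    print(f"1 - PPV         = {wrong:.1%}")           # ~73% at 2% prevalence
    ```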

  8. An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion

    PubMed Central

    Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng

    2015-01-01

    The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, nowadays the accuracy of WiFi indoor positioning cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm by weighted fusion. The proposed algorithm is based on traditional location fingerprinting algorithms and consists of two stages: the offline acquisition and the online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition, and it forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select the candidate fingerprints to shorten the positioning time. After that, it uses the improved Euclidean distance and the improved joint probability to calculate two intermediate results, and further calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth the WiFi signal fluctuation and the improved joint probability introduces the logarithmic calculation to reduce the difference between probability values. In experimental comparisons of the proposed algorithm with the Euclidean-distance-based WKNN algorithm and the joint probability algorithm, the results indicate that the proposed algorithm has higher positioning accuracy. PMID:26334278

  9. An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion.

    PubMed

    Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng

    2015-08-31

    The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, nowadays the accuracy of WiFi indoor positioning cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm by weighted fusion. The proposed algorithm is based on traditional location fingerprinting algorithms and consists of two stages: the offline acquisition and the online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition, and it forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select the candidate fingerprints to shorten the positioning time. After that, it uses the improved Euclidean distance and the improved joint probability to calculate two intermediate results, and further calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth the WiFi signal fluctuation and the improved joint probability introduces the logarithmic calculation to reduce the difference between probability values. In experimental comparisons of the proposed algorithm with the Euclidean-distance-based WKNN algorithm and the joint probability algorithm, the results indicate that the proposed algorithm has higher positioning accuracy.
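
    The final fusion step lends itself to a compact sketch: two intermediate (x, y) estimates, one from the improved Euclidean-distance matching and one from the improved (log-domain) joint probability, are combined as a weighted sum. The weights and coordinates below are placeholders, not values from the paper.

    ```python
    # Minimal sketch of the weighted-fusion step: combine two intermediate
    # (x, y) estimates into a final position. Weights/inputs are placeholders.
    def weighted_fusion(p_euclid, p_joint, w_euclid=0.5, w_joint=0.5):
        assert abs(w_euclid + w_joint - 1.0) < 1e-9
        x = w_euclid * p_euclid[0] + w_joint * p_joint[0]
        y = w_euclid * p_euclid[1] + w_joint * p_joint[1]
        return (x, y)

    estimate_euclid = (3.2, 7.9)   # from improved Euclidean-distance matching
    estimate_joint  = (3.6, 8.4)   # from improved (log) joint probability
    print(weighted_fusion(estimate_euclid, estimate_joint, 0.6, 0.4))
    ```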

  10. The IPAC Image Subtraction and Discovery Pipeline for the Intermediate Palomar Transient Factory

    NASA Astrophysics Data System (ADS)

    Masci, Frank J.; Laher, Russ R.; Rebbapragada, Umaa D.; Doran, Gary B.; Miller, Adam A.; Bellm, Eric; Kasliwal, Mansi; Ofek, Eran O.; Surace, Jason; Shupe, David L.; Grillmair, Carl J.; Jackson, Ed; Barlow, Tom; Yan, Lin; Cao, Yi; Cenko, S. Bradley; Storrie-Lombardi, Lisa J.; Helou, George; Prince, Thomas A.; Kulkarni, Shrinivas R.

    2017-01-01

    We describe the near real-time transient-source discovery engine for the intermediate Palomar Transient Factory (iPTF), currently in operation at the Infrared Processing and Analysis Center (IPAC), Caltech. We coin this system the IPAC/iPTF Discovery Engine (or IDE). We review the algorithms used for PSF-matching, image subtraction, detection, photometry, and machine-learned (ML) vetting of extracted transient candidates. We also review the performance of our ML classifier. For a limiting signal-to-noise ratio of 4 in relatively unconfused regions, bogus candidates from processing artifacts and imperfect image subtractions outnumber real transients by ≃10:1. This can be considerably higher for image data with inaccurate astrometric and/or PSF-matching solutions. Despite this occasionally high contamination rate, the ML classifier is able to identify real transients with an efficiency (or completeness) of ≃97% for a maximum tolerable false-positive rate of 1% when classifying raw candidates. All subtraction-image metrics, source features, ML probability-based real-bogus scores, contextual metadata from other surveys, and possible associations with known Solar System objects are stored in a relational database for retrieval by the various science working groups. We review our efforts in mitigating false positives and our experience in optimizing the overall system in response to the multitude of science projects underway with iPTF.
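
    The practical effect of the quoted figures can be checked in a few lines: with bogus candidates outnumbering real transients by about 10:1, an efficiency of 97% at a false-positive rate of 1% implies a precision of roughly 91% on raw candidates.

    ```python
    # Precision implied by the figures above: 10:1 bogus-to-real contamination,
    # 97% efficiency (completeness), 1% false-positive rate.
    real, bogus = 1.0, 10.0
    tpr, fpr = 0.97, 0.01

    true_pos  = tpr * real
    false_pos = fpr * bogus
    precision = true_pos / (true_pos + false_pos)
    print(f"precision on raw candidates ≈ {precision:.1%}")   # ≈ 90.7%
    ```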

  11. High-sensitivity high-selectivity detection of CWAs and TICs using tunable laser photoacoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Pushkarsky, Michael; Webber, Michael; Patel, C. Kumar N.

    2005-03-01

    We provide a general technique for evaluating the performance of an optical sensor for the detection of chemical warfare agents (CWAs) in realistic environments and present data from a simulation model based on a field deployed discretely tunable 13CO2 laser photoacoustic spectrometer (L-PAS). Results of our calculations show the sensor performance in terms of usable sensor sensitivity as a function of probability of false positives (PFP). The false positives arise from the presence of many other gases in the ambient air that could be interferents. Using the L-PAS as it exists today, we can achieve a detection threshold of about 4 ppb for the CWAs while maintaining a PFP of less than 1:10(6). Our simulation permits us to vary a number of parameters in the model to provide guidance for performance improvement. We find that by using a larger density of laser lines (such as those obtained through the use of tunable semiconductor lasers), improving the detector noise and maintaining the accuracy of laser frequency determination, optical detection schemes can make possible CWA sensors having sub-ppb detection capability with <1:10(8) PFP. We also describe the results of a preliminary experiment that verifies the results of the simulation model. Finally, we discuss the use of continuously tunable quantum cascade lasers in L-PAS for CWA and TIC detection.

  12. The IPAC Image Subtraction and Discovery Pipeline for the Intermediate Palomar Transient Factory

    NASA Technical Reports Server (NTRS)

    Masci, Frank J.; Laher, Russ R.; Rebbapragada, Umaa D.; Doran, Gary B.; Miller, Adam A.; Bellm, Eric; Kasliwal, Mansi; Ofek, Eran O.; Surace, Jason; Shupe, David L.; hide

    2016-01-01

    We describe the near real-time transient-source discovery engine for the intermediate Palomar Transient Factory (iPTF), currently in operation at the Infrared Processing and Analysis Center (IPAC), Caltech. We coin this system the IPAC/iPTF Discovery Engine (or IDE). We review the algorithms used for PSF-matching, image subtraction, detection, photometry, and machine-learned (ML) vetting of extracted transient candidates. We also review the performance of our ML classifier. For a limiting signal-to-noise ratio of 4 in relatively unconfused regions, bogus candidates from processing artifacts and imperfect image subtractions outnumber real transients by approximately equal to 10:1. This can be considerably higher for image data with inaccurate astrometric and/or PSF-matching solutions. Despite this occasionally high contamination rate, the ML classifier is able to identify real transients with an efficiency (or completeness) of approximately equal to 97% for a maximum tolerable false-positive rate of 1% when classifying raw candidates. All subtraction-image metrics, source features, ML probability-based real-bogus scores, contextual metadata from other surveys, and possible associations with known Solar System objects are stored in a relational database for retrieval by the various science working groups. We review our efforts in mitigating false positives and our experience in optimizing the overall system in response to the multitude of science projects underway with iPTF.

  13. Accurate decisions in an uncertain world: collective cognition increases true positives while decreasing false positives.

    PubMed

    Wolf, Max; Kurvers, Ralf H J M; Ward, Ashley J W; Krause, Stefan; Krause, Jens

    2013-04-07

    In a wide range of contexts, including predator avoidance, medical decision-making and security screening, decision accuracy is fundamentally constrained by the trade-off between true and false positives. Increased true positives are possible only at the cost of increased false positives; conversely, decreased false positives are associated with decreased true positives. We use an integrated theoretical and experimental approach to show that a group of decision-makers can overcome this basic limitation. Using a mathematical model, we show that a simple quorum decision rule enables individuals in groups to simultaneously increase true positives and decrease false positives. The results from a predator-detection experiment that we performed with humans are in line with these predictions: (i) after observing the choices of the other group members, individuals both increase true positives and decrease false positives, (ii) this effect gets stronger as group size increases, (iii) individuals use a quorum threshold set between the average true- and false-positive rates of the other group members, and (iv) individuals adjust their quorum adaptively to the performance of the group. Our results have broad implications for our understanding of the ecology and evolution of group-living animals and lend themselves to applications in the human domain such as the design of improved screening methods in medical, forensic, security and business applications.
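
    The quorum mechanism can be illustrated with a binomial sketch: if each of n group members independently has the same individual true- and false-positive rates, and the group responds when at least q members respond, the group's rates are binomial tail probabilities. Independence and identical rates are simplifications for illustration, and the individual rates below are placeholders.

    ```python
    from math import comb

    # Illustrative sketch: group true/false positive rates under a quorum
    # rule, assuming n independent members with identical individual rates.
    def group_rate(p_individual: float, n: int, quorum: int) -> float:
        return sum(comb(n, k) * p_individual**k * (1 - p_individual)**(n - k)
                   for k in range(quorum, n + 1))

    n, quorum = 7, 3            # respond if at least 3 of 7 members respond
    tpr_i, fpr_i = 0.6, 0.2     # placeholder individual rates

    print(f"group TPR = {group_rate(tpr_i, n, quorum):.3f}")   # rises above 0.6
    print(f"group FPR = {group_rate(fpr_i, n, quorum):.3f}")   # falls below 0.2
    ```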

  14. Accurate decisions in an uncertain world: collective cognition increases true positives while decreasing false positives

    PubMed Central

    Wolf, Max; Kurvers, Ralf H. J. M.; Ward, Ashley J. W.; Krause, Stefan; Krause, Jens

    2013-01-01

    In a wide range of contexts, including predator avoidance, medical decision-making and security screening, decision accuracy is fundamentally constrained by the trade-off between true and false positives. Increased true positives are possible only at the cost of increased false positives; conversely, decreased false positives are associated with decreased true positives. We use an integrated theoretical and experimental approach to show that a group of decision-makers can overcome this basic limitation. Using a mathematical model, we show that a simple quorum decision rule enables individuals in groups to simultaneously increase true positives and decrease false positives. The results from a predator-detection experiment that we performed with humans are in line with these predictions: (i) after observing the choices of the other group members, individuals both increase true positives and decrease false positives, (ii) this effect gets stronger as group size increases, (iii) individuals use a quorum threshold set between the average true- and false-positive rates of the other group members, and (iv) individuals adjust their quorum adaptively to the performance of the group. Our results have broad implications for our understanding of the ecology and evolution of group-living animals and lend themselves to applications in the human domain such as the design of improved screening methods in medical, forensic, security and business applications. PMID:23407830

  15. Bayesian analysis and classification of two Enzyme-Linked Immunosorbent Assay (ELISA) tests without a gold standard

    PubMed Central

    Zhang, Jingyang; Chaloner, Kathryn; McLinden, James H.; Stapleton, Jack T.

    2013-01-01

    Reconciling two quantitative ELISA tests for an antibody to an RNA virus, in a situation without a gold standard and where false negatives may occur, is the motivation for this work. False negatives occur when access of the antibody to the binding site is blocked. Based on the mechanism of the assay, a mixture of four bivariate normal distributions is proposed with the mixture probabilities depending on a two-stage latent variable model including the prevalence of the antibody in the population and the probabilities of blocking on each test. There is prior information on the prevalence of the antibody, and also on the probability of false negatives, and so a Bayesian analysis is used. The dependence between the two tests is modeled to be consistent with the biological mechanism. Bayesian decision theory is utilized for classification. The proposed method is applied to the motivating data set to classify the data into two groups: those with and those without the antibody. Simulation studies describe the properties of the estimation and the classification. Sensitivity to the choice of the prior distribution is also addressed by simulation. The same model with two levels of latent variables is applicable in other testing procedures such as quantitative polymerase chain reaction tests where false negatives occur when there is a mutation in the primer sequence. PMID:23592433

  16. Categorizing mistaken false positives in regulation of human and environmental health.

    PubMed

    Hansen, Steffen Foss; Krayer von Krauss, Martin P; Tickner, Joel A

    2007-02-01

    One of the concerns often voiced by critics of the precautionary principle is that a widespread regulatory application of the principle will lead to a large number of false positives (i.e., over-regulation of minor risks and regulation of nonexistent risks). The present article proposes a general definition of a regulatory false positive, and seeks to identify case studies that can be considered authentic regulatory false positives. Through a comprehensive review of the science policy literature for proclaimed false positives and interviews with authorities on regulation and the precautionary principle, we identified 88 cases. Following a detailed analysis of these cases, we found that few of the cases mentioned in the literature can be considered to be authentic false positives. As a result, we have developed a number of different categories for these cases of "mistaken false positives," including: real risks, "The jury is still out," nonregulated proclaimed risks, "Too narrow a definition of risk," and risk-risk tradeoffs. These categories are defined and examples are presented in order to illustrate their key characteristics. On the basis of our analysis, we were able to identify only four cases that could be defined as regulatory false positives in the light of today's knowledge and recognized uncertainty: the Southern Corn Leaf Blight, the Swine Flu, Saccharin, and Food Irradiation in relation to consumer health. We conclude that concerns about false positives do not represent a reasonable argument against future application of the precautionary principle.

  17. Conspicuity of renal calculi at unenhanced CT: effects of calculus composition and size and CT technique.

    PubMed

    Tublin, Mitchell E; Murphy, Michael E; Delong, David M; Tessler, Franklin N; Kliewer, Mark A

    2002-10-01

    To determine the effects of calculus size, composition, and technique (kilovolt and milliampere settings) on the conspicuity of renal calculi at unenhanced helical computed tomography (CT). The authors performed unenhanced CT of a phantom containing 188 renal calculi of varying size and chemical composition (brushite, cystine, struvite, weddellite, whewellite, and uric acid) at 24 combinations of four kilovolt (80-140 kV) and six milliampere (200-300 mA) levels. Two radiologists, who were unaware of the location and number of calculi, reviewed the CT images and recorded where stones were detected. These observations were compared with the known positions of calculi to generate true-positive and false-positive rates. Logistic regression analysis was performed to investigate the effects of stone size, composition, and technique and to generate probability estimates of detection. Interobserver agreement was estimated with kappa statistics. Interobserver agreement was high: the mean kappa value for the two observers was 0.86. The conspicuity of stone fragments increased with increasing kilovolt and milliampere levels for all stone types. At the highest settings (140 kV and 300 mA), the detection threshold size (i.e., the size of calculus that had a 50% probability of being detected) ranged from 0.81 mm ± 0.03 (weddellite) to 1.3 mm ± 0.1 (uric acid). Detection threshold size for each type of calculus increased up to 1.17-fold at lower kilovolt settings and up to 1.08-fold at lower milliampere settings. The conspicuity of small renal calculi at CT increases with higher kilovolt and milliampere settings, with higher kilovolts being particularly important. Small uric acid calculi may be imperceptible, even with maximal CT technique.

  18. Novel Interpretation of Molecular Diagnosis of Congenital Toxoplasmosis According to Gestational Age at the Time of Maternal Infection

    PubMed Central

    Sterkers, Yvon; Pratlong, Francine; Albaba, Sahar; Loubersac, Julie; Picot, Marie-Christine; Pretet, Vanessa; Issert, Eric; Boulot, Pierre

    2012-01-01

    From a prospective cohort of 344 women who seroconverted for toxoplasmosis during pregnancy, 344 amniotic fluid, 264 placenta, and 216 cord blood samples were tested for diagnosis of congenital toxoplasmosis using the same PCR assay. The sensitivity and negative predictive value of the PCR assay using amniotic fluid were 86.3% and 97.2%, respectively, and both specificity and positive predictive value were 100%. Using placenta and cord blood, sensitivities were 79.5% and 21.2%, and specificities were 92% and 100%, respectively. In addition, the calculation of pretest and posttest probabilities and the use of logistic regression allowed us to obtain curves that give a dynamic interpretation of the risk of congenital toxoplasmosis according to gestational age at maternal infection, as represented by the three sample types (amniotic fluid, placenta, and cord blood). Two examples are cited here: for a maternal infection at 25 weeks of amenorrhea, a negative result of prenatal diagnosis allowed estimation of the probability of congenital toxoplasmosis at 5% instead of an a priori (pretest) risk estimate of 33%. For an infection at 10 weeks of amenorrhea associated with a pretest congenital toxoplasmosis risk of 7%, a positive PCR result using placenta at birth yields a risk increase to 43%, while a negative result reduces the risk to 0.02%. Thus, with a molecular diagnosis performing at a high level, and in spite of the persistence of false negatives, posttest risk curves using both negative and positive results prove highly informative, allowing a better assessment of the actual risk of congenital toxoplasmosis and finally an improved decision guide to treatment. PMID:23035201
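
    The pretest-to-posttest reasoning above follows directly from Bayes' rule; a small sketch using the reported amniotic-fluid sensitivity (86.3%) and specificity (100%) approximately reproduces the quoted example of a 33% pretest risk falling to a few percent after a negative prenatal PCR (the paper's exact figures rest on its logistic regression, so small differences are expected).

    ```python
    # Sketch of the pretest -> posttest update with the amniotic-fluid PCR
    # characteristics reported above (sens 86.3%, spec 100%).
    def posttest_after_negative(pretest: float, sens: float, spec: float) -> float:
        p_neg_disease = pretest * (1 - sens)          # infected, PCR negative
        p_neg_healthy = (1 - pretest) * spec          # not infected, PCR negative
        return p_neg_disease / (p_neg_disease + p_neg_healthy)

    pretest = 0.33   # maternal infection at 25 weeks of amenorrhea
    risk = posttest_after_negative(pretest, 0.863, 1.0)
    print(f"posttest risk after negative PCR ≈ {risk:.1%}")   # a few percent
    ```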

  19. An analysis of false positive reactions occurring with the Captia Syph G EIA.

    PubMed Central

    Ross, J; Moyes, A; Young, H; McMillan, A

    1991-01-01

    AIM--The Captia Syph G enzyme immunoassay (EIA) offers the potential for the rapid automated detection of syphilis antibodies. This study was designed to assess the role of other sexually transmitted diseases (STDs) in producing false positive reactions in the Captia Syph G EIA. The role of rheumatoid factor (RF) as a potential source of false positives was also analysed. METHODS--Patients who attended a genitourinary medicine (GUM) department and gave a false positive reaction with the EIA between 1988 and 1990 were compared with women undergoing antenatal testing and with the control clinic population (EIA negative) over the same time period. The incidence of sexually transmitted disease (STD) in the clinic population and the false positive reactors was measured in relation to gonorrhoea, chlamydia, genital warts, candidiasis, "other conditions not requiring treatment" and "other conditions requiring treatment." Male:female sex ratios were also compared. Ninety-two RF positive sera were analysed with the EIA. RESULTS--The rate of false positive reactions did not differ with respect to the diagnosis within the GUM clinic population. The antenatal group of women, however, had a lower incidence of false positive reactions than the GUM clinic group. No RF positive sera were positive on Captia Syph G EIA testing. CONCLUSIONS--There is no cross reaction between Captia Syph G EIA and any specific STD or with RF positive sera. The lower incidence of false positive reactions in antenatal women is unexplained but may be related to physiological changes associated with pregnancy. PMID:1743715

  20. 40 CFR 201.28 - Testing by railroad to determine probable compliance with the standard.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Testing by railroad to determine...; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.28 Testing by railroad to determine probable compliance... whether it should institute noise abatement, a railroad may take measurements on its own property at...

  1. 42 CFR 81.6 - Use of radiation dose information.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Use of radiation dose information. 81.6 Section 81... Probability of Causation § 81.6 Use of radiation dose information. Determining probability of causation will require the use of radiation dose information provided to DOL by the National Institute for Occupational...

  2. 42 CFR 81.6 - Use of radiation dose information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Use of radiation dose information. 81.6 Section 81... Probability of Causation § 81.6 Use of radiation dose information. Determining probability of causation will require the use of radiation dose information provided to DOL by the National Institute for Occupational...

  3. 42 CFR 81.6 - Use of radiation dose information.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Use of radiation dose information. 81.6 Section 81... Probability of Causation § 81.6 Use of radiation dose information. Determining probability of causation will require the use of radiation dose information provided to DOL by the National Institute for Occupational...

  4. 42 CFR 81.6 - Use of radiation dose information.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Use of radiation dose information. 81.6 Section 81... Probability of Causation § 81.6 Use of radiation dose information. Determining probability of causation will require the use of radiation dose information provided to DOL by the National Institute for Occupational...

  5. Toward "Constructing" the Concept of Statistical Power: An Optical Analogy.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

    This paper presents a visual analogy that may be used by instructors to teach the concept of statistical power in statistical courses. Statistical power is mathematically defined as the probability of rejecting a null hypothesis when that null is false, or, equivalently, the probability of detecting a relationship when it exists. The analogy…
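
    The definition quoted above translates directly into a calculation. As a minimal sketch under a normal approximation (parameters illustrative, not from the paper): power is the probability that a two-sided z-test rejects when the true effect sits a given number of standard errors from zero.

    ```python
    from math import erf, sqrt

    # Sketch: statistical power of a two-sided z-test under a normal
    # approximation. Power = P(reject H0 | H0 false). Parameters illustrative.
    def normal_cdf(x: float) -> float:
        return 0.5 * (1 + erf(x / sqrt(2)))

    def power(effect: float, n: int, sigma: float = 1.0,
              z_crit: float = 1.96) -> float:
        shift = effect / (sigma / sqrt(n))   # true effect in standard errors
        return (1 - normal_cdf(z_crit - shift)) + normal_cdf(-z_crit - shift)

    print(f"power at effect 0.5, n=30: {power(0.5, 30):.2f}")   # ≈ 0.78
    ```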

  6. Statistical and Adaptive Signal Processing for UXO Discrimination for Next-Generation Sensor Data

    DTIC Science & Technology

    2009-09-01

    using the energies of all polarizations as features in a KNN classifier variant resulted in 100% probability of detection at a probability of false...

  7. Eternal inflation, bubble collisions, and the disintegration of the persistence of memory

    NASA Astrophysics Data System (ADS)

    Freivogel, Ben; Kleban, Matthew; Nicolis, Alberto; Sigurdson, Kris

    2009-08-01

    We compute the probability distribution for bubble collisions in an inflating false vacuum which decays by bubble nucleation. Our analysis generalizes previous work of Guth, Garriga, and Vilenkin to the case of general cosmological evolution inside the bubble, and takes into account the dynamics of the domain walls that form between the colliding bubbles. We find that incorporating these effects changes the results dramatically: the total expected number of bubble collisions in the past lightcone of a typical observer is N ~ γ V_f/V_i, where γ is the fastest decay rate of the false vacuum, V_f is its vacuum energy, and V_i is the vacuum energy during inflation inside the bubble. This number can be large in realistic models without tuning. In addition, we calculate the angular position and size distribution of the collisions on the cosmic microwave background sky, and demonstrate that the number of bubbles of observable angular size is N_LS ~ (Ω_k)^(1/2) N, where Ω_k is the curvature contribution to the total density at the time of observation. The distribution is almost exactly isotropic.

  8. A portfolio-based approach to optimize proof-of-concept clinical trials.

    PubMed

    Mallinckrodt, Craig; Molenberghs, Geert; Persinger, Charles; Ruberg, Stephen; Sashegyi, Andreas; Lindborg, Stacy

    2012-01-01

    Improving proof-of-concept (PoC) studies is a primary lever for improving drug development. Since drug development is often done by institutions that work on multiple drugs simultaneously, the present work focused on optimum choices for rates of false positive (α) and false negative (β) results across a portfolio of PoC studies. Simple examples and a newly derived equation provided conceptual understanding of basic principles regarding optimum choices of α and β in PoC trials. In examples that incorporated realistic development costs and constraints, the levels of α and β that maximized the number of approved drugs and portfolio value varied by scenario. Optimum choices were sensitive to the probability the drug was effective and to the proportion of total investment cost prior to establishing PoC. Results of the present investigation agree with previous research in that it is important to assess optimum levels of α and β. However, the present work also highlighted the need to consider cost structure using realistic input parameters relevant to the question of interest.
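
    The portfolio trade-off described above can be sketched as a grid search: for each (α, β) pair, compute an expected portfolio value under made-up costs, a made-up prior probability that a candidate works, and a crude rule that tighter error rates require costlier studies. This is a conceptual illustration, not the equation derived in the paper.

    ```python
    # Conceptual sketch (not the paper's equation): choose PoC error rates
    # (alpha, beta) to maximize expected portfolio value. All inputs made up.
    p_works    = 0.2      # prior probability a candidate drug is effective
    cost_poc   = 1.0      # baseline cost of one PoC study (arbitrary units)
    cost_late  = 10.0     # post-PoC development cost per advanced candidate
    value_drug = 100.0    # value of one approved drug
    budget     = 100.0    # total PoC budget

    def portfolio_value(alpha: float, beta: float) -> float:
        # crude rule: tighter error rates need larger, costlier studies
        study_cost = cost_poc * (1 / alpha + 1 / beta) / 20
        n_studies = budget / study_cost
        advance_rate = p_works * (1 - beta) + (1 - p_works) * alpha
        true_hits = n_studies * p_works * (1 - beta)
        return true_hits * value_drug - n_studies * advance_rate * cost_late

    best = max(((a, b) for a in (0.05, 0.1, 0.2) for b in (0.1, 0.2, 0.3)),
               key=lambda ab: portfolio_value(*ab))
    print("best (alpha, beta) under this toy model:", best)
    ```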

  9. Meta-analysis of attitudes toward damage-causing mammalian wildlife.

    PubMed

    Kansky, Ruth; Kidd, Martin; Knight, Andrew T

    2014-08-01

    Many populations of threatened mammals persist outside formally protected areas, and their survival depends on the willingness of communities to coexist with them. An understanding of the attitudes, and specifically the tolerance, of individuals and communities and the factors that determine these is therefore fundamental to designing strategies to alleviate human-wildlife conflict. We conducted a meta-analysis to identify factors that affected attitudes toward 4 groups of terrestrial mammals. Elephants (65%) elicited the most positive attitudes, followed by primates (55%), ungulates (53%), and carnivores (44%). Urban residents presented the most positive attitudes (80%), followed by commercial farmers (51%) and communal farmers (26%). A tolerance to damage index showed that human tolerance of ungulates and primates was proportional to the probability of experiencing damage while elephants elicited tolerance levels higher than anticipated and carnivores elicited tolerance levels lower than anticipated. Contrary to conventional wisdom, experiencing damage was not always the dominant factor determining attitudes. Communal farmers had a lower probability of being positive toward carnivores irrespective of probability of experiencing damage, while commercial farmers and urban residents were more likely to be positive toward carnivores irrespective of damage. Urban residents were more likely to be positive toward ungulates, elephants, and primates when probability of damage was low, but not when it was high. Commercial and communal farmers had a higher probability of being positive toward ungulates, primates, and elephants irrespective of probability of experiencing damage. Taxonomic bias may therefore be important. Identifying the distinct factors explaining these attitudes and the specific contexts in which they operate, inclusive of the species causing damage, will be essential for prioritizing conservation investments. © 2014 The Authors. Conservation Biology published by Wiley Periodicals, Inc., on behalf of the Society for Conservation Biology.

  10. Nothing is perfect, not even the local lymph node assay: a commentary and the implications for REACH.

    PubMed

    Basketter, David A; McFadden, John F; Gerberick, Frank; Cockshott, Amanda; Kimber, Ian

    2009-02-01

    For many regulatory authorities, the local lymph node assay (LLNA) is the preferred assay for the predictive identification of skin-sensitizing chemicals. It is the initial requirement for sensitization testing within the new REACH (Registration, Evaluation, Authorization and Restriction of Chemical substances) regulations in the European Union. The primary reasons for the preferment of the LLNA are the animal welfare benefits it provides compared with traditional guinea-pig methods (refinement and reduction of animal usage) and the general performance characteristics of the assay with regard to overall reliability, accuracy, and interpretation. Moreover, a substantial published literature on the LLNA is available making it appropriate for use as a benchmark against which new approaches, including in vitro alternatives, can be evaluated and validated. There is, therefore, a view that the LLNA represents the 'gold standard' for skin sensitization testing. However, although this is probably correct, it is important to recognize and acknowledge that in common with all other predictive tests (whether they be validated or not), the LLNA has limitations, in addition to strengths, some of which were mentioned above. Arguably, it is the limitations (e.g., the occurrence of false positive and false negative results) of test methods that are most important to understand. With respect to the LLNA, these limitations are similar to those associated with guinea-pig skin sensitization methods. Among these are the occurrence of false positive and false negative results, susceptibility of results to changes in vehicle, and the possibility that interspecies differences may confound interpretation. In this commentary, these issues are reviewed and their impact on the utility of the LLNA for identification, classification, and potency assessment of skin sensitizers are considered. In addition, their relevance for the future development and validation of novel in vitro and in silico alternatives is explored.

  11. The Diagnostic Utility of Bact/ALERT and Nested PCR in the Diagnosis of Tuberculous Meningitis.

    PubMed

    Sastry, Apurba Sankar; Bhat K, Sandhya; Kumudavathi

    2013-01-01

    The early laboratory diagnosis of Tuberculous Meningitis (TBM) is crucial, to start the antitubercular chemotherapy and to prevent its complications. However, the conventional methods are either less sensitive or time consuming. Hence, the diagnostic potentials of BacT/ALERT and the Polymerase Chain Reaction (PCR) were evaluated in this study. The study group comprised 62 cases and 33 controls. The cases were divided according to Ahuja's criteria into the confirmed (two cases), highly probable (19 cases), probable (26 cases) and the possible (15 cases) subgroups. Ziehl Neelsen's (ZN) and Auramine Phenol (AP) staining, Lowenstein Jensen (LJ) medium culture, BacT/ALERT and nested PCR targeting IS6110 were carried out on all the patients. The sensitivity of the LJ culture was 3.22%. BacT/ALERT showed a sensitivity and a specificity of 25.80% and 100%, and those of nested PCR were found to be 40.32% and 96.97%, respectively. The mean detection time of growth of the LJ culture was 31.28 days, whereas that of BacT/ALERT was 20.68 days. The contamination rates in the LJ culture and BacT/ALERT were 7.2% and 5.8%, respectively. Nested PCR was found to be the most sensitive, followed by BacT/ALERT, as compared to the LJ culture and smear microscopy. As both false negative and false positive results have been reported for nested PCR, it should not be used alone as a criterion for initiating or terminating the therapy, but should be supported by clinical, radiological, cytological and other microbiological findings.

  12. Frequency of false positive rapid HIV serologic tests in African men and women receiving PrEP for HIV prevention: implications for programmatic roll-out of biomedical interventions.

    PubMed

    Ndase, Patrick; Celum, Connie; Kidoguchi, Lara; Ronald, Allan; Fife, Kenneth H; Bukusi, Elizabeth; Donnell, Deborah; Baeten, Jared M

    2015-01-01

    Rapid HIV assays are the mainstay of HIV testing globally. Delivery of effective biomedical HIV prevention strategies such as antiretroviral pre-exposure prophylaxis (PrEP) requires periodic HIV testing. Because rapid tests have high (>95%) but imperfect specificity, they are expected to generate some false positive results. We assessed the frequency of true and false positive rapid results in the Partners PrEP Study, a randomized, placebo-controlled trial of PrEP. HIV testing was performed monthly using 2 rapid tests done in parallel with HIV enzyme immunoassay (EIA) confirmation following all positive rapid tests. A total of 99,009 monthly HIV tests were performed; 98,743 (99.7%) were dual-rapid HIV negative. Of the 266 visits with ≥1 positive rapid result, 99 (37.2%) had confirmatory positive EIA results (true positives), 155 (58.3%) had negative EIA results (false positives), and 12 (4.5%) had discordant EIA results. In the active PrEP arms, over two-thirds of visits with positive rapid test results were false positive results (69.2%, 110 of 159), although false positive results occurred at <1% (110/65,945) of total visits. When HIV prevalence or incidence is low due to effective HIV prevention interventions, rapid HIV tests result in a high number of false relative to true positive results, although the absolute number of false results will be low. Program roll-out for effective interventions should plan for quality assurance of HIV testing, mechanisms for confirmatory HIV testing, and counseling strategies for persons with positive rapid test results.

  13. Experimental investigation of observation error in anuran call surveys

    USGS Publications Warehouse

    McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.

    2010-01-01

    Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than are sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate for false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys. © 2010 The Wildlife Society.

  14. Computational prediction of protein interactions related to the invasion of erythrocytes by malarial parasites.

    PubMed

    Liu, Xuewu; Huang, Yuxiao; Liang, Jiao; Zhang, Shuai; Li, Yinghui; Wang, Jun; Shen, Yan; Xu, Zhikai; Zhao, Ya

    2014-11-30

    The invasion of red blood cells (RBCs) by malarial parasites is an essential step in the life cycle of Plasmodium falciparum. Human-parasite surface protein-protein interactions (PPIs) play a critical role in this process. Although several interactions between human and parasite proteins have been discovered, the mechanism related to invasion remains poorly understood because numerous human-parasite protein interactions have not yet been identified. High-throughput screening experiments are not feasible for malarial parasites due to difficulty in expressing the parasite proteins. Here, we performed computational prediction of the PPIs involved in malaria parasite invasion to elucidate the mechanism by which invasion occurs. In this study, an expectation maximization algorithm was used to estimate the probabilities of domain-domain interactions (DDIs). Estimates of DDI probabilities were then used to infer PPI probabilities. Prediction performance when information from all six species was used was better than that based on the information of D. melanogaster alone. Prediction performance was assessed using protein interaction data from S. cerevisiae, indicating that the predicted results were reliable. We then used the estimates of DDI probabilities to infer interactions between 490 parasite and 3,787 human membrane proteins. A small-scale dataset was used to illustrate the usability of our method in predicting interactions between human and parasite proteins. The positive predictive value (PPV) was lower than that observed in S. cerevisiae. We integrated gene expression data to improve prediction accuracy and to reduce false positives. We identified 80 membrane proteins highly expressed in the schizont stage by the fast Fourier transform method. Approximately 221 erythrocyte membrane proteins were identified using published mass spectral datasets. A network consisting of 205 interactions was predicted. Results of network analysis suggest that SNARE proteins of parasites and APP of humans may function in the invasion of RBCs by parasites. We predicted a small-scale PPI network that may be involved in parasite invasion of RBCs by integrating DDI information and expression profiles. Experimental studies should be conducted to validate the predicted interactions. The predicted PPIs help elucidate the mechanism of parasite invasion and provide directions for future experimental investigations.

  15. Bayesian statistical inference enhances the interpretation of contemporary randomized controlled trials.

    PubMed

    Wijeysundera, Duminda N; Austin, Peter C; Hux, Janet E; Beattie, W Scott; Laupacis, Andreas

    2009-01-01

    Randomized trials generally use "frequentist" statistics based on P-values and 95% confidence intervals. Frequentist methods have limitations that might be overcome, in part, by Bayesian inference. To illustrate these advantages, we re-analyzed randomized trials published in four general medical journals during 2004. We used Medline to identify randomized superiority trials with two parallel arms, individual-level randomization and dichotomous or time-to-event primary outcomes. Studies with P<0.05 in favor of the intervention were deemed "positive"; otherwise, they were "negative." We used several prior distributions and exact conjugate analyses to calculate Bayesian posterior probabilities for clinically relevant effects. Of 88 included studies, 39 were positive using a frequentist analysis. Although the Bayesian posterior probabilities of any benefit (relative risk or hazard ratio<1) were high in positive studies, these probabilities were lower and variable for larger benefits. The positive studies had only moderate probabilities for exceeding the effects that were assumed for calculating the sample size. By comparison, there were moderate probabilities of any benefit in negative studies. Bayesian and frequentist analyses complement each other when interpreting the results of randomized trials. Future reports of randomized trials should include both.
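
    A minimal version of the conjugate analysis described above: with Beta priors on each arm's event probability, the posteriors are again Beta, and the posterior probability of any benefit can be estimated by Monte Carlo sampling. The event counts and priors below are invented for illustration, not taken from any of the re-analyzed trials.

    ```python
    import random

    # Minimal conjugate Bayesian sketch for a two-arm trial with a
    # dichotomous outcome: Beta priors, Beta posteriors, Monte Carlo
    # estimate of P(benefit). Counts and priors invented for illustration.
    random.seed(0)

    def beta_sample(a: float, b: float) -> float:
        x = random.gammavariate(a, 1.0)
        return x / (x + random.gammavariate(b, 1.0))

    events_t, n_t = 30, 200     # treatment arm
    events_c, n_c = 45, 200     # control arm
    a0, b0 = 1.0, 1.0           # uniform Beta(1, 1) prior for both arms

    draws = 100_000
    benefit = sum(
        beta_sample(a0 + events_t, b0 + n_t - events_t)
        < beta_sample(a0 + events_c, b0 + n_c - events_c)
        for _ in range(draws))
    print(f"posterior P(treatment risk < control risk) ≈ {benefit / draws:.3f}")
    ```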

  16. Memory for media: investigation of false memories for negatively and positively charged public events.

    PubMed

    Porter, Stephen; Taylor, Kristian; Ten Brinke, Leanne

    2008-01-01

    Despite a large body of false memory research, little has addressed the potential influence of an event's emotional content on susceptibility to false recollections. The Paradoxical Negative Emotion (PNE) hypothesis predicts that negative emotion generally facilitates memory but also heightens susceptibility to false memories. Participants were asked whether they could recall 20 "widely publicised" public events (half fictitious) ranging in emotional valence, with or without visual cues. Participants recalled a greater number of true negative events (M=3.31/5) than true positive (M=2.61/5) events. Nearly everyone (95%) came to recall at least one false event (M=2.15 false events recalled). Further, more than twice as many participants recalled any false negative (90%) compared to false positive (41.7%) events. Negative events, in general, were associated with more detailed memories and false negative event memories were more detailed than false positive event memories. Higher dissociation scores were associated with false recollections of negative events, specifically.

  17. On Schrödinger's bridge problem

    NASA Astrophysics Data System (ADS)

    Friedland, S.

    2017-11-01

    In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix using Brouwer's fixed point theorem. This result proves the Georgiou-Pavon conjecture for two positive definite density matrices, made in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
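
    The classical (matrix) half of the result admits a short fixed-point sketch in the spirit of Sinkhorn scaling: alternately rescale rows and columns of a positive matrix so that the scaled matrix is column stochastic and maps p to q. This is an illustrative iteration consistent with the statement above, not the paper's proof technique, and convergence is assumed rather than demonstrated here.

    ```python
    # Illustrative Sinkhorn-type iteration (not the paper's proof): scale a
    # positive matrix A by diagonal factors so the result is column
    # stochastic and maps probability vector p to probability vector q.
    def scale_to_bridge(A, p, q, iters=500):
        n = len(A)
        c = [1.0] * n                               # column scaling factors
        for _ in range(iters):
            # row factors: force the scaled matrix to map p to q
            r = [q[i] / sum(A[i][j] * c[j] * p[j] for j in range(n))
                 for i in range(n)]
            # column factors: force each column to sum to 1
            c = [1.0 / sum(r[i] * A[i][j] for i in range(n)) for j in range(n)]
        return [[r[i] * A[i][j] * c[j] for j in range(n)] for i in range(n)]

    A = [[1.0, 2.0], [3.0, 1.0]]
    p, q = [0.5, 0.5], [0.3, 0.7]
    B = scale_to_bridge(A, p, q)
    print([sum(B[i][j] for i in range(2)) for j in range(2)])         # columns = 1
    print([sum(B[i][j] * p[j] for j in range(2)) for i in range(2)])  # ≈ q
    ```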

  18. Generalized site occupancy models allowing for false positive and false negative errors

    USGS Publications Warehouse

    Royle, J. Andrew; Link, W.A.

    2006-01-01

    Site occupancy models have been developed that allow for imperfect species detection or "false negative" observations. Such models have become widely adopted in surveys of many taxa. The most fundamental assumption underlying these models is that "false positive" errors are not possible. That is, one cannot detect a species where it does not occur. However, such errors are possible in many sampling situations for a number of reasons, and even low false positive error rates can induce extreme bias in estimates of site occupancy when they are not accounted for. In this paper, we develop a model for site occupancy that allows for both false negative and false positive error rates. This model can be represented as a two-component finite mixture model and can be easily fitted using freely available software. We provide an analysis of avian survey data using the proposed model and present results of a brief simulation study evaluating the performance of the maximum-likelihood estimator and the naive estimator in the presence of false positive errors.
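
    The two-component mixture can be written down directly: a site is occupied with probability psi; an occupied site yields a detection on each visit with probability p11, an unoccupied site with false-positive probability p10, and the site likelihood mixes the two binomial components. The sketch below uses invented detection histories and a crude grid search in place of the maximum-likelihood machinery described in the paper.

    ```python
    from math import comb, log

    # Toy sketch of the false-positive site-occupancy likelihood: a site is
    # occupied w.p. psi; detections per visit occur w.p. p11 if occupied and
    # w.p. p10 (false positives) if not. Not the authors' implementation.
    def site_loglik(y: int, k: int, psi: float, p11: float, p10: float) -> float:
        binom = lambda p: comb(k, y) * p**y * (1 - p)**(k - y)
        return log(psi * binom(p11) + (1 - psi) * binom(p10))

    # detection histories: (detections, visits) per site -- invented data
    sites = [(3, 5), (0, 5), (1, 5), (4, 5), (0, 5)]

    def total_loglik(params):
        psi, p11, p10 = params
        return sum(site_loglik(y, k, psi, p11, p10) for y, k in sites)

    # crude grid search in place of a proper optimizer
    grid = [i / 20 for i in range(1, 20)]
    best = max(((a, b, c) for a in grid for b in grid for c in grid if c < b),
               key=total_loglik)
    print("psi, p11, p10 =", best)
    ```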

  19. Comparison of 4th-Generation HIV Antigen/Antibody Combination Assay With 3rd-Generation HIV Antibody Assays for the Occurrence of False-Positive and False-Negative Results.

    PubMed

    Muthukumar, Alagarraju; Alatoom, Adnan; Burns, Susan; Ashmore, Jerry; Kim, Anne; Emerson, Brian; Bannister, Edward; Ansari, M Qasim

    2015-01-01

    To assess the false-positive and false-negative rates of a 4th-generation human immunodeficiency virus (HIV) assay, the Abbott ARCHITECT, vs 2 HIV 3rd-generation assays, the Siemens Centaur and the Ortho-Clinical Diagnostics Vitros. We examined 123 patient specimens. In the first phase of the study, we compared 99 specimens that had a positive screening result via the 3rd-generation Vitros assay (10 positive, 82 negative, and 7 indeterminate via confirmatory immunofluorescent assay [IFA]/Western blot [WB] testing). In the second phase, we assessed 24 HIV-1 RNA-positive (positive result via the nucleic acid amplification test [NAAT] and negative/indeterminate results via the WB test) specimens harboring acute HIV infection. The 4th-generation ARCHITECT assay yielded fewer false-positive results (n = 2) than the 3rd-generation Centaur (n = 9; P = .02) and Vitros (n = 82; P <.001) assays. One confirmed positive case had a false-negative result via the Centaur assay. When specimens from the 24 patients with acute HIV-1 infection were tested, the ARCHITECT assay yielded fewer false-negative results (n = 5) than the Centaur (n = 10) (P = .13) and the other 3rd-generation tests (n = 16) (P = .002). This study indicates that the 4th-generation ARCHITECT HIV assay yields fewer false-positive and false-negative results than the 3rd-generation HIV assays we tested. Copyright © by the American Society for Clinical Pathology (ASCP).

  20. The diagnostic performance of coronary artery angiography with 64-MSCT and post 64-MSCT: systematic review and meta-analysis.

    PubMed

    Li, Min; Du, Xiang-Min; Jin, Zhi-Tao; Peng, Zhao-Hui; Ding, Juan; Li, Li

    2014-01-01

    To comprehensively investigate the diagnostic performance of coronary artery angiography with 64-MDCT and post 64-MDCT. PubMed was searched for all published studies that evaluated coronary arteries with 64-MDCT and post 64-MDCT. The clinical diagnostic role was evaluated by applying the likelihood ratios (LRs) to calculate the post-test probability based on Bayes' theorem. 91 studies that met our inclusion criteria were ultimately included in the analysis. The pooled positive and negative LRs at patient level were 8.91 (95% CI, 7.53, 10.54) and 0.02 (CI, 0.01, 0.03), respectively. For studies that did not claim that non-evaluable segments were included, the pooled positive and negative LRs were 11.16 (CI, 8.90, 14.00) and 0.01 (CI, 0.01, 0.03), respectively. For studies including uninterpretable results, the diagnostic performance decreased, with the pooled positive LR 7.40 (CI, 6.00, 9.13) and negative LR 0.02 (CI, 0.01, 0.03). The areas under the summary ROC curve were 0.98 (CI, 0.97 to 0.99) for 64-MDCT and 0.96 (CI, 0.94 to 0.98) for post 64-MDCT, respectively. For references explicitly stating that the non-assessable segments were included during analysis, a post-test probability of negative results >95% and a positive post-test probability <95% could be obtained for patients with a pre-test probability of <73% for coronary artery disease (CAD). On the other hand, when the pre-test probability of CAD was >73%, the diagnostic role was reversed, with a positive post-test probability of CAD >95% and a negative post-test probability of CAD <95%. The diagnostic performance of post 64-MDCT does not increase as compared with 64-MDCT. CTA, overall, is a test of exclusion for patients with a pre-test probability of CAD <73%, while for patients with a pre-test probability of CAD >73%, CTA is a test used to confirm the presence of CAD.
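
    The post-test calculations above are the odds form of Bayes' theorem: pretest odds multiplied by the likelihood ratio give posttest odds. Using the pooled LRs reported above, the snippet reproduces the behavior at the quoted 73% pretest boundary.

    ```python
    # Post-test probability from pretest probability and a likelihood ratio,
    # using the pooled LRs reported above (LR+ = 8.91, LR- = 0.02).
    def posttest(pretest: float, lr: float) -> float:
        odds = pretest / (1 - pretest) * lr
        return odds / (1 + odds)

    pretest = 0.73
    print(f"positive CTA: {posttest(pretest, 8.91):.1%}")   # ~96% -> rules in
    print(f"negative CTA: {posttest(pretest, 0.02):.1%}")   # ~5%  -> rules out
    ```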

  1. Probability cueing of distractor locations: both intertrial facilitation and statistical learning mediate interference reduction.

    PubMed

    Goschy, Harriet; Bakos, Sarolta; Müller, Hermann J; Zehetleitner, Michael

    2014-01-01

    Targets in a visual search task are detected faster if they appear in a probable target region as compared to a less probable target region, an effect which has been termed "probability cueing." The present study investigated whether probability cueing cannot only speed up target detection, but also minimize distraction by distractors in probable distractor regions as compared to distractors in less probable distractor regions. To this end, three visual search experiments with a salient, but task-irrelevant, distractor ("additional singleton") were conducted. Experiment 1 demonstrated that observers can utilize uneven spatial distractor distributions to selectively reduce interference by distractors in frequent distractor regions as compared to distractors in rare distractor regions. Experiments 2 and 3 showed that intertrial facilitation, i.e., distractor position repetitions, and statistical learning (independent of distractor position repetitions) both contribute to the probability cueing effect for distractor locations. Taken together, the present results demonstrate that probability cueing of distractor locations has the potential to serve as a strong attentional cue for the shielding of likely distractor locations.

  2. High But Not Low Probability of Gain Elicits a Positive Feeling Leading to the Framing Effect.

    PubMed

    Gosling, Corentin J; Moutier, Sylvain

    2017-01-01

    Human risky decision-making is known to be highly susceptible to profit-motivated responses elicited by the way in which options are framed. In fact, studies investigating the framing effect have shown that the choice between sure and risky options depends on how these options are presented. Interestingly, the probability of gain of the risky option has been highlighted as one of the main factors causing variations in susceptibility to the framing effect. However, while it has been shown that high probabilities of gain of the risky option systematically lead to framing bias, questions remain about the influence of low probabilities of gain. Therefore, the first aim of this paper was to clarify the respective roles of high and low probabilities of gain in the framing effect. Due to the difference between studies using a within- or between-subjects design, we conducted a first study investigating the respective roles of these designs. For both designs, we showed that trials with a high probability of gain led to the framing effect whereas those with a low probability did not. Second, as emotions are known to play a key role in the framing effect, we sought to determine whether they are responsible for such a debiasing effect of the low probability of gain. Our second study thus investigated the relationship between emotion and the framing effect depending on high and low probabilities. Our results revealed that positive emotion was related to risk-seeking in the loss frame, but only for trials with a high probability of gain. Taken together, these results support the interpretation that low probabilities of gain suppress the framing effect because they prevent the positive emotion of gain anticipation.

  3. High But Not Low Probability of Gain Elicits a Positive Feeling Leading to the Framing Effect

    PubMed Central

    Gosling, Corentin J.; Moutier, Sylvain

    2017-01-01

    Human risky decision-making is known to be highly susceptible to profit-motivated responses elicited by the way in which options are framed. In fact, studies investigating the framing effect have shown that the choice between sure and risky options depends on how these options are presented. Interestingly, the probability of gain of the risky option has been highlighted as one of the main factors causing variations in susceptibility to the framing effect. However, while it has been shown that high probabilities of gain of the risky option systematically lead to framing bias, questions remain about the influence of low probabilities of gain. Therefore, the first aim of this paper was to clarify the respective roles of high and low probabilities of gain in the framing effect. Due to the difference between studies using a within- or between-subjects design, we conducted a first study investigating the respective roles of these designs. For both designs, we showed that trials with a high probability of gain led to the framing effect whereas those with a low probability did not. Second, as emotions are known to play a key role in the framing effect, we sought to determine whether they are responsible for such a debiasing effect of the low probability of gain. Our second study thus investigated the relationship between emotion and the framing effect depending on high and low probabilities. Our results revealed that positive emotion was related to risk-seeking in the loss frame, but only for trials with a high probability of gain. Taken together, these results support the interpretation that low probabilities of gain suppress the framing effect because they prevent the positive emotion of gain anticipation. PMID:28232808

  4. False-positive alarms for bacterial screening of platelet concentrates with BacT/ALERT new-generation plastic bottles: a multicenter pilot study.

    PubMed

    Hundhausen, T; Müller, T H

    2005-08-01

    The microbial detection system BacT/ALERT (bioMérieux) is widely used to monitor bacterial contamination of platelet concentrates (PCs). Recently, the manufacturer introduced polycarbonate culture bottles and a modified pH-sensitive liquid emulsion sensor as microbial growth indicator. This reconfigured assay was investigated in a routine setting. In each of eight transfusion centers, samples from 500 consecutive PCs were monitored for 1 week. For all PCs with a positive BacT/ALERT signal, retained samples and, if available, original PC containers and concomitant red blood cell concentrates were analyzed independently. Initially BacT/ALERT-positive PCs without bacterial identification in any sample were defined as false-positive. BacT/ALERT-positive PCs with bacteria in the first sample only were called potentially positive. PCs with bacteria in the first sample and the same strain in at least one additional sample were accepted as positive. Five PCs (0.13%) were positive, 9 PCs (0.23%) were potentially positive, and 35 PCs (0.9%) were false-positive. The rate of false-positive BacT/ALERT results varied substantially between centers (<0.2%-3.2%). Tracings from false-positive cultures lacked an exponential increase of the signal during incubation. Most of these false-positives were due to malfunctioning cells in various BacT/ALERT incubation units. Careful assessment of individual tracings of samples with positive signals helps to identify malfunctioning incubation units. Their early shutdown or replacement minimizes the high rate of unrectifiable product rejects attributed to false-positive alarms and avoids unnecessary concern of doctors and patients after conversion to a reconfigured BacT/ALERT assay.

  5. Development of structure-activity relationship for metal oxide nanoparticles

    NASA Astrophysics Data System (ADS)

    Liu, Rong; Zhang, Hai Yuan; Ji, Zhao Xia; Rallo, Robert; Xia, Tian; Chang, Chong Hyun; Nel, Andre; Cohen, Yoram

    2013-05-01

    Nanomaterial structure-activity relationships (nano-SARs) for metal oxide nanoparticle (NP) toxicity were investigated using metrics based on dose-response analysis and consensus self-organizing map clustering. The NP cellular toxicity dataset included toxicity profiles consisting of seven different assays for human bronchial epithelial (BEAS-2B) and murine myeloid (RAW 264.7) cells, over a concentration range of 0.39-100 mg L-1 and exposure time up to 24 h, for twenty-four different metal oxide NPs. Various nano-SAR building models were evaluated, based on an initial pool of thirty NP descriptors. The conduction band energy and ionic index (often correlated with the hydration enthalpy) were identified as suitable NP descriptors that are consistent with suggested toxicity mechanisms for metal oxide NPs and metal ions. The best performing nano-SAR with the above two descriptors, built with a support vector machine (SVM) model and of validated robustness, had a balanced classification accuracy of ~94%. An applicability domain for the present data was established with a reasonable confidence level of 80%. Given the potential role of nano-SARs in decision making regarding the environmental impact of NPs, the class probabilities provided by the SVM nano-SAR enabled the construction of decision boundaries with respect to toxicity classification under different acceptance levels of false negative relative to false positive predictions. Electronic supplementary information (ESI) available. See DOI: 10.1039/c3nr01533e
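    The final step described above, turning SVM class probabilities into cost-weighted decision boundaries, can be sketched as follows. This is a hypothetical illustration on synthetic data, not the authors' code; the feature meanings and the 5:1 false-negative-to-false-positive cost ratio are assumptions:

    ```python
    # Cost-sensitive thresholding of SVM class probabilities (synthetic data).
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))                  # stand-ins for the two descriptors
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic toxic / non-toxic labels

    clf = SVC(probability=True).fit(X, y)          # Platt scaling -> class probabilities
    p_toxic = clf.predict_proba(X)[:, 1]

    # Under Bayes risk, a false-negative:false-positive cost ratio of c moves the
    # decision threshold from 0.5 to 1 / (1 + c); here c = 5 (assumed).
    c = 5.0
    threshold = 1.0 / (1.0 + c)
    print(f"threshold = {threshold:.3f}, flagged toxic: {(p_toxic > threshold).sum()}")
    ```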

  6. 42 CFR 81.21 - Cancers requiring the use of NIOSH-IREP.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Cancers requiring the use of NIOSH-IREP. 81.21... Probability of Causation § 81.21 Cancers requiring the use of NIOSH-IREP. (a) DOL will calculate probability of causation for all cancers, except chronic lymphocytic leukemia as provided under § 81.30, using...

  7. 42 CFR 81.21 - Cancers requiring the use of NIOSH-IREP.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Cancers requiring the use of NIOSH-IREP. 81.21... Probability of Causation § 81.21 Cancers requiring the use of NIOSH-IREP. (a) DOL will calculate probability of causation for all cancers, except chronic lymphocytic leukemia as provided under § 81.30, using...

  8. 40 CFR 280.40 - General requirements for all UST systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... release detection that: (1) Can detect a release from any portion of the tank and the connected... shown in the table) with a probability of detection (Pd) of 0.95 and a probability of false alarm (Pfa) of 0.05. Method Section Date after which Pd/Pfa must be demonstrated Manual Tank Gauging 280.43(b...

  9. 40 CFR 280.40 - General requirements for all UST systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... release detection that: (1) Can detect a release from any portion of the tank and the connected... shown in the table) with a probability of detection (Pd) of 0.95 and a probability of false alarm (Pfa) of 0.05. Method Section Date after which Pd/Pfa must be demonstrated Manual Tank Gauging 280.43(b...

  10. 40 CFR 280.40 - General requirements for all UST systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... release detection that: (1) Can detect a release from any portion of the tank and the connected... shown in the table) with a probability of detection (Pd) of 0.95 and a probability of false alarm (Pfa) of 0.05. Method Section Date after which Pd/Pfa must be demonstrated Manual Tank Gauging 280.43(b...

  11. 40 CFR 280.40 - General requirements for all UST systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... release detection that: (1) Can detect a release from any portion of the tank and the connected... shown in the table) with a probability of detection (Pd) of 0.95 and a probability of false alarm (Pfa) of 0.05. Method Section Date after which Pd/Pfa must be demonstrated Manual Tank Gauging 280.43(b...

  12. 40 CFR 280.40 - General requirements for all UST systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... release detection that: (1) Can detect a release from any portion of the tank and the connected... shown in the table) with a probability of detection (Pd) of 0.95 and a probability of false alarm (Pfa) of 0.05. Method Section Date after which Pd/Pfa must be demonstrated Manual Tank Gauging 280.43(b...

  13. Uncertainty plus Prior Equals Rational Bias: An Intuitive Bayesian Probability Weighting Function

    ERIC Educational Resources Information Center

    Fennell, John; Baddeley, Roland

    2012-01-01

    Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several…

  14. Is there a positive bias in false recognition? Evidence from confabulating amnesia patients.

    PubMed

    Alkathiri, Nura H; Morris, Robin G; Kopelman, Michael D

    2015-10-01

    Although there is some evidence for a positive emotional bias in the content of confabulations in brain-damaged patients, findings have been inconsistent. The present study used the semantic-associates procedure to induce false recall and false recognition in order to examine whether a positive bias would be found in confabulating amnesic patients, relative to non-confabulating amnesic patients and healthy controls. Lists of positive, negative and neutral words were presented in order to induce false recall or false recognition of non-presented (but semantically associated) words. The latter were termed 'critical intrusions'. Thirteen confabulating amnesic patients, 13 non-confabulating amnesic patients and 13 healthy controls were investigated. Confabulating patients falsely recognised a higher proportion of positive (but unrelated) words, compared with non-confabulating patients and healthy controls. No differences were found for recall memory. Signal detection analysis, however, indicated that the positive bias in false recognition might reflect weaker memory in the confabulating amnesic group. This suggests that amnesic patients with weaker memory are more likely to confabulate, and that the content of these confabulations is more likely to be positive. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Study of false positives in 5-ALA induced photodynamic diagnosis of bladder carcinoma

    NASA Astrophysics Data System (ADS)

    Draga, Ronald O. P.; Grimbergen, Matthijs C. M.; Kok, Esther T.; Jonges, Trudy G. N.; Bosch, J. L. H. R.

    2009-02-01

    Photodynamic diagnosis (PDD) is a technique that enhances the detection of tumors during cystoscopy using a photosensitizer which accumulates primarily in cancerous cells and fluoresces when illuminated by violet-blue light. A disadvantage of PDD is its relatively low specificity. In this retrospective study we aimed to identify predictors of false-positive findings in PDD. Factors such as gender, age, recent transurethral resection of bladder tumors (TURBT), previous intravesical therapy (IVT) and urinary tract infections (UTIs) were examined for association with the false-positive rate in a multivariate analysis. Data from 366 procedures in 200 patients were collected. Patients were instilled with 5-aminolevulinic acid (5-ALA) intravesically, and 1253 biopsies were taken from tumors and suspicious lesions. Female gender and TURBT are independent predictors of false positives in PDD; previous intravesical therapy with Bacille Calmette-Guérin is also an important predictor of false positives. The false-positive rate decreases during the first 9-12 weeks after the latest TURBT and the latest intravesical chemotherapy. Although false positives increase shortly after IVT and TURBT, PDD improves diagnostic sensitivity and results in more adequate treatment strategies in a significant number of patients.

  16. Series approximation to probability densities

    NASA Astrophysics Data System (ADS)

    Cohen, L.

    2018-04-01

    One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
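    The failure mode the abstract targets is easy to demonstrate numerically. Below is our own small illustration (not from the paper) showing that a Gram-Charlier A series truncated after the skewness term can dip below zero and therefore stops being a valid density:

    ```python
    # Truncated Gram-Charlier correction of a Gaussian going negative.
    import numpy as np
    from scipy.stats import norm
    from scipy.special import eval_hermitenorm  # probabilists' Hermite polynomials

    x = np.linspace(-5.0, 5.0, 1001)
    skew = 1.2  # deliberately large skewness coefficient
    # f(x) = phi(x) * [1 + (skew / 6) * He_3(x)], truncated after third order
    f = norm.pdf(x) * (1.0 + (skew / 6.0) * eval_hermitenorm(3, x))

    print("minimum of the truncated 'density':", f.min())  # negative => not a pdf
    ```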

  17. Approach to testing growth hormone (GH) secretion in obese subjects.

    PubMed

    Popovic, Vera

    2013-05-01

    Identification of adults with GH deficiency (GHD) is challenging because clinical features of adult GHD are not distinctive and because clinical suspicion must be confirmed by biochemical tests. Adults are selected for testing for adult GHD if they have a high pretest probability of GHD, i.e., if they have hypothalamic-pituitary disease, if they have received cranial irradiation or central nervous system tumor treatment, or if they survived traumatic brain injury or subarachnoid hemorrhage. Testing should only be carried out if a decision has already been made that, if deficiency is found, it will be treated. There are many pharmacological GH stimulation tests for the diagnosis of GHD; however, none fulfill the requirements for an ideal test: having high discriminatory power; being reproducible, safe, convenient, and economical; and not being dependent on confounding factors such as age, gender, nutritional status, and in particular obesity. In obesity, spontaneous GH secretion is reduced, GH clearance is enhanced, and stimulated GH secretion is reduced, causing false-positive results. This functional hyposomatotropism in obesity is fully reversed by weight loss. In conclusion, GH stimulation tests should be avoided in obese subjects with very low pretest probability.

  18. A new method for detecting small and dim targets in starry background

    NASA Astrophysics Data System (ADS)

    Yao, Rui; Zhang, Yanning; Jiang, Lei

    2011-08-01

    Detection of small visible optical space targets is one of the key issues in research on long-range early warning and space debris surveillance. The SNR (signal-to-noise ratio) of the target is very low because of the influence of the imaging device itself, and random noise and background movement also increase the difficulty of target detection. In order to detect small visible optical space targets effectively and rapidly, we propose a novel detection method based on statistical theory. Firstly, we derive a reasonable statistical model of the visible optical space image. Secondly, we extract SIFT (Scale-Invariant Feature Transform) features of the image frames, calculate the transform relationship between frames, and use this relationship to compensate the whole visual field's movement. Thirdly, the influence of stars is removed using an interframe difference method, and a segmentation threshold differentiating candidate targets from noise is found using Otsu's method. Finally, we calculate a statistical quantity to judge whether the target is present at every pixel position in the image. Theoretical analysis shows the relationship between false alarm probability and detection probability at different SNRs. The experimental results show that this method can detect targets efficiently, even when a target passes in front of stars.
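    A hedged reconstruction of this pipeline (ours, not the authors' code) in OpenCV terms: SIFT matches estimate the global motion between frames, the registered frames are differenced so that stars cancel, and Otsu's method thresholds the residual. It assumes grayscale uint8 frames with enough stars to match reliably:

    ```python
    import cv2
    import numpy as np

    def detect_candidates(prev_frame: np.ndarray, frame: np.ndarray) -> np.ndarray:
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(prev_frame, None)
        kp2, des2 = sift.detectAndCompute(frame, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).match(des1, des2)

        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC)   # whole-field motion

        h, w = frame.shape
        warped = cv2.warpPerspective(prev_frame, H, (w, h))
        diff = cv2.absdiff(frame, warped)                 # stars largely cancel
        _, mask = cv2.threshold(diff, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return mask                                       # candidate target pixels
    ```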

  19. The health system impact of false positive newborn screening results for medium-chain acyl-CoA dehydrogenase deficiency: a cohort study.

    PubMed

    Karaceper, Maria D; Chakraborty, Pranesh; Coyle, Doug; Wilson, Kumanan; Kronick, Jonathan B; Hawken, Steven; Davies, Christine; Brownell, Marni; Dodds, Linda; Feigenbaum, Annette; Fell, Deshayne B; Grosse, Scott D; Guttmann, Astrid; Laberge, Anne-Marie; Mhanni, Aizeddin; Miller, Fiona A; Mitchell, John J; Nakhla, Meranda; Prasad, Chitra; Rockman-Greenberg, Cheryl; Sparkes, Rebecca; Wilson, Brenda J; Potter, Beth K

    2016-02-03

    There is no consensus in the literature regarding the impact of false positive newborn screening results on early health care utilization patterns. We evaluated the impact of false positive newborn screening results for medium-chain acyl-CoA dehydrogenase deficiency (MCADD) in a cohort of Ontario infants. The cohort included all children who received newborn screening in Ontario between April 1, 2006 and March 31, 2010. Newborn screening and diagnostic confirmation results were linked to province-wide health care administrative datasets covering physician visits, emergency department visits, and inpatient hospitalizations, to determine health service utilization from April 1, 2006 through March 31, 2012. Incidence rate ratios (IRRs) were used to compare those with false positive results for MCADD to those with negative newborn screening results, stratified by age at service use. We identified 43 infants with a false positive newborn screening result for MCADD during the study period. These infants experienced significantly higher rates of physician visits (IRR: 1.42) and hospitalizations (IRR: 2.32) in the first year of life relative to a screen negative cohort in adjusted analyses. Differences in health services use were not observed after the first year of life. The higher use of some health services among false positive infants during the first year of life may be explained by a psychosocial impact of false positive results on parental perceptions of infant health, and/or by differences in underlying health status. Understanding the impact of false positive newborn screening results can help to inform newborn screening programs in designing support and education for families. This is particularly important as additional disorders are added to expanded screening panels, yielding important clinical benefits for affected children but also a higher frequency of false positive findings.

  20. Logic, probability, and human reasoning.

    PubMed

    Johnson-Laird, P N; Khemlani, Sangeet S; Goodwin, Geoffrey P

    2015-04-01

    This review addresses the long-standing puzzle of how logic and probability fit together in human reasoning. Many cognitive scientists argue that conventional logic cannot underlie deductions, because it never requires valid conclusions to be withdrawn - not even if they are false; it treats conditional assertions implausibly; and it yields many vapid, although valid, conclusions. A new paradigm of probability logic allows conclusions to be withdrawn and treats conditionals more plausibly, although it does not address the problem of vapidity. The theory of mental models solves all of these problems. It explains how people reason about probabilities and postulates that the machinery for reasoning is itself probabilistic. Recent investigations accordingly suggest a way to integrate probability and deduction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Small-target leak detection for a closed vessel via infrared image sequences

    NASA Astrophysics Data System (ADS)

    Zhao, Ling; Yang, Hongjiu

    2017-03-01

    This paper focuses on a leak diagnosis and localization method based on infrared image sequences, addressing the problems of a high probability of false warnings and the negative effect of marginal information in leak detection. An experimental model is established for leak diagnosis and localization on infrared image sequences. A differential background prediction, based on a kernel regression method, is presented to eliminate the negative effect of marginal information from the test vessel. A pipeline filter based on layered voting is designed to reduce the probability of false leak-point warnings. A synthesized leak diagnosis and localization algorithm based on infrared image sequences is then proposed. Experimental results show the effectiveness and potential of the developed techniques.

  2. Analysis of false results in a series of 835 fine needle aspirates of breast lesions.

    PubMed

    Willis, S L; Ramzy, I

    1995-01-01

    To analyze cases of false diagnoses from a large series to help increase the accuracy of fine needle aspiration of palpable breast lesions. The results of FNA of 835 palpable breast lesions were analyzed to determine the reasons for false positive, false negative and false suspicious diagnoses. Of the 835 aspirates, 174 were reported as positive, 549 as negative and 66 as suspicious or atypical but not diagnostic of malignancy. Forty-six cases were considered unsatisfactory. Tissue was available for comparison in 286 cases. The cytologic diagnoses in these cases were reported as follows: positive, 125 (43.7%); suspicious, 33 (11.5%); atypical, 18 (6.2%); negative, 92 (32%); and unsatisfactory, 18 (6.2%). There was one false positive diagnosis, yielding a false positive rate of 0.8%. This lesion was a case of fibrocystic change with hyperplasia, focal fat necrosis and reparative atypia. There were 14 false negative cases, resulting in a false negative rate of 13.2%. Nearly all these cases were sampling errors and included infiltrating ductal carcinomas (9), ductal carcinomas in situ (2), infiltrating lobular carcinomas (2) and tubular carcinoma (1). Most of the suspicious and atypical lesions proved to be carcinomas (35/50). The remainder were fibroadenomas (6), fibrocystic change (4), gynecomastia (2), adenosis (2) and granulomatous mastitis (1). A positive diagnosis of malignancy by FNA is reliable in establishing the diagnosis and planning the treatment of breast cancer. The false-positive rate is very low, with only a single case reported in 835 aspirates. Most false negatives are due to sampling and not to interpretive difficulties. The category "suspicious but not diagnostic of malignancy" serves a useful purpose in management of patients with breast lumps.

  3. Decryption of pure-position permutation algorithms.

    PubMed

    Zhao, Xiao-Yu; Chen, Gang; Zhang, Dan; Wang, Xiao-Hong; Dong, Guang-Chang

    2004-07-01

    Pure position permutation image encryption algorithms, which are commonly used for image encryption and are investigated in this work, are unfortunately fragile under known-plaintext attack. In view of this weakness, we put forward an effective decryption algorithm for all pure-position permutation algorithms. First, a summary of pure position permutation image encryption algorithms is given by introducing the concept of ergodic matrices. Then, using probability theory and algebraic principles, the decryption probability of pure-position permutation algorithms is verified theoretically. Finally, by defining the operation system of fuzzy ergodic matrices, we improve a specific decryption algorithm, and some simulation results are shown.
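    The known-plaintext weakness is simple to see in code. A toy demonstration (ours): one plaintext/ciphertext pair pins down the secret permutation at every position whose pixel value is unique:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 16
    perm = rng.permutation(n)               # secret position permutation
    plain = rng.integers(0, 256, size=n)    # known plaintext "image"
    cipher = plain[perm]                    # encryption only reorders positions

    recovered = {}
    for j, value in enumerate(cipher):
        sources = np.flatnonzero(plain == value)
        if len(sources) == 1:               # unique value -> unambiguous mapping
            recovered[j] = int(sources[0])

    print(f"recovered {len(recovered)}/{n} entries of the permutation from one pair")
    ```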

  4. Uncertainty in biological monitoring: a framework for data collection and analysis to account for multiple sources of sampling bias

    USGS Publications Warehouse

    Ruiz-Gutierrez, Viviana; Hooten, Melvin B.; Campbell Grant, Evan H.

    2016-01-01

    Biological monitoring programmes are increasingly relying upon large volumes of citizen-science data to improve the scope and spatial coverage of information, challenging the scientific community to develop design and model-based approaches to improve inference. Recent statistical models in ecology have been developed to accommodate false-negative errors, although current work points to false-positive errors as equally important sources of bias. This is of particular concern for the success of any monitoring programme given that rates as small as 3% could lead to the overestimation of the occurrence of rare events by as much as 50%, and even small false-positive rates can severely bias estimates of occurrence dynamics. We present an integrated, computationally efficient Bayesian hierarchical model to correct for false-positive and false-negative errors in detection/non-detection data. Our model combines independent, auxiliary data sources with field observations to improve the estimation of false-positive rates, when a subset of field observations cannot be validated a posteriori or assumed as perfect. We evaluated the performance of the model across a range of occurrence rates, false-positive and false-negative errors, and quantity of auxiliary data. The model performed well under all simulated scenarios, and we were able to identify critical auxiliary data characteristics which resulted in improved inference. We applied our false-positive model to a large-scale, citizen-science monitoring programme for anurans in the north-eastern United States, using auxiliary data from an experiment designed to estimate false-positive error rates. Not correcting for false-positive rates resulted in biased estimates of occupancy in 4 of the 10 anuran species we analysed, leading to an overestimation of the average number of occupied survey routes by as much as 70%. The framework we present for data collection and analysis is able to efficiently provide reliable inference for occurrence patterns using data from a citizen-science monitoring programme. However, our approach is applicable to data generated by any type of research and monitoring programme, independent of skill level or scale, when effort is placed on obtaining auxiliary information on false-positive rates.
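    The ~50% figure quoted above follows from one line of arithmetic. A quick check (ours, with an assumed 6% true occurrence and perfect detection at occupied sites):

    ```python
    # Apparent occurrence under a per-survey false-positive rate p10.
    psi, p11, p10 = 0.06, 1.0, 0.03  # true occupancy, detection prob., fp rate (assumed)
    apparent = psi * p11 + (1 - psi) * p10
    print(f"true {psi:.0%} -> apparent {apparent:.2%} "
          f"({(apparent - psi) / psi:+.0%} relative error)")
    ```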

  5. Evaluating Perceived Probability of Threat-Relevant Outcomes and Temporal Orientation in Flying Phobia.

    PubMed

    Mavromoustakos, Elena; Clark, Gavin I; Rock, Adam J

    2016-01-01

    Probability bias regarding threat-relevant outcomes has been demonstrated across anxiety disorders but has not been investigated in flying phobia. Individual temporal orientation (time perspective) may be hypothesised to influence estimates of negative outcomes occurring. The present study investigated whether probability bias could be demonstrated in flying phobia and whether probability estimates of negative flying events were predicted by time perspective. Sixty flying-phobic and fifty-five non-flying-phobic adults were recruited to complete an online questionnaire. Participants completed the Flight Anxiety Scale, the Probability Scale (measuring perceived probability of flying-negative, general-negative and general-positive events) and the Past-Negative, Future and Present-Hedonistic subscales of the Zimbardo Time Perspective Inventory (variables argued to predict mental travel forward and backward in time). The flying phobic group estimated the probability of flying-negative and general-negative events occurring as significantly higher than did non-flying phobics. Past-Negative scores (positively) and Present-Hedonistic scores (negatively) predicted probability estimates of flying-negative events. The Future Orientation subscale did not significantly predict probability estimates. This study is the first to demonstrate probability bias for threat-relevant outcomes in flying phobia. Results suggest that time perspective may influence the perceived probability of threat-relevant outcomes, but the nature of this relationship remains to be determined.

  6. Evaluating Perceived Probability of Threat-Relevant Outcomes and Temporal Orientation in Flying Phobia

    PubMed Central

    Mavromoustakos, Elena; Clark, Gavin I.; Rock, Adam J.

    2016-01-01

    Probability bias regarding threat-relevant outcomes has been demonstrated across anxiety disorders but has not been investigated in flying phobia. Individual temporal orientation (time perspective) may be hypothesised to influence estimates of negative outcomes occurring. The present study investigated whether probability bias could be demonstrated in flying phobia and whether probability estimates of negative flying events were predicted by time perspective. Sixty flying-phobic and fifty-five non-flying-phobic adults were recruited to complete an online questionnaire. Participants completed the Flight Anxiety Scale, the Probability Scale (measuring perceived probability of flying-negative, general-negative and general-positive events) and the Past-Negative, Future and Present-Hedonistic subscales of the Zimbardo Time Perspective Inventory (variables argued to predict mental travel forward and backward in time). The flying phobic group estimated the probability of flying-negative and general-negative events occurring as significantly higher than did non-flying phobics. Past-Negative scores (positively) and Present-Hedonistic scores (negatively) predicted probability estimates of flying-negative events. The Future Orientation subscale did not significantly predict probability estimates. This study is the first to demonstrate probability bias for threat-relevant outcomes in flying phobia. Results suggest that time perspective may influence the perceived probability of threat-relevant outcomes, but the nature of this relationship remains to be determined. PMID:27557054

  7. [Predictive factors of contamination in a blood culture with bacterial growth in an Emergency Department].

    PubMed

    Hernández-Bou, S; Trenchs Sainz de la Maza, V; Esquivel Ojeda, J N; Gené Giralt, A; Luaces Cubells, C

    2015-06-01

    The aim of this study is to identify predictive factors of bacterial contamination in positive blood cultures (BC) collected in an emergency department. A prospective, observational and analytical study was conducted on febrile children aged up to 36 months, who had no risk factors for bacterial infection, and had a BC collected in the Emergency Department between November 2011 and October 2013 in which bacterial growth was detected. The potential predictors of BC contamination analysed were: maximum temperature, time to positivity, initial Gram stain result, white blood cell count, absolute neutrophil count, band count, and C-reactive protein (CRP). Bacteria grew in 169 BC. Thirty (17.8%) were finally considered true positives and 139 (82.2%) false positives. All the potential predictors analysed, except maximum temperature, showed significant differences between true positives and false positives. CRP value, time to positivity, and initial Gram stain result are the best predictors of false positives in BC. The positive predictive values of a CRP value ≤30 mg/L, a BC time to positivity ≥16 h, and an initial Gram stain suggestive of a contaminant in predicting a false positive are 95.1%, 96.9% and 97.5%, respectively. When all 3 conditions are applied, their positive predictive value is 100%. Four patients (8.3%) with a false-positive BC who had been discharged home were re-evaluated in the Emergency Department. The majority of positive BC obtained in the Emergency Department were finally considered false positives. Initial Gram stain, time to positivity, and CRP results are valuable diagnostic tests in distinguishing between true positives and false positives in BC. The early detection of false positives will allow their negative consequences to be minimised. Copyright © 2014 Asociación Española de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.
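    The three-condition rule reported above translates directly into a screening helper. A hedged sketch (ours; the thresholds are those reported in the abstract, the function name is hypothetical):

    ```python
    def likely_contaminant(crp_mg_l: float, hours_to_positivity: float,
                           gram_suggests_contaminant: bool) -> bool:
        """True when all three predictors of a false-positive blood culture hold
        (CRP <= 30 mg/L, time to positivity >= 16 h, contaminant-like Gram stain);
        the combined rule had a 100% positive predictive value in this study."""
        return (crp_mg_l <= 30.0
                and hours_to_positivity >= 16.0
                and gram_suggests_contaminant)

    print(likely_contaminant(12.0, 22.5, True))  # True -> probable false positive
    ```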

  8. Assessing the clinical benefit of nuclear matrix protein 22 in the surveillance of patients with nonmuscle-invasive bladder cancer and negative cytology: a decision-curve analysis.

    PubMed

    Shariat, Shahrokh F; Savage, Caroline; Chromecki, Thomas F; Sun, Maxine; Scherr, Douglas S; Lee, Richard K; Lughezzani, Giovanni; Remzi, Mesut; Marberger, Michael J; Karakiewicz, Pierre I; Vickers, Andrew J

    2011-07-01

    Several studies have demonstrated that abnormal levels of nuclear matrix protein 22 (NMP22) are associated with bladder cancer and have led to the approval of NMP22 as a urinary biomarker by the US Food and Drug Administration. Nonetheless, the clinical significance of NMP22 remains unclear. The objective of this study was to use decision analysis to determine whether NMP22 improves medical decision-making. The current study included 2222 patients who had a history of nonmuscle-invasive bladder cancer and current negative cytology. The authors developed models to predict cancer recurrence or progression to muscle-invasive disease using voided NMP22 levels, cystoscopy, age, and sex. Clinical net benefit was calculated by summing the benefits (true positives), subtracting the harms (false positives), and weighting these values by the threshold probability at which a patient or clinician would opt for cystoscopy. After cystoscopy, 581 patients (26%) had cancer identified. The NMP22 level was associated significantly with bladder cancer recurrence and progression (P < .001 for both). The use of NMP22 in a model with age and sex was associated with better patient outcomes than performing cystoscopy on everyone for threshold probabilities > 8% for recurrence and > 3% for progression. Only offering cystoscopy to those who had a risk > 15% reduced the number of cystoscopies by 229 while missing only 25 cancer recurrences per 1000 men with negative cytology. The current study was limited by its multicenter design. For clinicians who would perform a cystoscopy at a threshold of 5% for recurrence or 1% for progression, NMP22 did not aid clinical decision-making. For less risk-averse clinicians who would only perform a cystoscopy at a threshold probability > 8% for recurrence or > 3% for progression, NMP22 helped to indicate which patients required cystoscopy and which could be spared this procedure. Copyright © 2011 American Cancer Society.
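    The net-benefit quantity described above (true positives minus false positives, weighted by the threshold probability) has a standard form. A minimal sketch with hypothetical counts, not data from the study:

    ```python
    def net_benefit(tp: int, fp: int, n: int, p_t: float) -> float:
        """Decision-curve net benefit at threshold probability p_t."""
        return tp / n - (fp / n) * (p_t / (1.0 - p_t))

    # Hypothetical counts for illustration only.
    print(round(net_benefit(tp=550, fp=400, n=2222, p_t=0.08), 4))
    ```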

  9. A Dual-Channel Acquisition Method Based on Extended Replica Folding Algorithm for Long Pseudo-Noise Code in Inter-Satellite Links.

    PubMed

    Zhao, Hongbo; Chen, Yuying; Feng, Wenquan; Zhuang, Chen

    2018-05-25

    Inter-satellite links are an important component of the new generation of satellite navigation systems, characterized by low signal-to-noise ratio (SNR), complex electromagnetic interference, and the short time slot allotted to each satellite, all of which bring difficulties to the acquisition stage. The inter-satellite links in both the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS) adopt a long-code spread-spectrum system. However, long code acquisition is a difficult and time-consuming task due to the long code period. Traditional folding methods such as the extended replica folding acquisition search technique (XFAST) and direct averaging are largely restricted because of code Doppler and the additional SNR loss caused by replica folding. The dual folding method (DF-XFAST) and the dual-channel method have been proposed to achieve long code acquisition in low-SNR and high-dynamic situations, respectively, but the former is easily affected by code Doppler and the latter is not fast enough. Considering the environment of inter-satellite links and the problems of existing algorithms, this paper proposes a new long code acquisition algorithm named the dual-channel acquisition method based on the extended replica folding algorithm (DC-XFAST). This method employs dual channels for verification. Each channel contains an incoming signal block. Local code samples are folded and zero-padded to the length of the incoming signal block. After a circular FFT operation, the correlation results contain two peaks of the same magnitude at a specified relative position. The detection process is eased by finding the two largest values, and the verification takes all full and partial peaks into account. Numerical results reveal that the DC-XFAST method improves acquisition performance while acquisition speed is guaranteed. It has a significantly higher acquisition probability than the folding methods XFAST and DF-XFAST and, with the advantage of higher detection probability and lower false alarm probability, a lower mean acquisition time than traditional XFAST, DF-XFAST and zero-padding.
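    The core folding-plus-circular-FFT step that XFAST-style methods, including DC-XFAST, share can be sketched as follows (our simplified reconstruction; the variable names are ours and code Doppler is ignored):

    ```python
    import numpy as np

    def folded_circular_correlation(incoming: np.ndarray,
                                    local_code: np.ndarray,
                                    fold: int) -> np.ndarray:
        """Fold the local code `fold` times, zero-pad it to the incoming block
        length, and correlate via the circular-FFT identity. Correlation peaks
        locate the code phase modulo the folded length."""
        n = len(local_code) // fold
        folded = local_code[:n * fold].reshape(fold, n).sum(axis=0)  # replica folding
        padded = np.zeros(len(incoming))
        padded[:n] = folded                                          # zero-padding
        spectrum = np.fft.fft(incoming) * np.conj(np.fft.fft(padded))
        return np.abs(np.fft.ifft(spectrum))                         # correlation peaks
    ```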

  10. Interpreting the results of the Semmes-Weinstein monofilament test: accounting for false-positive answers in the international consensus on the diabetic foot protocol by a new model.

    PubMed

    Slater, Robert A; Koren, Shlomit; Ramot, Yoram; Buchs, Andreas; Rapoport, Micha J

    2014-01-01

    The Semmes-Weinstein monofilament is the most widely used test to diagnose the loss of protective sensation (LOPS). The commonly used protocol of the International Consensus on the Diabetic Foot includes a 'sham' application that allows for false-positive answers. We sought to study the heretofore unexamined significance of false-positive answers. Forty-five patients with diabetes and a history of pedal ulceration (Group I) and 81 patients with diabetes but no history of ulceration (Group II) were studied. The three original sites of the International Consensus on the Diabetic Foot, at the hallux, 1st metatarsal and 5th metatarsal areas, were used. At each location, the test was performed three times: two actual and one 'sham' application. Scores were graded from 0 to 3 based upon correct responses. Determination of LOPS was performed with and without counting a false-positive answer as a minus-1 score. False-positive responses were found in a significant percentage of patients with and without a history of ulceration. Introducing false-positive results as minus 1 into the test outcome significantly increased the number of patients diagnosed with LOPS in both groups. False-positive answers can significantly affect Semmes-Weinstein monofilament test results and the diagnosis of LOPS. A model that accounts for false-positive answers is offered. Copyright © 2013 John Wiley & Sons, Ltd.
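    The scoring variant under study is easy to express directly. A small sketch (ours; the clinical protocol around it is simplified):

    ```python
    def site_score(correct_responses: int, sham_false_positive: bool,
                   penalize_false_positive: bool = True) -> int:
        """Score one test site, graded 0-3 over three applications; the proposed
        model subtracts 1 when the patient responds to the sham application."""
        score = correct_responses
        if sham_false_positive and penalize_false_positive:
            score -= 1
        return score

    # The same raw responses can flip a site across a diagnostic cutoff:
    print(site_score(2, True, penalize_false_positive=False),  # classic scoring: 2
          site_score(2, True))                                 # adjusted scoring: 1
    ```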

  11. A Semi-Automated Machine Learning Algorithm for Tree Cover Delineation from 1-m Naip Imagery Using a High Performance Computing Architecture

    NASA Astrophysics Data System (ADS)

    Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.

    2014-12-01

    Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to terabytes of data, typical in these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of Continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field (CRF), which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model shows the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.

  12. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    PubMed

    Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K

    2016-11-25

    Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
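    The attenuation mechanism the study quantifies can be reproduced in a few lines. A compact sketch (ours, with made-up error variance and baseline rate rather than the study's empirical error structure):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    days = 1460
    true_x = rng.normal(size=days)                           # standardized true exposure
    observed_x = true_x + rng.normal(scale=0.8, size=days)   # error-prone exposure

    beta = np.log(1.05)                                      # true RR = 1.05 per unit
    counts = rng.poisson(np.exp(np.log(50.0) + beta * true_x))

    fit = sm.GLM(counts, sm.add_constant(observed_x),
                 family=sm.families.Poisson()).fit()
    print("true RR 1.05 vs estimated RR:", round(float(np.exp(fit.params[1])), 3))
    ```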

  13. Ordering blood tests for patients with unexplained fatigue in general practice: what does it yield? Results of the VAMPIRE trial.

    PubMed

    Koch, Hèlen; van Bokhoven, Marloes A; ter Riet, Gerben; van Alphen-Jager, Jm Tineke; van der Weijden, Trudy; Dinant, Geert-Jan; Bindels, Patrick J E

    2009-04-01

    Unexplained fatigue is frequently encountered in general practice. Because of the low prior probability of underlying somatic pathology, the positive predictive value of abnormal (blood) test results is limited in such patients. The study objectives were to investigate the relationship between established diagnoses and the occurrence of abnormal blood test results among patients with unexplained fatigue; to survey the effects of the postponement of test ordering on this relationship; and to explore consultation-related determinants of abnormal test results. Cluster randomised trial. General practices of 91 GPs in the Netherlands. GPs were randomised to immediate or postponed blood-test ordering. Patients with new unexplained fatigue were included. Limited and expanded sets of blood tests were ordered either immediately or after 4 weeks. Diagnoses during the 1-year follow-up period were extracted from medical records. Two-by-two tables were generated. To establish independent determinants of abnormal test results, a multivariate logistic regression model was used. Data of 325 patients were analysed (71% women; mean age 41 years). Eight per cent of patients had a somatic illness that was detectable by blood-test ordering. The number of false-positive test results increased in particular in the expanded test set. Patients rarely re-consulted after 4 weeks. Test postponement did not affect the distribution of patients over the two-by-two tables. No independent consultation-related determinants of abnormal test results were found. Results support restricting the number of tests ordered because of the increased risk of false-positive test results from expanded test sets. Although the number of re-consulting patients was small, the data do not refute the advice to postpone blood-test ordering for medical reasons in patients with unexplained fatigue in general practice.
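    The multiplicity problem behind the advice above is worth one line of arithmetic: if each test's reference range covers 95% of healthy people, the chance of at least one abnormal result grows quickly with the size of the test set (a simplified calculation of ours, assuming independent tests):

    ```python
    for k in (1, 5, 15):
        print(f"{k:>2} independent tests: P(at least one abnormal) = {1 - 0.95 ** k:.1%}")
    ```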

  14. Atrial Fibrillation Detection During 24-Hour Ambulatory Blood Pressure Monitoring: Comparison With 24-Hour Electrocardiography.

    PubMed

    Kollias, Anastasios; Destounis, Antonios; Kalogeropoulos, Petros; Kyriakoulis, Konstantinos G; Ntineri, Angeliki; Stergiou, George S

    2018-07-01

    This study assessed the diagnostic accuracy of a novel 24-hour ambulatory blood pressure (ABP) monitor (Microlife WatchBP O3 Afib) with an implemented algorithm for automated atrial fibrillation (AF) detection during each ABP measurement. One hundred subjects (mean age 70.6±8.2 [SD] years; men 53%; hypertensives 85%; 17 with permanent AF; 4 with paroxysmal AF; and 79 non-AF) had simultaneous 24-hour ABP monitoring and 24-hour Holter monitoring. Among a total of 6410 valid ABP readings, 1091 (17%) were taken in ECG AF rhythm. In reading-to-reading ABP analysis, the sensitivity, specificity, and accuracy of ABP monitoring in detecting AF were 93%, 87%, and 88%, respectively. In non-AF subjects, 12.8% of the 24-hour ABP readings indicated false-positive AF, of which 27% were taken during supraventricular premature beats. There was a strong association between the proportion of false-positive AF readings and that of supraventricular premature beats (r = 0.67; P < 0.001). Receiver operating characteristic curve analysis revealed that in paroxysmal AF and non-AF subjects, AF-positive readings at 26% during 24-hour ABP monitoring had 100%/85% sensitivity/specificity (area under the curve 0.91; P < 0.01) for detecting paroxysmal AF. These findings suggest that in elderly hypertensives, a novel 24-hour ABP monitor with an AF detector has high sensitivity and moderate specificity for AF screening during routine ABP monitoring. Thus, in elderly hypertensives, a 24-hour ABP recording with at least 26% of the readings suggesting AF indicates a high probability of AF and should be regarded as an indication for performing 24-hour Holter monitoring. © 2018 American Heart Association, Inc.
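    The proposed screening rule reduces to a single ratio check. A trivial sketch (ours) of the 26% cutoff:

    ```python
    def flag_for_holter(af_positive_readings: int, valid_readings: int,
                        cutoff: float = 0.26) -> bool:
        """Refer for 24-hour Holter monitoring when the fraction of AF-positive
        ABP readings reaches the ROC-derived 26% cutoff."""
        return af_positive_readings / valid_readings >= cutoff

    print(flag_for_holter(af_positive_readings=19, valid_readings=64))  # True
    ```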

  15. Reducing false-positive incidental findings with ensemble genotyping and logistic regression based variant filtering methods.

    PubMed

    Hwang, Kyu-Baek; Lee, In-Hee; Park, Jin-Ho; Hambuch, Tina; Choe, Yongjoon; Kim, MinHyeok; Lee, Kyungjoon; Song, Taemin; Neu, Matthew B; Gupta, Neha; Kohane, Isaac S; Green, Robert C; Kong, Sek Won

    2014-08-01

    As whole genome sequencing (WGS) uncovers variants associated with rare and common diseases, an immediate challenge is to minimize false-positive findings due to sequencing and variant calling errors. False positives can be reduced by combining results from orthogonal sequencing methods, but this is costly. Here, we present variant filtering approaches using logistic regression (LR) and ensemble genotyping to minimize false positives without sacrificing sensitivity. We evaluated the methods using paired WGS datasets of an extended family prepared using two sequencing platforms and a validated set of variants in NA12878. Using LR or ensemble genotyping based filtering, false-negative rates were significantly reduced by 1.1- to 17.8-fold at the same levels of false discovery rates (5.4% for heterozygous and 4.5% for homozygous single nucleotide variants (SNVs); 30.0% for heterozygous and 18.7% for homozygous insertions; 25.2% for heterozygous and 16.6% for homozygous deletions) compared to the filtering based on genotype quality scores. Moreover, ensemble genotyping excluded > 98% (105,080 of 107,167) of false positives while retaining > 95% (897 of 937) of true positives in de novo mutation (DNM) discovery in NA12878, and performed better than a consensus method using two sequencing platforms. Our proposed methods were effective in prioritizing phenotype-associated variants, and ensemble genotyping would be essential to minimize false-positive DNM candidates. © 2014 WILEY PERIODICALS, INC.

  16. Reducing false positive incidental findings with ensemble genotyping and logistic regression-based variant filtering methods

    PubMed Central

    Hwang, Kyu-Baek; Lee, In-Hee; Park, Jin-Ho; Hambuch, Tina; Choi, Yongjoon; Kim, MinHyeok; Lee, Kyungjoon; Song, Taemin; Neu, Matthew B.; Gupta, Neha; Kohane, Isaac S.; Green, Robert C.; Kong, Sek Won

    2014-01-01

    As whole genome sequencing (WGS) uncovers variants associated with rare and common diseases, an immediate challenge is to minimize false positive findings due to sequencing and variant calling errors. False positives can be reduced by combining results from orthogonal sequencing methods, but this is costly. Here we present variant filtering approaches using logistic regression (LR) and ensemble genotyping to minimize false positives without sacrificing sensitivity. We evaluated the methods using paired WGS datasets of an extended family prepared using two sequencing platforms and a validated set of variants in NA12878. Using LR or ensemble genotyping based filtering, false negative rates were significantly reduced by 1.1- to 17.8-fold at the same levels of false discovery rates (5.4% for heterozygous and 4.5% for homozygous SNVs; 30.0% for heterozygous and 18.7% for homozygous insertions; 25.2% for heterozygous and 16.6% for homozygous deletions) compared to the filtering based on genotype quality scores. Moreover, ensemble genotyping excluded > 98% (105,080 of 107,167) of false positives while retaining > 95% (897 of 937) of true positives in de novo mutation (DNM) discovery, and performed better than a consensus method using two sequencing platforms. Our proposed methods were effective in prioritizing phenotype-associated variants, and ensemble genotyping would be essential to minimize false positive DNM candidates. PMID:24829188
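    The logistic-regression filtering idea in the two records above can be sketched generically: per-variant features feed a classifier whose posterior probability is thresholded to suppress false positives. A hypothetical illustration on synthetic features, not the authors' pipeline:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n = 5000
    X = rng.normal(size=(n, 3))  # e.g., depth, genotype quality, strand bias (assumed)
    is_true = (X @ np.array([1.0, 1.5, -0.8]) + rng.logistic(size=n) > 0).astype(int)

    clf = LogisticRegression().fit(X, is_true)
    p_true = clf.predict_proba(X)[:, 1]

    keep = p_true > 0.9          # stringent cutoff to suppress false positives
    tp_kept = (keep & (is_true == 1)).sum() / max((is_true == 1).sum(), 1)
    print(f"kept {keep.mean():.1%} of calls; true-positive retention {tp_kept:.1%}")
    ```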

  17. A Performance Comparison on the Probability Plot Correlation Coefficient Test using Several Plotting Positions for GEV Distribution.

    NASA Astrophysics Data System (ADS)

    Ahn, Hyunjun; Jung, Younghun; Om, Ju-Seong; Heo, Jun-Haeng

    2014-05-01

    Selecting an appropriate probability distribution is very important in statistical hydrology. A goodness-of-fit test is a statistical method that selects an appropriate probability model for a given data set. The probability plot correlation coefficient (PPCC) test, one such goodness-of-fit test, was originally developed for the normal distribution and has since been widely applied to other probability models. The PPCC test is regarded as one of the best goodness-of-fit tests because it shows relatively high rejection power. In this study, we focus on PPCC tests for the GEV distribution, which is widely used worldwide. Several plotting position formulas have been suggested for the GEV model; however, PPCC statistics are derived only for those plotting position formulas (Goel and De; In-na and Nguyen; Kim et al.) in which the skewness coefficient (or shape parameter) is included. Regression equations are then derived as a function of the shape parameter and sample size for a given significance level. In addition, the rejection powers of these formulas are compared using Monte Carlo simulation. Keywords: goodness-of-fit test, probability plot correlation coefficient test, plotting position, Monte Carlo simulation. ACKNOWLEDGEMENTS: This research was supported by a grant, 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57], from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
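    The PPCC statistic itself is just the correlation between the ordered sample and distribution quantiles evaluated at plotting positions. A minimal sketch (ours; the Gringorten-style constant a = 0.44 is an assumption, and the paper's GEV-specific formulas and regression equations are not reproduced):

    ```python
    import numpy as np
    from scipy.stats import genextreme

    def ppcc_gev(sample: np.ndarray, shape: float, a: float = 0.44) -> float:
        """PPCC for a GEV fit using plotting positions p_i = (i - a) / (n + 1 - 2a)."""
        x = np.sort(sample)
        n = len(x)
        p = (np.arange(1, n + 1) - a) / (n + 1 - 2 * a)
        q = genextreme.ppf(p, shape)          # theoretical GEV quantiles
        return float(np.corrcoef(x, q)[0, 1])

    sample = genextreme.rvs(-0.1, size=100, random_state=3)
    print("PPCC:", round(ppcc_gev(sample, -0.1), 4))  # close to 1 for a good fit
    ```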

  18. Fostering Positive Attitude in Probability Learning Using Graphing Calculator

    ERIC Educational Resources Information Center

    Tan, Choo-Kim; Harji, Madhubala Bava; Lau, Siong-Hoe

    2011-01-01

    Although a plethora of research evidence highlights positive and significant outcomes of the incorporation of the Graphing Calculator (GC) in mathematics education, its use in the teaching and learning process appears to be limited. The obvious need to revisit the teaching and learning of Probability has resulted in this study, i.e. to incorporate…

  19. Time Neutron Technique for UXO Discrimination

    DTIC Science & Technology

    2010-12-01

    Acronym glossary (excerpt, recovered from garbled text): …: mixture of TNT and RDX; C-4: Composition 4, military plastic explosive; CFD: Constant Fraction Discriminator; cps: counts per second; CsI: inorganic [scintillator]; PDFs: Probability Density Functions; PET: Positron Emission Tomography; Pfa: Probability of False Alarm; PFTNA: Pulsed Fast/Thermal Neutron Analysis; PMTs: …. The report addresses identifying the ordnance type (rocket, mortar, projectile, etc.) and what filler material it contains (inert or empty, practice, HE, illumination, chemical, …).

  20. Frame synchronization methods based on channel symbol measurements

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.

    1989-01-01

    The current DSN frame synchronization procedure is based on monitoring the decoded bit stream for the appearance of a sync marker sequence that is transmitted once every data frame. The possibility of obtaining frame synchronization by processing the raw received channel symbols rather than the decoded bits is explored. Performance results are derived for three channel symbol sync methods, and these are compared with results for decoded bit sync methods reported elsewhere. It is shown that each class of methods has advantages or disadvantages under different assumptions on the frame length, the global acquisition strategy, and the desired measure of acquisition timeliness. It is shown that the sync statistics based on decoded bits are superior to the statistics based on channel symbols, if the desired operating region utilizes a probability of miss many orders of magnitude higher than the probability of false alarm. This operating point is applicable for very large frame lengths and minimal frame-to-frame verification strategy. On the other hand, the statistics based on channel symbols are superior if the desired operating point has a miss probability only a few orders of magnitude greater than the false alarm probability. This happens for small frames or when frame-to-frame verifications are required.
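    The miss/false-alarm asymmetry analyzed above can be illustrated with a toy correlation detector (ours): declare sync when the correlation of a received block with the marker crosses a threshold, and estimate both error probabilities by simulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    marker = rng.choice([-1.0, 1.0], size=32)   # hypothetical 32-symbol sync marker
    amplitude, trials, threshold = 0.8, 20000, 20.0

    stat_present = (amplitude * marker + rng.normal(size=(trials, 32))) @ marker
    stat_absent = rng.normal(size=(trials, 32)) @ marker

    p_miss = (stat_present < threshold).mean()
    p_false_alarm = (stat_absent >= threshold).mean()
    print(f"P(miss) = {p_miss:.4f}, P(false alarm) = {p_false_alarm:.5f}")
    ```

    Moving the threshold trades one error for the other, which is exactly why the preferred statistic depends on the operating point the abstract describes.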

  1. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance

    USGS Publications Warehouse

    Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.

    2017-01-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
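
    The covariance problem the authors address can be seen in a toy simulation: when two paired methods share a latent site-visit effect, an estimator that multiplies per-method miss probabilities (the independence assumption) misstates the chance that at least one method detects an occupying animal. A minimal sketch, with all detection probabilities invented:

    ```python
    # Toy illustration of detection covariance between paired survey methods.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100000                                   # occupied site-visits
    shared = rng.random(n) < 0.5                 # latent effect, e.g. animal nearby
    p_a = np.where(shared, 0.8, 0.2)             # camera detection probability
    p_b = np.where(shared, 0.7, 0.1)             # snow-track detection probability
    det_a = rng.random(n) < p_a
    det_b = rng.random(n) < p_b

    p_any_true = (det_a | det_b).mean()          # actual joint detection rate
    p_any_indep = 1 - (1 - det_a.mean()) * (1 - det_b.mean())  # independence assumption
    print(f"P(detected by either): true={p_any_true:.3f}  naive={p_any_indep:.3f}")
    ```

    With these numbers the naive calculation overstates the joint detection probability, which in an occupancy model translates into biased occurrence estimates, the failure mode the reformulated models are designed to avoid.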

  2. The correlation between concentrations of zolpidem and benzodiazepines in segmental hair samples and use patterns.

    PubMed

    Kim, Hyojeong; Lee, Sangeun; In, Sanghwan; Park, Meejung; Cho, Sungnam; Shin, Junguk; Lee, Hunjoo; Han, Eunyoung

    2018-01-01

    The aim of this study was to investigate the correlation between histories of zolpidem and benzodiazepine use and their concentrations in hair as determined by segmental hair analysis, that is, by analyzing hair samples taken 0-1, 1-2, 2-3, 3-4, 4-5, and 5-6 cm (and so on) from the scalp, as 0-3 cm segments, and as whole hair. Of the 23 hair samples examined, 18 were collected from patients in a rehabilitation program and five were from patients who had taken zolpidem only once by prescription. All 23 patients provided written informed consent after reviewing the research plan, described their zolpidem and benzodiazepine use histories accurately, and provided hair samples, which were weighed, washed, cut into lengths of <1 mm, and extracted in 100% methanol for 16 h (diazepam-d5 was used as an internal standard). Extracts were evaporated under reduced pressure and reconstituted with aqueous methanol (1:1 v/v). These extracts (10 μL) were analyzed by liquid chromatography/tandem mass spectrometry (LC-MS/MS). The method was validated by determining LOD, LOQ, calibration curves, intra- and inter-day accuracies, precisions, matrix effects, process efficiencies, extraction efficiencies, and processed sample stabilities. Across 595 one-centimeter hair segments, the qualitative correlation between zolpidem and benzodiazepine use and hair concentrations showed 61.59% positive and 86.71% negative agreement. Good qualitative correlations were observed between drug use and detection in hair; false positivity and false negativity were very low. Of the hair samples taken from patients in a rehabilitation program, subject nos. 4, 5, and 12 had correlation coefficients of 0.68, 0.54, and 0.71, respectively, for the relationship between zolpidem use and the concentration of zolpidem in hair. For the five patients taking only a single dose of zolpidem (10 mg), the average zolpidem concentrations in hair were 20, 15, and 40 pg/mg after 5, 30, and 60 days, respectively. This study shows a relationship between history of zolpidem and benzodiazepine use and their concentrations in 1 cm hair segments. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Physician Bayesian updating from personal beliefs about the base rate and likelihood ratio.

    PubMed

    Rottman, Benjamin Margolin

    2017-02-01

    Whether humans can accurately make decisions in line with Bayes' rule has been one of the most important yet contentious topics in cognitive psychology. Though a number of paradigms have been used for studying Bayesian updating, rarely have subjects been allowed to use their own preexisting beliefs about the prior and the likelihood. A study is reported in which physicians judged the posttest probability of a diagnosis for a patient vignette after receiving a test result, and the physicians' posttest judgments were compared to the normative posttest calculated from their own beliefs in the sensitivity and false positive rate of the test (likelihood ratio) and prior probability of the diagnosis. On the one hand, the posttest judgments were strongly related to the physicians' beliefs about both the prior probability as well as the likelihood ratio, and the priors were used considerably more strongly than in previous research. On the other hand, both the prior and the likelihoods were still not used quite as much as they should have been, and there was evidence of other nonnormative aspects to the updating, such as updating independent of the likelihood beliefs. By focusing on how physicians use their own prior beliefs for Bayesian updating, this study provides insight into how well experts perform probabilistic inference in settings in which they rely upon their own prior beliefs rather than experimenter-provided cues. It suggests that there is reason to be optimistic about experts' abilities, but that there is still considerable need for improvement.
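
    The normative benchmark the physicians were compared against is Bayes' rule in odds form: posttest odds equal prior odds times the likelihood ratio, where the likelihood ratio of a positive test is the sensitivity divided by the false positive rate. A minimal sketch with illustrative numbers:

    ```python
    # Normative posttest probability from a prior and a likelihood ratio
    # (odds form of Bayes' rule). The numbers below are illustrative only.
    def posttest_probability(prior, sensitivity, false_positive_rate):
        lr = sensitivity / false_positive_rate       # LR of a positive test
        prior_odds = prior / (1.0 - prior)
        post_odds = prior_odds * lr
        return post_odds / (1.0 + post_odds)

    # e.g. a physician who believes prior = 10%, sensitivity = 90%, FPR = 5%
    print(posttest_probability(0.10, 0.90, 0.05))    # ~0.67
    ```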

  4. The problem of false positives and false negatives in violent video game experiments.

    PubMed

    Ferguson, Christopher J

    The problem of false positives and negatives has received considerable attention in behavioral research in recent years. The current paper uses video game violence research as an example of how such issues may develop in a field. Despite decades of research, evidence on whether violent video games (VVGs) contribute to aggression in players has remained mixed. Concerns have been raised in recent years that experiments regarding VVGs may suffer from both "false positives" and "false negatives." The current paper examines this issue in three sets of video game experiments, two sets of video game experiments on aggression and prosocial behaviors identified in meta-analysis, and a third group of recent null studies. Results indicated that studies of VVGs and aggression appear to be particularly prone to false positive results. Studies of VVGs and prosocial behavior, by contrast, are heterogeneous and did not demonstrate any indication of false positive results. However, their heterogeneous nature made it difficult to base solid conclusions on them. By contrast, evidence for false negatives in null studies was limited, and little evidence emerged that null studies lacked power in comparison to those highlighted in past meta-analyses as evidence for effects. These results are considered in light of issues related to false positives and negatives in behavioral science more broadly. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Accounting for False Positive HIV Tests: Is Visceral Leishmaniasis Responsible?

    PubMed Central

    Shanks, Leslie; Ritmeijer, Koert; Piriou, Erwan; Siddiqui, M. Ruby; Kliescikova, Jarmila; Pearce, Neil; Ariti, Cono; Muluneh, Libsework; Masiga, Johnson; Abebe, Almaz

    2015-01-01

    Background Co-infection with HIV and visceral leishmaniasis is an important consideration in treatment of either disease in endemic areas. Diagnosis of HIV in resource-limited settings relies on rapid diagnostic tests used together in an algorithm. A limitation of the HIV diagnostic algorithm is that it is vulnerable to falsely positive reactions due to cross reactivity. It has been postulated that visceral leishmaniasis (VL) infection can increase this risk of false positive HIV results. This cross-sectional study compared the risk of false positive HIV results in VL patients with non-VL individuals. Methodology/Principal Findings Participants were recruited from 2 sites in Ethiopia. The Ethiopian algorithm of a tiebreaker using 3 rapid diagnostic tests (RDTs) was used to test for HIV. The gold standard test was the Western Blot, with indeterminate results resolved by PCR testing. Every RDT screen positive individual was included for testing with the gold standard along with 10% of all negatives. The final analysis included 89 VL and 405 non-VL patients. HIV prevalence was found to be 12.8% (47/367) in the VL group compared to 7.9% (200/2526) in the non-VL group. The RDT algorithm in the VL group yielded 47 positives, 4 false positives, and 38 negatives. The same algorithm for those without VL had 200 positives, 14 false positives, and 191 negatives. Specificity and positive predictive value for the group with VL were lower than for the non-VL group; however, the difference was not found to be significant (p = 0.52 and p = 0.76, respectively). Conclusion The test algorithm yielded a high number of HIV false positive results. However, we were unable to demonstrate a significant difference between groups with and without VL disease. This suggests that the presence of endemic visceral leishmaniasis alone cannot account for the high number of false positive HIV results in our study. PMID:26161864
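
    The dependence of positive predictive value on prevalence that motivates such confirmatory algorithms falls directly out of Bayes' rule; the sketch below uses invented sensitivity and specificity values, not the study's estimates.

    ```python
    # How prevalence drives the positive predictive value of a screening
    # algorithm; the sensitivity/specificity values are illustrative only.
    def ppv(prevalence, sensitivity, specificity):
        tp = prevalence * sensitivity                 # true-positive fraction
        fp = (1.0 - prevalence) * (1.0 - specificity) # false-positive fraction
        return tp / (tp + fp)

    for prev in (0.01, 0.08, 0.13):                   # low to higher prevalence
        print(prev, round(ppv(prev, sensitivity=0.998, specificity=0.98), 3))
    ```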

  6. Accounting for False Positive HIV Tests: Is Visceral Leishmaniasis Responsible?

    PubMed

    Shanks, Leslie; Ritmeijer, Koert; Piriou, Erwan; Siddiqui, M Ruby; Kliescikova, Jarmila; Pearce, Neil; Ariti, Cono; Muluneh, Libsework; Masiga, Johnson; Abebe, Almaz

    2015-01-01

    Co-infection with HIV and visceral leishmaniasis is an important consideration in treatment of either disease in endemic areas. Diagnosis of HIV in resource-limited settings relies on rapid diagnostic tests used together in an algorithm. A limitation of the HIV diagnostic algorithm is that it is vulnerable to falsely positive reactions due to cross reactivity. It has been postulated that visceral leishmaniasis (VL) infection can increase this risk of false positive HIV results. This cross-sectional study compared the risk of false positive HIV results in VL patients with non-VL individuals. Participants were recruited from 2 sites in Ethiopia. The Ethiopian algorithm of a tiebreaker using 3 rapid diagnostic tests (RDTs) was used to test for HIV. The gold standard test was the Western Blot, with indeterminate results resolved by PCR testing. Every RDT screen positive individual was included for testing with the gold standard along with 10% of all negatives. The final analysis included 89 VL and 405 non-VL patients. HIV prevalence was found to be 12.8% (47/367) in the VL group compared to 7.9% (200/2526) in the non-VL group. The RDT algorithm in the VL group yielded 47 positives, 4 false positives, and 38 negatives. The same algorithm for those without VL had 200 positives, 14 false positives, and 191 negatives. Specificity and positive predictive value for the group with VL were lower than for the non-VL group; however, the difference was not found to be significant (p = 0.52 and p = 0.76, respectively). The test algorithm yielded a high number of HIV false positive results. However, we were unable to demonstrate a significant difference between groups with and without VL disease. This suggests that the presence of endemic visceral leishmaniasis alone cannot account for the high number of false positive HIV results in our study.

  7. Relevance of cutoff on a 4th generation ELISA performance in the false positive rate during HIV diagnostic in a low HIV prevalence setting.

    PubMed

    Chacón, Lucía; Mateos, María Luisa; Holguín, África

    2017-07-01

    Despite the high specificity of fourth-generation enzyme immunoassays (4th-gen-EIA) for screening during HIV diagnosis, their positive predictive value is low in populations with low HIV prevalence. Thus, screening should be optimized to reduce false positive results. The influence of sample-to-cutoff (S/CO) ratio values from a 4th-gen-EIA on the false positive rate during routine HIV diagnosis in a low HIV prevalence population was evaluated. A total of 30,201 sera were tested for HIV diagnosis using the Abbott Architect® HIV-Ag/Ab-Combo 4th-gen-EIA at a hospital in Spain over a 17-month period. Architect S/CO values were recorded, comparing the HIV-1 positive results following Architect interpretation (S/CO≥1) with the final HIV-1 diagnosis by confirmatory tests (line immunoassay, LIA, and/or nucleic acid test, NAT). A ROC curve analysis was also performed. Among the 30,201 HIV tests performed, 256 (0.85%) were positive according to Architect interpretation (S/CO≥1), but only 229 (0.76%) were definitively HIV-1 positive after LIA and/or NAT. Thus, 27 (10.5%) of the 256 samples with S/CO≥1 by Architect were false positive diagnoses. The false positive rate decreased as the S/CO ratio increased: all 19 samples with S/CO≤10 were false positives, and all 220 with S/CO>50 were true HIV-positives. The optimal S/CO cutoff value provided by ROC curves was 32.7. No false negative results were found. We show that samples with very low S/CO values during HIV-1 screening using Architect can turn out to be HIV negative after confirmation by LIA and NAT, and that the false positive rate is reduced as S/CO increases. Copyright © 2017 Elsevier B.V. All rights reserved.
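
    The threshold-selection step behind such a ROC analysis can be sketched directly: sweep candidate S/CO cutoffs, compute sensitivity and false-positive rate at each, and choose, for example, the Youden-optimal point. The scores below are synthetic stand-ins, not the study's measurements.

    ```python
    # ROC-style cutoff selection over S/CO scores with confirmed labels.
    # The lognormal score distributions are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(42)
    neg = rng.lognormal(mean=0.5, sigma=0.6, size=27)    # false-reactive S/CO values
    pos = rng.lognormal(mean=4.5, sigma=0.5, size=229)   # confirmed-positive S/CO values
    scores = np.concatenate([neg, pos])
    labels = np.concatenate([np.zeros(len(neg)), np.ones(len(pos))])

    thresholds = np.sort(scores)
    sens = np.array([(scores[labels == 1] >= t).mean() for t in thresholds])
    fpr = np.array([(scores[labels == 0] >= t).mean() for t in thresholds])
    j = sens - fpr                                       # Youden's J statistic
    print("optimal S/CO cutoff ~", thresholds[j.argmax()])
    ```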

  8. Which Factors Contribute to False-Positive, False-Negative, and Invalid Results in Fetal Fibronectin Testing in Women with Symptoms of Preterm Labor?

    PubMed

    Bruijn, Merel M C; Hermans, Frederik J R; Vis, Jolande Y; Wilms, Femke F; Oudijk, Martijn A; Kwee, Anneke; Porath, Martina M; Oei, Guid; Scheepers, Hubertina C J; Spaanderman, Marc E A; Bloemenkamp, Kitty W M; Haak, Monique C; Bolte, Antoinette C; Vandenbussche, Frank P H A; Woiski, Mallory D; Bax, Caroline J; Cornette, Jérôme M J; Duvekot, Johannes J; Bijvank, Bas W A N I J; van Eyck, Jim; Franssen, Maureen T M; Sollie, Krystyna M; van der Post, Joris A M; Bossuyt, Patrick M M; Kok, Marjolein; Mol, Ben W J; van Baaren, Gert-Jan

    2017-02-01

    Objective  We assessed the influence of external factors on false-positive, false-negative, and invalid fibronectin results in the prediction of spontaneous delivery within 7 days. Methods  We studied symptomatic women between 24 and 34 weeks' gestational age. We performed uni- and multivariable logistic regression to estimate the effect of external factors (vaginal soap, digital examination, transvaginal sonography, sexual intercourse, vaginal bleeding) on the risk of false-positive, false-negative, and invalid results, using spontaneous delivery within 7 days as the outcome. Results  Out of 708 women, 237 (33%) had a false-positive result; none of the factors showed a significant association. Vaginal bleeding increased the proportion of positive fetal fibronectin (fFN) results, but was significantly associated with a lower risk of false-positive test results (odds ratio [OR], 0.22; 95% confidence interval [CI], 0.12-0.39). Ten women (1%) had a false-negative result. None of the investigated factors was significantly associated with a higher risk of false-negative results. Twenty-one tests (3%) were invalid; only vaginal bleeding showed a significant association (OR, 4.5; 95% CI, 1.7-12). Conclusion  The effect of external factors on the performance of qualitative fFN testing is limited, with vaginal bleeding the only factor that reduces its validity. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
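
    The uni- and multivariable logistic regressions reported here estimate odds ratios for false-positive results; a minimal sketch on synthetic data using statsmodels' Logit, with invented effect sizes mirroring two of the abstract's factors:

    ```python
    # Logistic regression odds ratios for a false-positive outcome.
    # The data and coefficients below are simulated, not the study's.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 708
    bleeding = rng.binomial(1, 0.2, n)               # vaginal bleeding indicator
    exam = rng.binomial(1, 0.5, n)                   # digital examination indicator
    logit = -0.7 - 1.5 * bleeding + 0.1 * exam       # true effects for the simulation
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # false-positive outcome

    X = sm.add_constant(np.column_stack([bleeding, exam]))
    res = sm.Logit(y, X).fit(disp=0)
    print(np.exp(res.params[1:]))                    # odds ratios for the two factors
    print(np.exp(res.conf_int()[1:]))                # their 95% confidence intervals
    ```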

  9. Consensus-based identification of factors related to false-positives in ultrasound scanning of synovitis and tenosynovitis.

    PubMed

    Ikeda, Kei; Narita, Akihiro; Ogasawara, Michihiro; Ohno, Shigeru; Kawahito, Yutaka; Kawakami, Atsushi; Ito, Hiromu; Matsushita, Isao; Suzuki, Takeshi; Misaki, Kenta; Ogura, Takehisa; Kamishima, Tamotsu; Seto, Yohei; Nakahara, Ryuichi; Kaneko, Atsushi; Nakamura, Takayuki; Henmi, Mihoko; Fukae, Jun; Nishida, Keiichiro; Sumida, Takayuki; Koike, Takao

    2016-01-01

    We aimed to identify causes of false-positives in ultrasound scanning of synovial/tenosynovial/bursal inflammation and provide corresponding imaging examples. We first performed a systematic literature review to identify previously reported causes of false-positives. We next determined causes of false-positives and corresponding example images for educational material through Delphi exercises and discussion by 15 experts, each an instructor and/or lecturer in the 2013 advanced course for musculoskeletal ultrasound organized by the Japan College of Rheumatology Committee for the Standardization of Musculoskeletal Ultrasonography. The systematic literature review identified 11 articles relevant to sonographic false-positives of synovial/tenosynovial inflammation. Based on these studies, 21 candidate causes of false-positives were identified in the consensus meeting. Of these items, 11 achieved a predefined consensus (≥ 80%) in the Delphi exercise and were classified as follows: (I) Gray-scale assessment [(A) non-specific synovial findings and (B) normal anatomical structures which can mimic synovial lesions due to either their low echogenicity or anisotropy]; (II) Doppler assessment [(A) intra-articular normal vessels and (B) reverberation]. Twenty-four corresponding examples with 49 still and 23 video images also achieved consensus. Our study provides a set of representative images that can help sonographers to understand false-positives in ultrasound scanning of synovitis and tenosynovitis.

  10. Skin irritation, false positives and the local lymph node assay: a guideline issue?

    PubMed

    Basketter, David A; Kimber, Ian

    2011-10-01

    Since the formal validation and regulatory acceptance of the local lymph node assay (LLNA) there have been commentaries suggesting that the irritant properties of substances can give rise to false positives. As toxicology aspires to progress rapidly towards the age of in vitro alternatives, it is of increasing importance that issues relating to assay selectivity and performance are understood fully, and that true false positive responses are distinguished clearly from those that are simply unpalatable. In the present review, we have focused on whether skin irritation per se is actually a direct cause of true false positive results in the LLNA. The body of published work has been examined critically and considered in relation to our current understanding of the mechanisms of skin irritation and skin sensitisation. From these analyses it is very clear that, of itself, skin irritation is not a cause of false positive results. The corollary is, therefore, that limiting test concentrations in the LLNA for the purpose of avoiding skin irritation may lead, unintentionally, to false negatives. Where a substance is a true false positive in the LLNA, the classic example being sodium lauryl sulphate, explanations for that positivity will have to reach beyond the seductive, but incorrect, recourse to its skin irritation potential. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Hypertension is strongly associated with false-positive bicycle exercise stress echocardiography testing results.

    PubMed

    Keller, Karsten; Stelzer, Kathrin; Munzel, Thomas; Ostad, Mir Abolfazl

    2016-12-01

    Exercise echocardiography is a reliable routine test in patients with known or suspected coronary artery disease. However, in ∼15% of all patients, stress echocardiography leads to false-positive stress echocardiography results. We aimed to investigate the impact of hypertension on stress echocardiographic results. We performed a retrospective study of patients with suspected or known stable coronary artery disease who underwent a bicycle exercise stress echocardiography. Patients with false-positive stress results were compared with those with appropriate results. 126 patients with suspected or known coronary artery disease were included in this retrospective study. 23 patients showed false-positive stress echocardiography results. Besides comparable age, gender distribution, and coronary artery status, hypertension was more prevalent in patients with false-positive stress results (95.7% vs. 67.0%, p = 0.0410). Exercise peak load showed borderline significance, with lower loads in patients with false-positive results (100.0 (IQR 75.0/137.5) vs. 125.0 (100.0/150.0) W, p = 0.0601). Patients with false-positive stress results showed higher systolic (2.05 ± 0.69 vs. 1.67 ± 0.39 mmHg/W, p = 0.0193) and diastolic (1.03 ± 0.38 vs. 0.80 ± 0.28 mmHg/W, p = 0.0165) peak blood pressure (BP) per wattage. In a multivariate logistic regression test, hypertension (OR 17.6 [95% CI 1.9-162.2], p = 0.0115), and systolic (OR 4.12 [1.56-10.89], p = 0.00430) and diastolic (OR 13.74 [2.46-76.83], p = 0.00285) peak BP per wattage, were associated with false-positive exercise results. ROC analysis for systolic and diastolic peak BP levels per wattage showed optimal cut-off values of 1.935 mmHg/W and 0.823 mmHg/W, indicating false-positive exercise echocardiographic results with AUCs of 0.660 and 0.664, respectively. Hypertension is a risk factor for false-positive stress exercise echocardiographic results in patients with known or suspected coronary artery disease. Presence of hypertension was associated with a 17.6-fold elevated risk of false-positive results.

  12. PHYCAA+: an optimized, adaptive procedure for measuring and controlling physiological noise in BOLD fMRI.

    PubMed

    Churchill, Nathan W; Strother, Stephen C

    2013-11-15

    The presence of physiological noise in functional MRI can greatly limit the sensitivity and accuracy of BOLD signal measurements, and produce significant false positives. There are two main types of physiological confounds: (1) high-variance signal in non-neuronal tissues of the brain including vascular tracts, sinuses and ventricles, and (2) physiological noise components which extend into gray matter tissue. These physiological effects may also be partially coupled with stimuli (and thus the BOLD response). To address these issues, we have developed PHYCAA+, a significantly improved version of the PHYCAA algorithm (Churchill et al., 2011) that (1) down-weights the variance of voxels in probable non-neuronal tissue, and (2) identifies the multivariate physiological noise subspace in gray matter that is linked to non-neuronal tissue. This model estimates physiological noise directly from EPI data, without requiring external measures of heartbeat and respiration, or manual selection of physiological components. The PHYCAA+ model significantly improves the prediction accuracy and reproducibility of single-subject analyses, compared to PHYCAA and a number of commonly-used physiological correction algorithms. Individual subject denoising with PHYCAA+ is independently validated by showing that it consistently increased between-subject activation overlap, and minimized false-positive signal in non-gray-matter loci. The results are demonstrated for both block and fast single-event task designs, applied to standard univariate and adaptive multivariate analysis models. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Prospective evaluation of an automated method to identify patients with severe sepsis or septic shock in the emergency department.

    PubMed

    Brown, Samuel M; Jones, Jason; Kuttler, Kathryn Gibb; Keddington, Roger K; Allen, Todd L; Haug, Peter

    2016-08-22

    Sepsis is an often-fatal syndrome resulting from severe infection. Rapid identification and treatment are critical for septic patients. We therefore developed a probabilistic model to identify septic patients in the emergency department (ED). We aimed to produce a model that identifies 80 % of sepsis patients, with no more than 15 false positive alerts per day, within one hour of ED admission, using routine clinical data. We developed the model using retrospective data for 132,748 ED encounters (549 septic), with manual chart review to confirm cases of severe sepsis or septic shock from January 2006 through December 2008. A naïve Bayes model was used to select model features, starting with clinician-proposed candidate variables, which were then used to calculate the probability of sepsis. We evaluated the accuracy of the resulting model in 93,733 ED encounters from April 2009 through June 2010. The final model included mean blood pressure, temperature, age, heart rate, and white blood cell count. The area under the receiver operating characteristic curve (AUC) for the continuous predictor model was 0.953. The binary alert achieved 76.4 % sensitivity with a false positive rate of 4.7 %. We developed and validated a probabilistic model to identify sepsis early in an ED encounter. Despite changes in process, organizational focus, and the H1N1 influenza pandemic, our model performed adequately in our validation cohort, suggesting that it will be generalizable.
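
    A minimal sketch of the modelling idea: fit a Gaussian naive Bayes classifier to routine vitals and tune the alert threshold to an alarm budget. The synthetic vitals, their distributions, and the quantile used as a budget are invented, not the study's model.

    ```python
    # Gaussian naive Bayes screening alert with a tunable alarm budget.
    # All distributions below are invented for illustration.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(3)
    n_neg, n_pos = 5000, 25                                  # sepsis is rare
    # Columns: mean BP, temperature, heart rate, white blood cell count.
    X_neg = rng.normal([85, 37.0, 80, 9], [12, 0.5, 15, 3], size=(n_neg, 4))
    X_pos = rng.normal([65, 38.5, 110, 16], [12, 0.8, 20, 5], size=(n_pos, 4))
    X = np.vstack([X_neg, X_pos])
    y = np.r_[np.zeros(n_neg), np.ones(n_pos)]

    model = GaussianNB().fit(X, y)
    p = model.predict_proba(X)[:, 1]
    # Alert on roughly the top 0.5% of encounters, a stand-in for
    # "no more than N false-positive alerts per day".
    thresh = np.quantile(p, 0.995)
    alerts = p >= thresh
    print("sensitivity:", alerts[y == 1].mean(), " alert rate:", alerts.mean())
    ```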

  14. VALFAST: Secure Probabilistic Validation of Hundreds of Kepler Planet Candidates

    NASA Astrophysics Data System (ADS)

    Morton, Tim; Petigura, E.; Johnson, J. A.; Howard, A.; Marcy, G. W.; Baranec, C.; Law, N. M.; Riddle, R. L.; Ciardi, D. R.; Robo-AO Team

    2014-01-01

    The scope, scale, and tremendous success of the Kepler mission has necessitated the rapid development of probabilistic validation as a new conceptual framework for analyzing transiting planet candidate signals. While several planet validation methods have been independently developed and presented in the literature, none has yet come close to addressing the entire Kepler survey. I present the results of applying VALFAST---a planet validation code based on the methodology described in Morton (2012)---to every Kepler Object of Interest. VALFAST is unique in its combination of detail, completeness, and speed. Using the transit light curve shape, realistic population simulations, and (optionally) diverse follow-up observations, it calculates the probability that a transit candidate signal is the result of a true transiting planet or any of a number of astrophysical false positive scenarios, all in just a few minutes on a laptop computer. In addition to efficiently validating the planetary nature of hundreds of new KOIs, this broad application of VALFAST also demonstrates its ability to reliably identify likely false positives. This extensive validation effort is also the first to incorporate data from all of the largest Kepler follow-up observing efforts: the CKS survey of ~1000 KOIs with Keck/HIRES, the Robo-AO survey of >1700 KOIs, and high-resolution images obtained through the Kepler Follow-up Observing Program. In addition to enabling the core science that the Kepler mission was designed for, this methodology will be critical to obtain statistical results from future surveys such as TESS and PLATO.
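
    At its core, probabilistic validation compares the prior-weighted likelihood of the planet hypothesis against the summed prior-weighted likelihoods of the false positive scenarios. A schematic calculation with invented numbers (this is not the VALFAST code):

    ```python
    # Schematic false positive probability (FPP) for a transit candidate.
    # Likelihoods and priors below are placeholders, not fitted values.
    def fp_probability(planet_like, planet_prior, fp_scenarios):
        """fp_scenarios: iterable of (likelihood, prior) pairs, e.g. for
        eclipsing binaries, blended binaries, hierarchical triples..."""
        planet = planet_like * planet_prior
        fp = sum(like * prior for like, prior in fp_scenarios)
        return fp / (fp + planet)

    fpp = fp_probability(planet_like=1.0, planet_prior=0.4,
                         fp_scenarios=[(0.02, 0.1), (0.05, 0.05)])
    print(f"FPP = {fpp:.4f}")   # candidate "validated" if below a small threshold
    ```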

  15. False Recognition in Behavioral Variant Frontotemporal Dementia and Alzheimer's Disease-Disinhibition or Amnesia?

    PubMed

    Flanagan, Emma C; Wong, Stephanie; Dutt, Aparna; Tu, Sicong; Bertoux, Maxime; Irish, Muireann; Piguet, Olivier; Rao, Sulakshana; Hodges, John R; Ghosh, Amitabha; Hornberger, Michael

    2016-01-01

    Episodic memory recall processes in Alzheimer's disease (AD) and behavioral variant frontotemporal dementia (bvFTD) can be similarly impaired, whereas recognition performance is more variable. A potential reason for this variability could be false-positive errors made on recognition trials and whether these errors are due to amnesia per se or a general over-endorsement of recognition items regardless of memory. The current study addressed this issue by analysing recognition performance on the Rey Auditory Verbal Learning Test (RAVLT) in 39 bvFTD, 77 AD and 61 control participants from two centers (India, Australia), as well as disinhibition assessed using the Hayling test. Whereas both AD and bvFTD patients were comparably impaired on delayed recall, bvFTD patients showed intact recognition performance in terms of the number of correct hits. However, both patient groups endorsed significantly more false-positives than controls, and bvFTD and AD patients scored equally poorly on a sensitivity index (correct hits minus false positives). Furthermore, measures of disinhibition were significantly associated with false positives in both groups, with a stronger relationship in bvFTD. Voxel-based morphometry analyses revealed similar neural correlates of false positive endorsement across bvFTD and AD, with both patient groups showing involvement of prefrontal and Papez circuitry regions, such as medial temporal and thalamic regions, and a DTI analysis detected an emerging but non-significant trend between false positives and decreased fornix integrity in bvFTD only. These findings suggest that false-positive errors on recognition tests relate to similar mechanisms in bvFTD and AD, reflecting deficits in episodic memory processes and disinhibition. These findings highlight that current memory tests are not sufficient to accurately distinguish between bvFTD and AD patients.
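
    The sensitivity index used here, and its signal-detection cousin d', separate discrimination from a tendency to over-endorse items; a small worked example with invented hit and false-alarm counts:

    ```python
    # Hits minus false alarms, and d' = z(hit rate) - z(false-alarm rate).
    # Counts are invented (15 studied items + 15 lures).
    from scipy.stats import norm

    hits, misses = 12, 3           # responses to studied words
    fas, crs = 6, 9                # false positives / correct rejections to lures

    hit_rate = hits / (hits + misses)
    fa_rate = fas / (fas + crs)
    print("hits - false positives:", hits - fas)
    print("d-prime:", norm.ppf(hit_rate) - norm.ppf(fa_rate))
    ```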

  16. Radar detection with the Neyman-Pearson criterion using supervised-learning-machines trained with the cross-entropy error

    NASA Astrophysics Data System (ADS)

    Jarabo-Amores, María-Pilar; la Mata-Moya, David de; Gil-Pita, Roberto; Rosa-Zurera, Manuel

    2013-12-01

    The application of supervised learning machines trained to minimize the cross-entropy error to radar detection is explored in this article. The detector is implemented with a learning machine that realizes a discriminant function, whose output is compared to a threshold selected to fix a desired probability of false alarm. The study is based on the calculation of the function that the learning machine approximates during training, and on the application of a sufficient condition for a discriminant function to be used to approximate the optimum Neyman-Pearson (NP) detector. In this article, the function that a supervised learning machine approximates after being trained to minimize the cross-entropy error is obtained. This discriminant function can be used to implement the NP detector, which maximizes the probability of detection while maintaining the probability of false alarm below or equal to a predefined value. Some experiments on signal detection using neural networks are also presented to test the validity of the study.
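
    A minimal sketch of the scheme under the article's premise: a classifier trained with cross-entropy (log) loss approximates the posterior probability of the target class, so thresholding its output, with the threshold set from noise-only scores to fix the false alarm probability, approximates the Neyman-Pearson detector. The data, network size, and Pfa value below are invented.

    ```python
    # Approximate NP detector: threshold a cross-entropy-trained classifier's
    # output, calibrating the threshold on noise-only data to fix Pfa.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(5)
    n, dim = 4000, 16
    t = np.arange(dim)
    noise = rng.standard_normal((n, dim))                       # H0 samples
    signal = 0.5 * np.sin(2 * np.pi * 0.1 * t) + rng.standard_normal((n, dim))  # H1
    X = np.vstack([noise, signal])
    y = np.r_[np.zeros(n), np.ones(n)]

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                        random_state=0).fit(X, y)   # trained with log loss
    scores_h0 = clf.predict_proba(rng.standard_normal((20000, dim)))[:, 1]
    thresh = np.quantile(scores_h0, 1 - 1e-2)       # fix Pfa at 0.01
    pd = (clf.predict_proba(signal)[:, 1] >= thresh).mean()
    print("estimated Pd at Pfa=0.01:", pd)
    ```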

  17. Wolf Attack Probability: A Theoretical Security Measure in Biometric Authentication Systems

    NASA Astrophysics Data System (ADS)

    Une, Masashi; Otsuka, Akira; Imai, Hideki

    This paper proposes a wolf attack probability (WAP) as a new measure for evaluating the security of biometric authentication systems. The wolf attack is an attempt to impersonate a victim by feeding “wolves” into the system to be attacked. A “wolf” is an input value which can be falsely accepted as a match with multiple templates. WAP is defined as the maximum success probability of a wolf attack with one wolf sample. In this paper, we give a rigorous definition of the new security measure, which gives a strength estimate of an individual biometric authentication system against impersonation attacks. We show that if one re-estimates using our WAP measure, a typical fingerprint algorithm turns out to be much weaker than theoretically estimated by Ratha et al. Moreover, we apply the wolf attack to a finger-vein-pattern based algorithm. Surprisingly, we show that there exists an extremely strong wolf which falsely matches all templates for any threshold value.

  18. A Track Initiation Method for the Underwater Target Tracking Environment

    NASA Astrophysics Data System (ADS)

    Li, Dong-dong; Lin, Yang; Zhang, Yao

    2018-04-01

    A novel efficient track initiation method is proposed for the harsh underwater target tracking environment (heavy clutter and large measurement errors): the track splitting, evaluating, pruning and merging method (TSEPM). Track initiation demands that the method determine the existence and initial state of a target quickly and correctly. Heavy clutter and large measurement errors pose additional difficulties and challenges, which deteriorate and complicate track initiation in the harsh underwater target tracking environment. Current track initiation methods have three primary shortcomings: (a) they cannot eliminate the disturbances of clutter effectively; (b) they may exhibit a high false alarm probability and a low track detection probability; and (c) they cannot estimate the initial state of a new confirmed track correctly. Based on the multiple hypotheses tracking principle and a modified logic-based track initiation method, track splitting creates a large number of tracks, including the true track originating from the target, in order to increase the track detection probability; and, in order to decrease the false alarm probability, track pruning and track merging based on an evaluation mechanism are proposed to reduce the false tracks. The TSEPM method can deal with the track initiation problems arising from heavy clutter and large measurement errors, determining the target's existence and estimating its initial state with the least squares method. Moreover, our method is fully automatic and does not require any kind of manual input for initializing or tuning any parameter. Simulation results indicate that our new method significantly improves the performance of track initiation in the harsh underwater target tracking environment.
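
    The final least-squares step mentioned in the abstract can be sketched directly: under a constant-velocity model, the initial position and velocity of a confirmed track follow from a linear fit of the initiation measurements against time. The measurement values below are invented.

    ```python
    # Least-squares initial state for a confirmed track (constant-velocity model).
    import numpy as np

    times = np.array([0.0, 1.0, 2.0, 3.0])                 # scan times (s)
    meas = np.array([[0.1, 0.0], [1.2, 0.4],
                     [1.9, 1.1], [3.1, 1.4]])              # noisy x, y positions

    A = np.column_stack([np.ones_like(times), times])      # [1, t] design matrix
    coef, *_ = np.linalg.lstsq(A, meas, rcond=None)        # rows: position, velocity
    p0, v0 = coef[0], coef[1]
    print("initial position:", p0, " initial velocity:", v0)
    ```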

  19. False-positive buprenorphine EIA urine toxicology results due to high dose morphine: a case report.

    PubMed

    Tenore, Peter L

    2012-01-01

    In monitoring a patient with chronic pain who was taking high-dose morphine and oxycodone with weekly urine enzymatic immunoassay (EIA) toxicology testing, the authors noted consistent positives for buprenorphine. The patient was not taking buprenorphine, and gas chromatography/mass spectroscopy (GCMS) testing on multiple samples revealed no buprenorphine, indicating a case of false-positive buprenorphine EIAs in a high-dose opiate case. The authors discontinued oxycodone for a period of time and then discontinued morphine. Urine monitoring with EIAs and GCMS revealed false-positive buprenorphine EIAs, which remained only when the patient was taking morphine. When taking only oxycodone and no morphine, urine samples became buprenorphine negative. When morphine was reintroduced, false-positive buprenorphine results resumed. Medical practitioners should be aware that high-dose morphine (with morphine urine levels turning positive within the 15,000 to 28,000 ng/mL range) may produce false-positive buprenorphine EIAs with standard urine EIA toxicology testing.

  20. Evidence of a false thumb in a fossil carnivore clarifies the evolution of pandas

    PubMed Central

    Salesa, Manuel J.; Antón, Mauricio; Peigné, Stéphane; Morales, Jorge

    2006-01-01

    The “false thumb” of pandas is a carpal bone, the radial sesamoid, which has been enlarged and functions as an opposable thumb. If the giant panda (Ailuropoda melanoleuca) and the red panda (Ailurus fulgens) are not closely related, their sharing of this adaptation implies a remarkable convergence. The discovery of previously unknown postcranial remains of a Miocene red panda relative, Simocyon batalleri, from the Spanish site of Batallones-1 (Madrid), now shows that this animal had a false thumb. The radial sesamoid of S. batalleri shows similarities with that of the red panda, which supports a sister-group relationship and indicates independent evolution in both pandas. The fossils from Batallones-1 reveal S. batalleri as a puma-sized, semiarboreal carnivore with a moderately hypercarnivore diet. These data suggest that the false thumbs of S. batalleri and Ailurus fulgens were probably inherited from a primitive member of the red panda family (Ailuridae), which lacked the red panda's specializations for herbivory but shared its arboreal adaptations. Thus, it seems that, whereas the false thumb of the giant panda probably evolved for manipulating bamboo, the false thumbs of the red panda and of S. batalleri more likely evolved as an aid for arboreal locomotion, with the red panda secondarily developing its ability for item manipulation and thus producing one of the most dramatic cases of convergence among vertebrates. PMID:16387860

  1. Evidence of a false thumb in a fossil carnivore clarifies the evolution of pandas.

    PubMed

    Salesa, Manuel J; Antón, Mauricio; Peigné, Stéphane; Morales, Jorge

    2006-01-10

    The "false thumb" of pandas is a carpal bone, the radial sesamoid, which has been enlarged and functions as an opposable thumb. If the giant panda (Ailuropoda melanoleuca) and the red panda (Ailurus fulgens) are not closely related, their sharing of this adaptation implies a remarkable convergence. The discovery of previously unknown postcranial remains of a Miocene red panda relative, Simocyon batalleri, from the Spanish site of Batallones-1 (Madrid), now shows that this animal had a false thumb. The radial sesamoid of S. batalleri shows similarities with that of the red panda, which supports a sister-group relationship and indicates independent evolution in both pandas. The fossils from Batallones-1 reveal S. batalleri as a puma-sized, semiarboreal carnivore with a moderately hypercarnivore diet. These data suggest that the false thumbs of S. batalleri and Ailurus fulgens were probably inherited from a primitive member of the red panda family (Ailuridae), which lacked the red panda's specializations for herbivory but shared its arboreal adaptations. Thus, it seems that, whereas the false thumb of the giant panda probably evolved for manipulating bamboo, the false thumbs of the red panda and of S. batalleri more likely evolved as an aid for arboreal locomotion, with the red panda secondarily developing its ability for item manipulation and thus producing one of the most dramatic cases of convergence among vertebrates.

  2. Gap probability - Measurements and models of a pecan orchard

    NASA Technical Reports Server (NTRS)

    Strahler, Alan H.; Li, Xiaowen; Moody, Aaron; Liu, YI

    1992-01-01

    Measurements and models are compared for gap probability in a pecan orchard. Measurements are based on panoramic photographs with a 50° by 135° view angle, made under the canopy looking upward at regular positions along transects between orchard trees. The gap probability model is driven by geometric parameters at two levels: crown and leaf. Crown-level parameters include the shape of the crown envelope and the spacing of crowns; leaf-level parameters include leaf size and shape, leaf area index, and leaf angle, all as functions of canopy position.
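
    At the leaf level, a common first-order form of such a model is the Poisson (Beer-Lambert) gap probability; the sketch below shows that ingredient only, not the abstract's two-level crown-plus-leaf model, and the parameter values are illustrative.

    ```python
    # First-order gap probability under a Poisson canopy:
    # P_gap(theta) = exp(-G(theta) * LAI / cos(theta)).
    import numpy as np

    def gap_probability(theta_deg, lai, g=0.5):
        """g: leaf projection function (0.5 for a spherical leaf angle distribution)."""
        theta = np.radians(theta_deg)
        return np.exp(-g * lai / np.cos(theta))

    for theta in (0, 30, 60):                 # view zenith angles in degrees
        print(theta, gap_probability(theta, lai=3.0))
    ```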

  3. Diurnal rhythm and concordance between objective and subjective hot flashes: the Hilo Women's Health Study.

    PubMed

    Sievert, Lynnette L; Reza, Angela; Mills, Phoebe; Morrison, Lynn; Rahberg, Nichole; Goodloe, Amber; Sutherland, Michael; Brown, Daniel E

    2010-01-01

    The aims of this study were to test for a diurnal pattern in hot flashes in a multiethnic population living in a hot, humid environment and to examine the rates of concordance between objective and subjective measures of hot flashes using ambulatory and laboratory measures. Study participants aged 45 to 55 years were recruited from the general population of Hilo, HI. Women wore a Biolog hot flash monitor (UFI, Morro Bay, CA), kept a diary for 24 hours, and also participated in 3-hour laboratory measures (n = 199). Diurnal patterns were assessed using polynomial regression. For each woman, objectively recorded hot flashes that matched subjective experience were treated as true-positive readings. Subjective hot flashes were considered the standard for computing false-positive and false-negative readings. True-positive, false-positive, and false-negative readings were compared across ethnic groups by chi-square analyses. Frequencies of sternal, nuchal, and subjective hot flashes peaked at 15:00 ± 1 hour with no difference by ethnicity. Laboratory results supported the pattern seen in ambulatory monitoring. Sternal and nuchal monitoring showed the same frequency of true-positive measures, but nonsternal electrodes picked up more false-positive readings. Laboratory monitoring showed very low frequencies of false negatives. There were no ethnic differences in the frequency of true-positive or false-positive measures. Women of European descent were more likely to report hot flashes that were not objectively demonstrated (false-negative measures). The diurnal pattern and peak in hot flash occurrence in the hot humid environment of Hilo were similar to results from more temperate environments. Lack of variation in sternal versus nonsternal measures and in true-positive measures across ethnicities suggests no appreciable effect of population variation in sweating patterns.

  4. Diurnal rhythm and concordance between objective and subjective hot flashes: The Hilo Women’s Health Study

    PubMed Central

    Sievert, Lynnette L.; Reza, Angela; Mills, Phoebe; Morrison, Lynn; Rahberg, Nichole; Goodloe, Amber; Sutherland, Michael; Brown, Daniel E.

    2010-01-01

    Objective To test for a diurnal pattern in hot flashes in a multi-ethnic population living in a hot, humid environment. To examine rates of concordance between objective and subjective measures of hot flashes using ambulatory and laboratory measures. Methods Study participants aged 45–55 were recruited from the general population of Hilo, Hawaii. Women wore a Biolog hot flash monitor, kept a diary for 24-hours, and also participated in 3-hour laboratory measures (n=199). Diurnal patterns were assessed using polynomial regression. For each woman, objectively recorded hot flashes that matched subjective experience were treated as true positive readings. Subjective hot flashes were considered the standard for computing false positive and false negative readings. True positive, false positive, and false negative readings were compared across ethnic groups by chi-square analyses. Results Frequencies of sternal, nuchal and subjective hot flashes peaked at 15:00 ± 1 hour with no difference by ethnicity. Laboratory results supported the pattern seen in ambulatory monitoring. Sternal and nuchal monitoring showed the same frequency of true positive measures, but non-sternal electrodes picked up more false positive readings. Laboratory monitoring showed very low frequencies of false negatives. There were no ethnic differences in the frequency of true positive or false positive measures. Women of European descent were more likely to report hot flashes that were not objectively demonstrated (false negative measures). Conclusions The diurnal pattern and peak in hot flash occurrence in the hot humid environment of Hilo was similar to results from more temperate environments. Lack of variation in sternal vs. non-sternal measures, and in true positive measures across ethnicities suggests no appreciable effect of population variation in sweating patterns. PMID:20220538

  5. Stimulus probability effects in absolute identification.

    PubMed

    Kent, Christopher; Lamberts, Koen

    2016-05-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of presentation probability on both proportion correct and response times. The effects were moderated by the ubiquitous stimulus position effect. The accuracy and response time data were predicted by an exemplar-based model of perceptual cognition (Kent & Lamberts, 2005). The bow in discriminability was also attenuated when presentation probability for middle items was relatively high, an effect that will constrain future model development. The study provides evidence for item-specific learning in absolute identification. Implications for other theories of absolute identification are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Ignoring Intermarker Linkage Disequilibrium Induces False-Positive Evidence of Linkage for Consanguineous Pedigrees when Genotype Data Is Missing for Any Pedigree Member

    PubMed Central

    Li, Bingshan; Leal, Suzanne M.

    2008-01-01

    Missing genotype data can increase false-positive evidence for linkage when either parametric or nonparametric analysis is carried out ignoring intermarker linkage disequilibrium (LD). Previously it was demonstrated by Huang et al. [1] that no bias occurs in this situation for affected sib-pairs with unrelated parents when either both parents are genotyped or genotype data is available for two additional unaffected siblings when parental genotypes are missing. However, this is not the case for autosomal recessive consanguineous pedigrees, where missing genotype data for any pedigree member within a consanguinity loop can increase false-positive evidence of linkage. False-positive evidence for linkage is further increased when cryptic consanguinity is present. The amount of false-positive evidence for linkage, and which family members aid in its reduction, is highly dependent on which family members are genotyped. When parental genotype data is available, the false-positive evidence for linkage is usually not as strong as when parental genotype data is unavailable. For a pedigree with an affected proband whose first-cousin parents have been genotyped, further reduction in the false-positive evidence of linkage can be obtained by including genotype data from additional affected siblings of the proband or genotype data from the proband's sibling-grandparents. For the situation, when parental genotypes are unavailable, false-positive evidence for linkage can be reduced by including genotype data from either unaffected siblings of the proband or the proband's married-in-grandparents in the analysis. PMID:18073490

  7. Forward Association, Backward Association, and the False-Memory Illusion

    ERIC Educational Resources Information Center

    Brainerd, C. J.; Wright, Ron

    2005-01-01

    In the Deese-Roediger-McDermott false-memory illusion, forward associative strength (FAS) is unrelated to the strength of the illusion; this is puzzling, because high-FAS lists ought to share more semantic features with critical unpresented words than should low-FAS lists. The authors show that this null result is probably a truncated range…

  8. Biological false-positive venereal disease research laboratory test in cerebrospinal fluid in the diagnosis of neurosyphilis - a case-control study.

    PubMed

    Zheng, S; Lin, R J; Chan, Y H; Ngan, C C L

    2018-03-01

    There is no clear consensus on the diagnosis of neurosyphilis. The Venereal Disease Research Laboratory (VDRL) test from cerebrospinal fluid (CSF) has traditionally been considered the gold standard for diagnosing neurosyphilis but is widely known to be insensitive. In this study, we compared the clinical and laboratory characteristics of true-positive VDRL-CSF cases with biological false-positive VDRL-CSF cases. We retrospectively identified cases of true and false-positive VDRL-CSF across a 3-year period received by the Immunology and Serology Laboratory, Singapore General Hospital. A biological false-positive VDRL-CSF is defined as a reactive VDRL-CSF with a non-reactive Treponema pallidum particle agglutination (TPPA)-CSF and/or negative Line Immuno Assay (LIA)-CSF IgG. A true-positive VDRL-CSF is a reactive VDRL-CSF with a concordant reactive TPPA-CSF and/or positive LIA-CSF IgG. During the study period, a total of 1254 specimens underwent VDRL-CSF examination. Amongst these, 60 specimens from 53 patients tested positive for VDRL-CSF. Of the 53 patients, 42 (79.2%) were true-positive cases and 11 (20.8%) were false-positive cases. In our setting, a positive non-treponemal serology has 97.6% sensitivity, 100% specificity, 100% positive predictive value and 91.7% negative predictive value for a true-positive VDRL-CSF based on our laboratory definition. HIV seropositivity was an independent predictor of a true-positive VDRL-CSF. Biological false-positive VDRL-CSF is common in a setting where patients are tested without first establishing a serological diagnosis of syphilis. Serological testing should be performed prior to CSF evaluation for neurosyphilis. © 2017 European Academy of Dermatology and Venereology.

  9. Direct sampling of chemical weapons in water by photoionization mass spectrometry.

    PubMed

    Syage, Jack A; Cai, Sheng-Suan; Li, Jianwei; Evans, Matthew D

    2006-05-01

    The vulnerability of water supplies to toxic contamination calls for fast and effective means for screening water samples for multiple threats. We describe the use of photoionization (PI) mass spectrometry (MS) for high-speed, high-throughput screening and molecular identification of chemical weapons (CW) threats and other hazardous compounds. The screening technology can detect a wide range of compounds at subacute concentrations with no sample preparation and a sampling cycle time of approximately 45 s. The technology was tested with CW agents VX, GA, GB, GD, GF, HD, HN1, and HN3, in addition to riot agents and precursors. All are sensitively detected and give simple PI mass spectra dominated by the parent ion. The target application of the PI MS method is as a routine, real-time early warning system for CW agents and other hazardous compounds in air and in water. In this work, we also present comprehensive measurements for water analysis and report on the system detection limits, linearity, quantitation accuracy, and false positive (FP) and false negative rates for concentrations at subacute levels. The latter data are presented in the form of receiver operating characteristic curves, i.e., detection probability P(D) versus FP probability P(FP). These measurements were made using the CW surrogate compounds DMMP, DEMP, DEEP, and DIMP. Method detection limits (3σ) obtained using a capillary injection method yielded 1, 6, 3, and 2 ng/mL, respectively. These results were obtained using 1-μL injections of water samples without any preparation, corresponding to mass detection limits of 1, 6, 3, and 2 pg, respectively. The linear range was about 3-4 decades and the dynamic range about 4-5 decades. The relative standard deviations were generally <10% at CW subacute concentration levels.

  10. Real-Time Global Flood Estimation Using Satellite-Based Precipitation and a Coupled Land Surface and Routing Model

    NASA Technical Reports Server (NTRS)

    Wu, Huan; Adler, Robert F.; Tian, Yudong; Huffman, George J.; Li, Hongyi; Wang, JianJian

    2014-01-01

    A widely used land surface model, the Variable Infiltration Capacity (VIC) model, is coupled with a newly developed hierarchical dominant river tracing-based runoff-routing model to form the Dominant river tracing-Routing Integrated with VIC Environment (DRIVE) model, which serves as the new core of the real-time Global Flood Monitoring System (GFMS). The GFMS uses real-time satellite-based precipitation to derive flood monitoring parameters for the latitude band 50 deg. N - 50 deg. S at relatively high spatial (approximately 12 km) and temporal (3 hourly) resolution. Examples of model results for recent flood events are computed using the real-time GFMS (http://flood.umd.edu). To evaluate the accuracy of the new GFMS, the DRIVE model is run retrospectively for 15 years using both research-quality and real-time satellite precipitation products. Evaluation results are slightly better for the research-quality input and significantly better for longer duration events (3 day events versus 1 day events). Basins with fewer dams tend to provide lower false alarm ratios. For events longer than three days in areas with few dams, the probability of detection is approximately 0.9 and the false alarm ratio is approximately 0.6. In general, these statistical results are better than those of the previous system. Streamflow was evaluated at 1121 river gauges across the quasi-global domain. Validation using real-time precipitation across the tropics (30 deg. S - 30 deg. N) gives positive daily Nash-Sutcliffe Coefficients for 107 out of 375 (28%) stations with a mean of 0.19 and 51% of the same gauges at monthly scale with a mean of 0.33. There were poorer results in higher latitudes, probably due to larger errors in the satellite precipitation input.
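
    The two verification scores quoted here come straight from a 2x2 contingency table of predicted versus observed flood events; a minimal sketch, with counts chosen only to reproduce the quoted magnitudes:

    ```python
    # Categorical verification scores: probability of detection (POD) and
    # false alarm ratio (FAR). Counts are illustrative, not the study's data.
    def pod_far(hits, misses, false_alarms):
        pod = hits / (hits + misses)            # fraction of events detected
        far = false_alarms / (hits + false_alarms)  # fraction of alarms that are false
        return pod, far

    pod, far = pod_far(hits=90, misses=10, false_alarms=135)
    print(f"POD={pod:.2f}  FAR={far:.2f}")      # ~0.9 and ~0.6, as in the abstract
    ```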

  11. Real-time global flood estimation using satellite-based precipitation and a coupled land surface and routing model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Huan; Adler, Robert F.; Tian, Yudong

    2014-03-01

    A widely used land surface model, the Variable Infiltration Capacity (VIC) model, is coupled with a newly developed hierarchical dominant river tracing-based runoff-routing model to form the Dominant river tracing-Routing Integrated with VIC Environment (DRIVE) model, which serves as the new core of the real-time Global Flood Monitoring System (GFMS). The GFMS uses real-time satellite-based precipitation to derive flood monitoring parameters for the latitude band 50°N–50°S at relatively high spatial (~12 km) and temporal (3 hourly) resolution. Examples of model results for recent flood events are computed using the real-time GFMS (http://flood.umd.edu). To evaluate the accuracy of the new GFMS, the DRIVE model is run retrospectively for 15 years using both research-quality and real-time satellite precipitation products. Evaluation results are slightly better for the research-quality input and significantly better for longer duration events (3 day events versus 1 day events). Basins with fewer dams tend to provide lower false alarm ratios. For events longer than three days in areas with few dams, the probability of detection is ~0.9 and the false alarm ratio is ~0.6. In general, these statistical results are better than those of the previous system. Streamflow was evaluated at 1121 river gauges across the quasi-global domain. Validation using real-time precipitation across the tropics (30°S–30°N) gives positive daily Nash-Sutcliffe Coefficients for 107 out of 375 (28%) stations with a mean of 0.19 and 51% of the same gauges at monthly scale with a mean of 0.33. Finally, there were poorer results in higher latitudes, probably due to larger errors in the satellite precipitation input.

  12. False positive circumsporozoite protein ELISA: a challenge for the estimation of the entomological inoculation rate of malaria and for vector incrimination

    PubMed Central

    2011-01-01

    Background The entomological inoculation rate (EIR) is an important indicator in estimating malaria transmission and the impact of vector control. To assess the EIR, the enzyme-linked immunosorbent assay (ELISA) to detect the circumsporozoite protein (CSP) is increasingly used. However, several studies have reported false positive results in this ELISA. The false positive results could lead to an overestimation of the EIR. The aim of the present study was to estimate the level of false positivity among different anopheline species in Cambodia and Vietnam and to check for the presence of other parasites that might interact with the anti-CSP monoclonal antibodies. Methods Mosquitoes collected in Cambodia and Vietnam were identified and tested for the presence of sporozoites in head and thorax by using CSP-ELISA. ELISA positive samples were confirmed by a Plasmodium specific PCR. False positive mosquitoes were checked by PCR for the presence of parasites belonging to the Haemosporidia, Trypanosomatidae, Piroplasmida, and Haemogregarines. The heat-stability and the presence of the cross-reacting antigen in the abdomen of the mosquitoes were also checked. Results Specimens (N = 16,160) of seven anopheline species were tested by CSP-ELISA for Plasmodium falciparum and Plasmodium vivax (Pv210 and Pv247). Two new vector species were identified for the region: Anopheles pampanai (P. vivax) and Anopheles barbirostris (Plasmodium malariae). In 88% (155/176) of the mosquitoes found positive with the P. falciparum CSP-ELISA, the presence of Plasmodium sporozoites could not be confirmed by PCR. This percentage was much lower (28% or 5/18) for P. vivax CSP-ELISAs. False positive CSP-ELISA results were associated with zoophilic mosquito species. None of the targeted parasites could be detected in these CSP-ELISA false positive mosquitoes. The ELISA reacting antigen of P. falciparum was heat-stable in CSP-ELISA true positive specimens, but not in the false positives. The heat-unstable cross-reacting antigen is mainly present in head and thorax and almost absent in the abdomens (4 out of 147) of the false positive specimens. Conclusion The CSP-ELISA can considerably overestimate the EIR, particularly for P. falciparum and for zoophilic species. The heat-unstable cross-reacting antigen in false positives remains unknown. Therefore, it is highly recommended to confirm all positive CSP-ELISA results, either by re-analysing the heated ELISA lysate (100°C, 10 min), or by performing Plasmodium specific PCR followed if possible by sequencing of the amplicons for Plasmodium species determination. PMID:21767376

  13. 77 FR 48045 - Supplemental Nutrition Assistance Program: Disqualified Recipient Reporting and Computer Matching...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-13

    ... false positive match rate of 10 percent. Making the match mandatory for the States who did not perform... number of prisoners from 1995 to 2013 and assumed a 10 percent false positive match rate. Finally, we... matches are false positives. We estimate that mandatory matches at certification will identify an...

  14. Magnetic field feature extraction and selection for indoor location estimation.

    PubMed

    Galván-Tejada, Carlos E; García-Vázquez, Juan Pablo; Brena, Ramon F

    2014-06-20

    User indoor positioning has been under constant improvement, especially with the availability of new sensors integrated into modern mobile devices, which allows us to exploit not only infrastructure made for everyday use, such as WiFi, but also natural infrastructure, as is the case of the natural magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: a home and an office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features of the model from 46 to 5, regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of the estimator correctly detecting the user's location (sensitivity) and its capacity to reject false positives (specificity) in both scenarios.

  15. Early detection of the growth of Mycobacterium tuberculosis using magnetophoretic immunoassay in liquid culture.

    PubMed

    Kim, Jeonghyo; Lee, Kil-Soo; Kim, Eun Bee; Paik, Seungwha; Chang, Chulhun L; Park, Tae Jung; Kim, Hwa-Jung; Lee, Jaebeom

    2017-10-15

    Tuberculosis (TB) is an often neglected epidemic disease that has yet to be brought under control by contemporary techniques of medicine and biotechnology. In this study, a nanoscale sensing system, referred to as magnetophoretic immunoassay (MPI), was designed to capture culture filtrate protein (CFP)-10 antigens effectively using two different types of nanoparticles (NPs). Two specific monoclonal antibodies against the CFP-10 antigen were used, with gold NPs for signaling and magnetic particles for separation. These results were carefully compared with those obtained using the commercial mycobacteria growth indicator tube (MGIT) test via two sequential clinical tests (with ca. 260 clinical samples). The sensing linearity of MPI was shown in the range of pico- to micromoles, and the detection limit was 0.3 pM. MPI using clinical samples shows robust and reliable sensing while monitoring Mycobacterium tuberculosis (MTB) growth, with a monitoring time (3-10 days) comparable to that of the MGIT test. Furthermore, MPI distinguished false-positive samples from MGIT-positive samples, probably containing non-tuberculous mycobacteria. Thus, MPI shows promise in early TB diagnosis. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Zika Virus Exposure in an HIV-Infected Cohort in Ghana.

    PubMed

    Sherman, K E; Rouster, S D; Kong, L X; Shata, T M; Archampong, T; Kwara, A; Aliota, M T; Blackard, J T

    2018-04-27

    To determine the prevalence and epidemiologic associations of Zika virus (ZIKV) in HIV-infected patients in Ghana, West Africa, we examined the seroprevalence of ZIKV in HIV/HBV co-infected persons in Ghana from serum samples collected from 2012 to 2014 using ELISA assays and plaque reduction neutralization tests (PRNT). Overall, ZIKV antibody was detected in 12.9% of 236 tested samples, though the true level of exposure is probably lower due to cross-reactions with other related viruses. PRNTs were performed on a subset to provide an estimate of the frequency of false positive reactions. Dengue virus testing was also performed, and antibody prevalence was 87.2%. The median CD4 count was 436 (range 2-1781 cells/mm³) and did not affect antibody results. Regional geographic ethnicity was associated with ZIKV exposure. Overall, these data suggest that ZIKV infection is relatively prevalent in HIV-positive persons in Ghana, though not as common as dengue. Further evaluation of the effect of ZIKV and HIV co-infection is warranted given the large geographical overlap of populations exposed to both viruses.

  17. Near-infrared counterparts to the Galactic Bulge Survey X-ray source population

    NASA Astrophysics Data System (ADS)

    Greiss, S.; Steeghs, D.; Jonker, P. G.; Torres, M. A. P.; Maccarone, T. J.; Hynes, R. I.; Britt, C. T.; Nelemans, G.; Gänsicke, B. T.

    2014-03-01

    We report on the near-infrared matches, drawn from three surveys, to the 1640 unique X-ray sources detected by Chandra in the Galactic Bulge Survey (GBS). This survey targets faint X-ray sources in the bulge, with a particular focus on accreting compact objects. We present all viable counterpart candidates and associate a false alarm probability (FAP) with each near-infrared match in order to identify the most likely counterparts. The FAP takes into account a statistical study involving a chance alignment test, as well as the positional accuracy of the individual X-ray sources. We find that although the star density in the bulge is very high, ~90 per cent of our sources have an FAP <10 per cent, indicating that for most X-ray sources, viable near-infrared counterpart candidates can be identified. In addition to the FAP, we provide positional and photometric information for candidate counterparts to ~95 per cent of the GBS X-ray sources. This information, in combination with optical photometry, spectroscopy and variability constraints, will be crucial to characterize and classify secure counterparts.
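
    The chance-alignment part of such a false alarm probability has a standard closed form: for a spatially random field of surface density rho, the probability of at least one unrelated source falling inside the match radius r is 1 - exp(-pi r^2 rho). A minimal sketch (the density and radius below are invented for illustration; the paper's FAP additionally folds in per-source positional accuracy):

        import math

        def chance_alignment_fap(rho_per_sq_arcsec, match_radius_arcsec):
            """P(>= 1 unrelated source within the match radius) for a Poisson field."""
            expected = math.pi * match_radius_arcsec ** 2 * rho_per_sq_arcsec
            return 1.0 - math.exp(-expected)

        # Illustrative: 0.005 sources/arcsec^2 and a 1.5-arcsec error circle
        print(chance_alignment_fap(0.005, 1.5))  # ~0.035, i.e. a ~3.5% chance match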

  18. Newborn Screening for Glutaric Aciduria-II: The New England Experience.

    PubMed

    Sahai, I; Garganta, C L; Bailey, J; James, P; Levy, H L; Martin, M; Neilan, E; Phornphutkul, C; Sweetser, D A; Zytkovicz, T H; Eaton, R B

    2014-01-01

    Newborn screening (NBS) using tandem mass spectrometry (MS/MS) permits detection of neonates with Glutaric Aciduria-Type II (GA-II). We report follow-up of positive GA-II screens by the New England Newborn Screening Program. 1.5 million infants were screened for GA-II (Feb 1999-Dec 2012). Specialist consult was suggested for infants with two or more acylcarnitine elevations suggestive of GA-II. 82 neonates screened positive for GA-II, 21 weighing > 1.5 kg and 61 weighing ≤ 1.5 kg. Seven (one weighing < 1.5 kg) were confirmed with GA-II. Four of these had the severe form and died at < 1 week. The other three have a milder form and were identified because of newborn screening. Two (ages > 5 years) have a G-tube in place, have had multiple hospitalizations, and are slightly hypotonic. The third infant remains asymptomatic (9 months old). Two GA-II carriers were also identified. The remaining positive screens were classified as false positives (FP). Six infants (> 1.5 kg) classified as FP had limited diagnostic work-up. Characteristics and outcomes of all specimens and neonates with a positive screen were reviewed, and marker profiles of the cases and FP were compared to identify characteristic profiles. In addition to the severe form of GA-II, milder forms of GA-II and some GA-II carriers are identified by newborn screening. Some positive screens classified as FP may be affected with a milder form of the disorder. Characteristic GA-II profiles, quantified as GA-II indexes, may be utilized to predict the probability of disorder and direct the urgency of intervention for positive screens.

  19. The use of pre-test and post-test probability values as criteria before selecting patients to undergo coronary angiography in patients who have ischemic findings on myocardial perfusion scintigraphy.

    PubMed

    Karahan Şen, Nazlı Pınar; Bekiş, Recep; Ceylan, Ali; Derebek, Erkan

    2016-07-01

    Myocardial perfusion scintigraphy (MPS) is a diagnostic test frequently used in the diagnosis of coronary heart disease (CHD). MPS is generally interpreted as ischemia present or absent; however, like other diagnostic tests, it has power in predicting the disease. In this study, we aimed to assist in directing high-risk patients to coronary angiography (CA) by evaluating patients without a prior CHD history using pre-test and post-test probabilities. The study was designed as a retrospective study. Between January 2008 and July 2011, 139 patients with positive MPS results who underwent CA within 6 months were evaluated from patient files. Pre-test probabilities based on the Diamond and Forrester method, together with likelihood ratios obtained from the literature, were used to calculate the patients' post-exercise and post-MPS probabilities. Patients were evaluated in risk groups as low, intermediate, and high, and an ROC curve analysis was performed for the post-MPS probabilities. Coronary artery stenosis (CAS) was determined in 59 patients (42.4%). A significant difference was determined between the risk groups according to CAS, both for the pre-test and post-test probabilities (p<0.001, p=0.024). The ROC analysis provided a cut-off value of 80.4% for post-MPS probability in predicting CAS with 67.9% sensitivity and 77.8% specificity. When the post-MPS probability is ≥80% in patients who have reversible perfusion defects on MPS, we suggest interpreting the MPS as "high probability positive" to improve the selection of true-positive patients to undergo CA, and these patients should be primarily recommended for CA.
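
    The pre-test to post-test update used in this design is the odds-likelihood form of Bayes' rule: post-test odds = pre-test odds × likelihood ratio. A minimal sketch (the likelihood ratio below is invented for illustration, not a value from the study):

        def post_test_probability(pre_test_prob, likelihood_ratio):
            """Bayes' rule in odds form: post-test odds = pre-test odds * LR."""
            pre_odds = pre_test_prob / (1.0 - pre_test_prob)
            post_odds = pre_odds * likelihood_ratio
            return post_odds / (1.0 + post_odds)

        # Illustrative: intermediate pre-test probability 0.50, positive-test LR of 4
        print(post_test_probability(0.50, 4.0))  # 0.8 -> crosses the ~80% cut-off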

  20. Statistical evaluation of vibration analysis techniques

    NASA Technical Reports Server (NTRS)

    Milner, G. Martin; Miller, Patrice S.

    1987-01-01

    An evaluation methodology is presented for a selection of candidate vibration analysis techniques applicable to machinery representative of the environmental control and life support system of advanced spacecraft; illustrative results are given. Attention is given to the statistical analysis of small sample experiments, the quantification of detection performance for diverse techniques through the computation of probability of detection versus probability of false alarm, and the quantification of diagnostic performance.

  1. Bladder cancer diagnosis with CT urography: test characteristics and reasons for false-positive and false-negative results.

    PubMed

    Trinh, Tony W; Glazer, Daniel I; Sadow, Cheryl A; Sahni, V Anik; Geller, Nina L; Silverman, Stuart G

    2018-03-01

    To determine the test characteristics of CT urography for detecting bladder cancer in patients with hematuria and those undergoing surveillance, and to analyze reasons for false-positive and false-negative results. A HIPAA-compliant, IRB-approved retrospective review of reports from 1623 CT urograms between 10/2010 and 12/31/2013 was performed. 710 examinations for hematuria or bladder cancer history were compared to cystoscopy performed within 6 months. The reference standard was surgical pathology or a minimum of 1 year of clinical follow-up. False-positive and false-negative examinations were reviewed to determine reasons for errors. Ninety-five bladder cancers were detected. CT urography accuracy was 91.5% (650/710), sensitivity 86.3% (82/95), specificity 92.4% (568/615), positive predictive value 63.6% (82/129), and negative predictive value 97.8% (568/581). Of 43 false positives, the majority of interpretation errors were due to benign prostatic hyperplasia (n = 12), trabeculated bladder (n = 9), and treatment changes (n = 8). Other causes included blood clots, mistaken normal anatomy, and infectious/inflammatory changes; some findings had no cystoscopic correlate. Of 13 false negatives, 11 were due to technique, one to a large urinary residual, and one to artifact. There were no errors in perception. CT urography is an accurate test for diagnosing bladder cancer; however, in protocols relying predominantly on excretory phase images, overall sensitivity remains insufficient to obviate cystoscopy. Awareness of bladder cancer mimics may reduce false-positive results. Improvements in CTU technique may reduce false-negative results.

  2. Optimizing Probability of Detection Point Estimate Demonstration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. NDE methods are required to detect real flaws such as cracks and crack-like flaws. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size (within some tolerance) is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) while achieving an acceptable value for the probability of false calls (POF) and keeping the flaw sizes in the set as small as possible.
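
    The binomial arithmetic behind the 29-flaw demonstration is easy to reproduce. The sketch below (an illustration of the point estimate logic, not NASA's qualification software) computes the probability of passing the demonstration as a function of the true POD; at a true POD of 0.90 the pass probability of a 29-of-29 test is about 0.047, which is why 29 successes out of 29 demonstrate 90% POD at roughly 95% confidence:

        from math import comb

        def prob_pass_demo(true_pod, n=29, max_misses=0):
            """Probability of passing a binomial POD demonstration:
            at most max_misses missed detections among n same-size flaws."""
            return sum(comb(n, k) * (1 - true_pod) ** k * true_pod ** (n - k)
                       for k in range(max_misses + 1))

        for pod in (0.90, 0.95, 0.99):
            print(pod, prob_pass_demo(pod))  # ~0.047, ~0.226, ~0.747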

  3. A novel approach for small sample size family-based association studies: sequential tests.

    PubMed

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
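
    The decision core of such a test is Wald's SPRT: accumulate the log-likelihood ratio and compare it against two boundaries set by the target error rates. A generic sketch (under the usual i.i.d. likelihood-ratio assumptions; not the authors' genetics-specific implementation):

        import math

        def sprt(loglik_ratios, alpha=0.05, beta=0.05):
            """Wald's sequential probability ratio test.
            loglik_ratios: iterable of per-observation log[P(x|H1)/P(x|H0)].
            alpha: tolerated false positive rate; beta: tolerated false negative rate."""
            upper = math.log((1 - beta) / alpha)  # accept H1 at or above this
            lower = math.log(beta / (1 - alpha))  # accept H0 at or below this
            s = 0.0
            for llr in loglik_ratios:
                s += llr
                if s >= upper:
                    return "accept H1"
                if s <= lower:
                    return "accept H0"
            return "keep sampling"  # the third, undecided group of SNPs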

  4. GNSS Signal Authentication Via Power and Distortion Monitoring

    NASA Astrophysics Data System (ADS)

    Wesson, Kyle D.; Gross, Jason N.; Humphreys, Todd E.; Evans, Brian L.

    2018-04-01

    We propose a simple low-cost technique that enables civil Global Positioning System (GPS) receivers and other civil global navigation satellite system (GNSS) receivers to reliably detect carry-off spoofing and jamming. The technique, which we call the Power-Distortion detector, classifies received signals as interference-free, multipath-afflicted, spoofed, or jammed according to observations of received power and correlation function distortion. It does not depend on external hardware or a network connection and can be readily implemented on many receivers via a firmware update. Crucially, the detector can with high probability distinguish low-power spoofing from ordinary multipath. In testing against over 25 high-quality empirical data sets yielding over 900,000 separate detection tests, the detector correctly alarms on all malicious spoofing or jamming attacks while maintaining a <0.6% single-channel false alarm rate.
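
    A toy version of the four-way decision logic implied above (the thresholds and rectangular decision regions are invented for illustration; the published detector uses calibrated power and distortion statistics):

        def classify_signal(power_advantage_db, distortion,
                            power_thresh=3.0, distortion_thresh=0.1):
            """Toy power-distortion classifier. power_advantage_db: received power
            above the nominal level; distortion: a correlation-peak distortion metric."""
            high_power = power_advantage_db > power_thresh
            distorted = distortion > distortion_thresh
            if high_power and distorted:
                return "spoofed"            # excess power plus a distorted peak
            if high_power:
                return "jammed"             # excess power, peak shape intact
            if distorted:
                return "multipath-afflicted"
            return "interference-free"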

  5. Shear-wave elastography in the diagnosis of solid breast masses: what leads to false-negative or false-positive results?

    PubMed

    Yoon, Jung Hyun; Jung, Hae Kyoung; Lee, Jong Tae; Ko, Kyung Hee

    2013-09-01

    To investigate the factors that have an effect on false-positive or false-negative shear-wave elastography (SWE) results in solid breast masses. From June to December 2012, 222 breast lesions of 199 consecutive women (mean age: 45.3 ± 10.1 years; range, 21 to 88 years) who had been scheduled for biopsy or surgical excision were included. Greyscale ultrasound and SWE were performed in all women before biopsy. Final ultrasound assessments and SWE parameters (pattern classification and maximum elasticity) were recorded and compared with histopathology results. Patient and lesion factors in the 'true' and 'false' groups were compared. Of the 222 masses, 175 (78.8%) were benign, and 47 (21.2%) were malignant. False-positive rates of benign masses were significantly higher than false-negative rates of malignancy in SWE patterns, 36.6% versus 6.4% (P < 0.001). Among both benign and malignant masses, factors showing significance among false SWE features were lesion size, breast thickness and lesion depth (all P < 0.05). All 47 malignant breast masses had SWE images of good quality. False SWE features were seen significantly more often in benign masses. Lesion size, breast thickness and lesion depth have significance in producing false results, and this needs consideration in SWE image acquisition. • Shear-wave elastography (SWE) is widely used during breast imaging • At SWE, false-positive rates were significantly higher than false-negative rates • Larger size, breast thickness, depth and fair image quality influence false-positive SWE features • Smaller size, larger breast thickness and depth influence false-negative SWE features.

  6. Stimulus Probability Effects in Absolute Identification

    ERIC Educational Resources Information Center

    Kent, Christopher; Lamberts, Koen

    2016-01-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…

  7. Performance of fusion algorithms for computer-aided detection and classification of mines in very shallow water obtained from testing in navy Fleet Battle Exercise-Hotel 2000

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William; Kerfoot, Ian

    2001-10-01

    The performance of Computer Aided Detection/Computer Aided Classification (CAD/CAC) fusion algorithms on side-scan sonar images was evaluated using data taken at the Navy's Fleet Battle Exercise-Hotel held in Panama City, Florida, in August 2000. A 2-of-3 binary fusion algorithm is shown to provide robust performance. The algorithm accepts the classification decisions and associated contact locations from three different CAD/CAC algorithms, clusters the contacts based on Euclidean distance, and then declares a valid target when a clustered contact is declared by at least 2 of the 3 individual algorithms. This simple binary fusion provided a 96 percent probability of correct classification at a false alarm rate of 0.14 false alarms per image per side. This performance represents a 3.8:1 reduction in false alarms over the best performing single CAD/CAC algorithm, with no loss in probability of correct classification.
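
    A minimal sketch of the 2-of-3 voting logic described above (the clustering radius and contact format are assumptions for illustration; the original system's clustering details are not reproduced here):

        import numpy as np

        def fuse_2_of_3(detections, radius=25.0):
            """2-of-3 binary fusion sketch. detections: three arrays of (x, y)
            contact positions, one per CAD/CAC algorithm. A contact is declared
            a valid target when a contact from at least one other algorithm lies
            within radius (an invented distance). A real system would also merge
            each agreeing cluster into a single declaration."""
            arrays = [np.asarray(d, dtype=float).reshape(-1, 2) for d in detections]
            declared = []
            for i, pts in enumerate(arrays):
                others = [a for j, a in enumerate(arrays) if j != i and len(a)]
                for p in pts:
                    if any(np.linalg.norm(o - p, axis=1).min() <= radius for o in others):
                        declared.append(tuple(p))
            return declared

        # Illustrative contacts from three algorithms:
        print(fuse_2_of_3([[(100, 200)], [(110, 205), (400, 50)], [(500, 500)]]))
        # -> only the pair near (100, 200)/(110, 205) is declared a target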

  8. Option volatility and the acceleration Lagrangian

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Cao, Yang

    2014-01-01

    This paper develops a volatility formula for an option on an asset from an acceleration Lagrangian model, and the formula is calibrated with market data. The Black-Scholes model is a simpler case that has a velocity-dependent Lagrangian. The acceleration Lagrangian is defined, and the classical solution of the system in Euclidean time is obtained by choosing proper boundary conditions. The conditional probability distribution of the final position given the initial position is obtained from the transition amplitude. The volatility is the standard deviation of the conditional probability distribution. Using the conditional probability and the path integral method, the martingale condition is applied, and one of the parameters in the Lagrangian is fixed. The call option price is obtained using the conditional probability and the path integral method.

  9. Reducing false positives of microcalcification detection systems by removal of breast arterial calcifications.

    PubMed

    Mordang, Jan-Jurre; Gubern-Mérida, Albert; den Heeten, Gerard; Karssemeijer, Nico

    2016-04-01

    In the past decades, computer-aided detection (CADe) systems have been developed to aid screening radiologists in the detection of malignant microcalcifications. These systems are useful to avoid perceptual oversights and can increase the radiologists' detection rate. However, due to the high number of false positives marked by these CADe systems, they are not yet suitable as an independent reader. Breast arterial calcifications (BACs) are one of the most frequent false positives marked by CADe systems. In this study, a method is proposed for the elimination of BACs as positive findings; removal of these false positives will increase the performance of the CADe system in finding malignant microcalcifications. A multistage method is proposed for the removal of BAC findings. The first stage consists of microcalcification candidate selection, segmentation and grouping of the microcalcifications, and classification to remove obvious false positives. In the second stage, a case-based selection is applied to select cases that contain BACs. In the final stage, BACs are removed from the selected cases. The BACs removal stage consists of a GentleBoost classifier trained on microcalcification features describing their shape, topology, and texture. Additionally, novel features are introduced to discriminate BACs from other positive findings. The CADe system was evaluated with and without BACs removal. Both systems were applied to a validation set containing 1088 cases, of which 95 cases contained malignant microcalcifications. After bootstrapping, free-response receiver operating characteristic and receiver operating characteristic analyses were carried out. Performance between the two systems was compared at 0.98 and 0.95 specificity. At a specificity of 0.98, sensitivity increased from 37% to 52%; at a specificity of 0.95, it increased from 62% to 76%. Partial areas under the curve in the specificity range of 0.8-1.0 were significantly different between the system without BACs removal and the system with BACs removal: 0.129 ± 0.009 versus 0.144 ± 0.008 (p<0.05), respectively. The sensitivity at one false positive per 50 cases and at one false positive per 25 cases increased as well: 37% versus 51% (p<0.05) and 58% versus 67% (p<0.05), respectively. Moreover, the CADe system with BACs removal reduces the number of false positives per case by 29% on average; the sensitivity achieved at one false positive per 50 cases without BACs removal can be achieved at one false positive per 80 cases with BACs removal. By using dedicated algorithms to detect and remove breast arterial calcifications, the performance of CADe systems can be improved, in particular at false positive rates representative of operating points used in screening.

  10. Major Threat to Malaria Control Programs by Plasmodium falciparum Lacking Histidine-Rich Protein 2, Eritrea

    PubMed Central

    Berhane, Araia; Anderson, Karen; Mihreteab, Selam; Gresty, Karryn; Rogier, Eric; Mohamed, Salih; Hagos, Filmon; Embaye, Ghirmay; Chinorumba, Anderson; Zehaie, Assefash; Dowd, Simone; Waters, Norman C.; Gatton, Michelle L.; Udhayakumar, Venkatachalam; Cunningham, Jane

    2018-01-01

    False-negative results for Plasmodium falciparum histidine-rich protein (HRP) 2–based rapid diagnostic tests (RDTs) are increasing in Eritrea. We investigated HRP gene 2/3 (pfhrp2/pfhrp3) status in 50 infected patients at 2 hospitals. We showed that 80.8% (21/26) of patients at Ghindae Hospital and 41.7% (10/24) at Massawa Hospital were infected with pfhrp2-negative parasites and 92.3% (24/26) of patients at Ghindae Hospital and 70.8% (17/24) at Massawa Hospital were infected with pfhrp3-negative parasites. Parasite densities between pfhrp2-positive and pfhrp2-negative patients were comparable. All pfhrp2-negative samples had no detectable HRP2/3 antigen and showed negative results for HRP2-based RDTs. pfhrp2-negative parasites were genetically less diverse and formed 2 clusters with no close relationships to parasites from Peru. These parasites probably emerged independently by selection in Eritrea. High prevalence of pfhrp2-negative parasites caused a high rate of false-negative results for RDTs. Determining prevalence of pfhrp2-negative parasites is urgently needed in neighboring countries to assist case management policies. PMID:29460730

  11. Contour-Based Corner Detection and Classification by Using Mean Projection Transform

    PubMed Central

    Kahaki, Seyed Mostafa Mousavi; Nordin, Md Jan; Ashtari, Amir Hossein

    2014-01-01

    Image corner detection is a fundamental task in computer vision. Many applications require reliable detectors to accurately detect corner points, commonly achieved by using image contour information. The curvature definition is sensitive to local variation and edge aliasing, and available smoothing methods are not sufficient to address these problems properly. Hence, we propose Mean Projection Transform (MPT) as a corner classifier and parabolic fit approximation to form a robust detector. The first step is to extract corner candidates using MPT based on the integral properties of the local contours in both the horizontal and vertical directions. Then, an approximation of the parabolic fit is calculated to localize the candidate corner points. The proposed method presents fewer false-positive (FP) and false-negative (FN) points compared with recent standard corner detection techniques, especially in comparison with curvature scale space (CSS) methods. Moreover, a new evaluation metric, called accuracy of repeatability (AR), is introduced. AR combines repeatability and the localization error (Le) for finding the probability of correct detection in the target image. The output results exhibit better repeatability, localization, and AR for the detected points compared with the criteria in original and transformed images. PMID:24590354

  12. Contour-based corner detection and classification by using mean projection transform.

    PubMed

    Kahaki, Seyed Mostafa Mousavi; Nordin, Md Jan; Ashtari, Amir Hossein

    2014-02-28

    Image corner detection is a fundamental task in computer vision. Many applications require reliable detectors to accurately detect corner points, commonly achieved by using image contour information. The curvature definition is sensitive to local variation and edge aliasing, and available smoothing methods are not sufficient to address these problems properly. Hence, we propose Mean Projection Transform (MPT) as a corner classifier and parabolic fit approximation to form a robust detector. The first step is to extract corner candidates using MPT based on the integral properties of the local contours in both the horizontal and vertical directions. Then, an approximation of the parabolic fit is calculated to localize the candidate corner points. The proposed method presents fewer false-positive (FP) and false-negative (FN) points compared with recent standard corner detection techniques, especially in comparison with curvature scale space (CSS) methods. Moreover, a new evaluation metric, called accuracy of repeatability (AR), is introduced. AR combines repeatability and the localization error (Le) for finding the probability of correct detection in the target image. The output results exhibit better repeatability, localization, and AR for the detected points compared with the criteria in original and transformed images.

  13. Scaling beta-delayed neutron measurements to large detector areas

    NASA Astrophysics Data System (ADS)

    Sutanto, F.; Nattress, J.; Jovanovic, I.

    2017-08-01

    We explore the performance of a cargo screening system that consists of two large-sized composite scintillation detectors and a high-energy neutron interrogation source by modeling and simulation. The goal of the system is to measure β-delayed neutron emission from an illicit special nuclear material by use of active interrogation. This task is challenging because the β-delayed neutron yield is small in comparison with the yield of the prompt fission secondary products, β-delayed neutrons are emitted with relatively low energies, and high neutron and gamma backgrounds are typically present. Detectors used to measure delayed neutron emission must exhibit high intrinsic efficiency and cover a large solid angle, which also makes them sensitive to background neutron radiation. We present a case study where we attempt to detect the presence of 5 kg-scale quantities of 235U in a standard air-filled cargo container using 14 MeV neutrons as a probe. We find that by using a total measurement time of ~11.6 s and a dose equivalent of ~1.7 mrem, the presence of 235U can be detected with false positive and false negative probabilities that are both no larger than 0.1%.
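
    The quoted error probabilities can be reproduced in spirit with a simple Poisson counting model: alarm when the delayed-neutron count meets a threshold chosen so that both the false positive rate (background only) and the false negative rate (235U present) stay below 0.1%. The mean counts below are invented for illustration, not taken from the simulation:

        from scipy.stats import poisson

        def error_probs(threshold, mean_bkg, mean_sig_plus_bkg):
            """False positive and false negative probabilities for
            an alarm rule of counts >= threshold."""
            p_fp = poisson.sf(threshold - 1, mean_bkg)            # P(N >= t | background)
            p_fn = poisson.cdf(threshold - 1, mean_sig_plus_bkg)  # P(N <  t | 235U present)
            return p_fp, p_fn

        # Invented means: 20 background counts vs 80 counts with 235U present
        for t in range(35, 51, 5):
            print(t, error_probs(t, 20.0, 80.0))
        # Thresholds around 40-45 hold both error probabilities below 1e-3.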

  14. Major Threat to Malaria Control Programs by Plasmodium falciparum Lacking Histidine-Rich Protein 2, Eritrea.

    PubMed

    Berhane, Araia; Anderson, Karen; Mihreteab, Selam; Gresty, Karryn; Rogier, Eric; Mohamed, Salih; Hagos, Filmon; Embaye, Ghirmay; Chinorumba, Anderson; Zehaie, Assefash; Dowd, Simone; Waters, Norman C; Gatton, Michelle L; Udhayakumar, Venkatachalam; Cheng, Qin; Cunningham, Jane

    2018-03-01

    False-negative results for Plasmodium falciparum histidine-rich protein (HRP) 2-based rapid diagnostic tests (RDTs) are increasing in Eritrea. We investigated HRP gene 2/3 (pfhrp2/pfhrp3) status in 50 infected patients at 2 hospitals. We showed that 80.8% (21/26) of patients at Ghindae Hospital and 41.7% (10/24) at Massawa Hospital were infected with pfhrp2-negative parasites and 92.3% (24/26) of patients at Ghindae Hospital and 70.8% (17/24) at Massawa Hospital were infected with pfhrp3-negative parasites. Parasite densities between pfhrp2-positive and pfhrp2-negative patients were comparable. All pfhrp2-negative samples had no detectable HRP2/3 antigen and showed negative results for HRP2-based RDTs. pfhrp2-negative parasites were genetically less diverse and formed 2 clusters with no close relationships to parasites from Peru. These parasites probably emerged independently by selection in Eritrea. High prevalence of pfhrp2-negative parasites caused a high rate of false-negative results for RDTs. Determining prevalence of pfhrp2-negative parasites is urgently needed in neighboring countries to assist case management policies.

  15. Knot probability of polygons subjected to a force: a Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Janse van Rensburg, E. J.; Orlandini, E.; Tesi, M. C.; Whittington, S. G.

    2008-01-01

    We use Monte Carlo methods to study the knot probability of lattice polygons on the cubic lattice in the presence of an external force f. The force is coupled to the span of the polygons along a lattice direction, say the z-direction. If the force is negative polygons are squeezed (the compressive regime), while positive forces tend to stretch the polygons along the z-direction (the tensile regime). For sufficiently large positive forces we verify that the Pincus scaling law in the force-extension curve holds. At a fixed number of edges n the knot probability is a decreasing function of the force. For a fixed force the knot probability approaches unity as 1 - exp(-α0(f)n + o(n)), where α0(f) is positive and a decreasing function of f. We also examine the average of the absolute value of the writhe and we verify the square root growth law (known for f = 0) for all values of f.

  16. Effects of depressive disorder on false memory for emotional information.

    PubMed

    Yeh, Zai-Ting; Hua, Mau-Sun

    2009-01-01

    This study used a false memory paradigm to explore (1) whether depressed patients reveal more false memories and (2) whether more negative than positive false recognition exists in subjects with depressive disorders. Thirty-two patients suffering from a major depressive episode (DSM-IV criteria) and 30 age- and education-matched normal control subjects participated in this study. After the presentation of a list of positive, negative, and neutral association items in the learning phase, subjects were asked to give a yes/no response in the recognition phase. They were also asked to rate 81 recognition items with emotional valence scores. The results revealed more negative false memories in the clinical depression group than in the normal control group; however, we did not find more negative false memories than positive ones in patients. When compared with the normal group, a more conservative response criterion for positive items was evident in the patient group. It was also found that the subjects in the depression group perceived the positive items as less positive than did the normal group. On the basis of the present results, it is suggested that depressed subjects judge emotional information with criteria different from normal individuals, and that patients' emotional memory intensity is attenuated by their mood.

  17. Commercial radioimmunoassay for beta subunit of human chorionic gonadotropin: falsely positive determinations due to elevated serum luteinizing hormone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fowler, J.E. Jr.; Platoff, G.E.; Kubrock, C.A.

    1982-01-01

    Among 17 men who had received seemingly curative treatment for unilateral non-seminomatous germ cell tumors of the testis and who had consistently normal serum human chorionic gonadotropin (HCG) levels at a reference laboratory, 7 (41%) had at least one falsely positive commercial serum HCG determination. To investigate the cause of these falsely positive determinations the authors measured the cross reactivity of luteinizing hormone (LH) and follicle stimulating hormone (FSH) standards in the commercial HCG assay, and studied the relationships between commercial HCG levels and serum LH levels, serum FSH levels and gonadal status in men with and without normal gonadal function. The falsely positive HCG determinations appeared to be due to elevated serum LH levels and cross reactivity of LH in the commercial HCG assay because: 1) there was substantial cross reactivity of the LH standards in the commercial assay, 2) the serum LH was elevated in four of six men with solitary testes, 3) there was a striking correlation between elevated serum LH levels and falsely elevated commercial HCG levels in ten men with solitary or absent testes, and 4) there were no falsely positive HCG determinations in 13 normal men but there were falsely positive HCG determinations in seven of ten anorchid men.

  18. The efficacy and cost of alternative strategies for systematic screening for type 2 diabetes in the U.S. population 45-74 years of age.

    PubMed

    Johnson, Susan L; Tabaei, Bahman P; Herman, William H

    2005-02-01

    To simulate the outcomes of alternative strategies for screening the U.S. population 45-74 years of age for type 2 diabetes. We simulated screening with random plasma glucose (RPG) and cut points of 100, 130, and 160 mg/dl and a multivariate equation including RPG and other variables. Over 15 years, we simulated screening at intervals of 1, 3, and 5 years. All positive screening tests were followed by a diagnostic fasting plasma glucose or an oral glucose tolerance test. Outcomes include the numbers of false-negative, true-positive, and false-positive screening tests and the direct and indirect costs. At year 15, screening every 3 years with an RPG cut point of 100 mg/dl left 0.2 million false negatives, an RPG of 130 mg/dl or the equation left 1.3 million false negatives, and an RPG of 160 mg/dl left 2.8 million false negatives. Over 15 years, the most sensitive screening strategy yielded 4.5 million more true positives and 476 million more false positives than the most specific strategy. Strategies using an RPG cut point of 130 mg/dl or the multivariate equation every 3 years identified 17.3 million true positives; however, the equation identified fewer false positives. The total cost of the most sensitive screening strategy was $42.7 billion and that of the most specific strategy was $6.9 billion. Screening for type 2 diabetes every 3 years with an RPG cut point of 130 mg/dl or the multivariate equation provides good yield and minimizes false-positive screening tests and costs.
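
    The accounting behind these yields is the usual sensitivity/specificity arithmetic applied to each screening round. A minimal single-round sketch (the prevalence and test characteristics below are invented for illustration, not the study's calibrated values):

        def screening_yield(n_screened, prevalence, sensitivity, specificity):
            """Expected outcome counts for one round of screening."""
            diseased = n_screened * prevalence
            healthy = n_screened - diseased
            true_pos = diseased * sensitivity
            return {"true_pos": true_pos,
                    "false_neg": diseased - true_pos,
                    "false_pos": healthy * (1.0 - specificity)}

        # Invented example: 70 million adults, 5% undiagnosed diabetes,
        # a sensitive cut point (0.95/0.50) versus a specific one (0.60/0.95):
        print(screening_yield(70e6, 0.05, sensitivity=0.95, specificity=0.50))
        print(screening_yield(70e6, 0.05, sensitivity=0.60, specificity=0.95))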

  19. Breast cancer detection risk in screening mammography after a false-positive result.

    PubMed

    Castells, X; Román, M; Romero, A; Blanch, J; Zubizarreta, R; Ascunce, N; Salas, D; Burón, A; Sala, M

    2013-02-01

    False-positives are a major concern in breast cancer screening. However, false-positives have been little evaluated as a prognostic factor for cancer detection. Our aim was to evaluate the association of false-positive results with the cancer detection risk in subsequent screening participations over a 17-year period. This is a retrospective cohort study of 762,506 women aged 45-69 years, with at least two screening participations, who underwent 2,594,146 screening mammograms from 1990 to 2006. Multilevel discrete-time hazard models were used to estimate the adjusted odds ratios (OR) of breast cancer detection in subsequent screening participations in women with false-positive results. False-positives involving a fine-needle aspiration cytology or a biopsy had a higher cancer detection risk than those involving additional imaging procedures alone (OR = 2.69; 95%CI: 2.28-3.16 and OR = 1.81; 95%CI: 1.70-1.94, respectively). The risk of cancer detection increased substantially if women with cytology or biopsy had a familial history of breast cancer (OR = 4.64; 95%CI: 3.23-6.66). Other factors associated with an increased cancer detection risk were age 65-69 years (OR = 1.84; 95%CI: 1.67-2.03), non-attendance at the previous screening invitation (OR = 1.26; 95%CI: 1.11-1.43), and having undergone a previous benign biopsy outside the screening program (OR = 1.24; 95%CI: 1.13-1.35). Women with a false-positive test have an increased risk of cancer detection in subsequent screening participations, especially those with a false-positive result involving cytology or biopsy. Understanding the factors behind this association could provide valuable information to increase the effectiveness of breast cancer screening. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Psychological distress in U.S. women who have experienced false-positive mammograms.

    PubMed

    Jatoi, Ismail; Zhu, Kangmin; Shah, Mona; Lawrence, William

    2006-11-01

    In the United States, approximately 10.7% of all screening mammograms lead to a false-positive result, but the overall impact of false-positives on psychological well-being is poorly understood. Data were analyzed from the 2000 U.S. National Health Interview Survey (NHIS), the most recent national survey that included a cancer control module. Study subjects were 9,755 women who had ever had a mammogram, of whom 1,450 had experienced a false-positive result. Psychological distress was assessed using the validated K6 questionnaire, and logistic regression was used to discern any association with previous false-positive mammograms. In a multivariate analysis, women who had indicated a previous false-positive mammogram were more likely to report feeling sad (OR = 1.18, 95% CI, 1.03-1.35), restless (OR = 1.23, 95% CI, 1.08-1.40), worthless (OR = 1.27, 95% CI, 1.04-1.54), and finding that everything was an effort (OR = 1.27, 95% CI, 1.10-1.47). These women were also more likely to have seen a mental health professional in the 12 months preceding the survey (OR = 1.28, 95% CI, 1.03-1.58) and had a higher composite score on all items of the K6 scale (P < 0.0001), a reflection of increased psychological distress. Analyses by age and race revealed that, among women who had experienced false-positives, younger women were more likely to feel that everything was an effort, and blacks were more likely to feel restless. In a random sampling of the U.S. population, women who had previously experienced false-positive mammograms were more likely to report symptoms of anxiety and depression.

  1. Updating: Learning versus Supposing

    ERIC Educational Resources Information Center

    Zhao, Jiaying; Crupi, Vincenzo; Tentori, Katya; Fitelson, Branden; Osherson, Daniel

    2012-01-01

    Bayesian orthodoxy posits a tight relationship between conditional probability and updating. Namely, the probability of an event "A" after learning "B" should equal the conditional probability of "A" given "B" prior to learning "B". We examine whether ordinary judgment conforms to the orthodox view. In three experiments we found substantial…

  2. The Inverse Contagion Problem (ICP) vs. Predicting site contagion in real time, when network links are not observable

    NASA Astrophysics Data System (ADS)

    Mushkin, I.; Solomon, S.

    2017-10-01

    We study the inverse contagion problem (ICP). As opposed to the direct contagion problem, in which the network structure is known and the question is when each node will be contaminated, in the inverse problem the links of the network are unknown but a sequence of contagion histories (the times when each node was contaminated) is observed. We consider two versions of the ICP: The strong problem (SICP), which is the reconstruction of the network and has been studied before, and the weak problem (WICP), which requires "only" the prediction (at each time step) of the nodes that will be contaminated at the next time step (this is often the real-life situation, in which a contagion is observed and predictions are made in real time). Moreover, our focus is on analyzing the increasing accuracy of the solution, as a function of the number of contagion histories already observed. For simplicity, we discuss the simplest (deterministic and synchronous) contagion dynamics and the simplest solution algorithm, which we have applied to different network types. The main result of this paper is that the complex problem of the convergence of the ICP for a network can be reduced to an individual property of pairs of nodes: the "false link difficulty". By definition, given a pair of unlinked nodes i and j, the difficulty of the false link (i,j) is the probability that in a random contagion history, the nodes i and j are not contaminated at the same time step (or at consecutive time steps). In other words, the "false link difficulty" of a non-existing network link is the probability that the observations during a random contagion history would not rule out that link. This probability is relatively straightforward to calculate, and in most instances relies only on the relative positions of the two nodes (i,j) and not on the entire network structure. We have observed the distribution of false link difficulty for various network types, estimated it theoretically and confronted it (successfully) with the numerical simulations. Based on it, we estimated analytically the convergence of the ICP solution (as a function of the number of contagion histories observed), and found it to be in perfect agreement with simulation results. Finally, the most important insight we obtained is that SICP and WICP have quite different properties: if one is interested only in the operational aspect of predicting how contagion will spread, the links which are most difficult to decide about are the least influential on contagion dynamics. In other words, the parts of the network which are harder to reconstruct are also least important for predicting the contagion dynamics, up to the point where a (large) constant number of false links in the network (i.e. non-convergence of the network reconstruction procedure) implies a zero rate of node contagion prediction errors (perfect convergence of the WICP). Thus, the contagion prediction problem (WICP) is very different in difficulty from the network reconstruction problem (SICP), in as far as links which are difficult to reconstruct are quite harmless in terms of contagion prediction capability (WICP).
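
    The paper's central quantity can be estimated directly from observed histories. The sketch below follows the abstract's "would not rule out" reading: the non-link (i, j) is ruled out by a history only when both nodes are contaminated and their contamination times differ by more than one synchronous step (this reading, and the history format, are assumptions for illustration):

        def false_link_difficulty(histories, i, j):
            """Fraction of contagion histories that fail to rule out the
            non-link (i, j). histories: list of {node: contamination_time}
            dicts; nodes never contaminated in a run are simply absent."""
            not_ruled_out = sum(
                1 for times in histories
                if i not in times or j not in times or abs(times[i] - times[j]) <= 1
            )
            return not_ruled_out / len(histories)

        # Illustrative: three observed histories for nodes 'a' and 'b'
        hists = [{'a': 2, 'b': 3}, {'a': 1}, {'a': 0, 'b': 4}]
        print(false_link_difficulty(hists, 'a', 'b'))  # 2/3: only the last run rules it out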

  3. How to limit false positives in environmental DNA and metabarcoding?

    PubMed

    Ficetola, Gentile Francesco; Taberlet, Pierre; Coissac, Eric

    2016-05-01

    Environmental DNA (eDNA) and metabarcoding are boosting our ability to acquire data on species distribution in a variety of ecosystems. Nevertheless, like most sampling approaches, eDNA is not perfect. It can fail to detect species that are actually present, and even false positives are possible: a species may be apparently detected in areas where it is actually absent. Controlling false positives remains a main challenge for eDNA analyses: in this issue of Molecular Ecology Resources, Lahoz-Monfort et al. test the performance of multiple statistical modelling approaches to estimate the rate of detection and false positives from eDNA data. Here, we discuss the importance of controlling for false detection from the early steps of eDNA analyses (laboratory, bioinformatics) to improve the quality of results and allow an efficient use of the site occupancy-detection modelling (SODM) framework for limiting false presences in eDNA analysis. © 2016 John Wiley & Sons Ltd.

  4. Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2010-01-01

    When facing a conjunction between space objects, decision makers must choose whether or not to maneuver for collision avoidance. We apply a well-known decision procedure, the sequential probability ratio test, to this problem. We propose two approaches to the problem solution, one based on a frequentist method, and the other on a Bayesian method. The frequentist method does not require any prior knowledge concerning the conjunction, while the Bayesian method assumes knowledge of prior probability densities. Our results show that both methods achieve the desired missed detection rates, but the frequentist method's false alarm performance is inferior to the Bayesian method's.

  5. Posterior error probability in the Mu-2 Sequential Ranging System

    NASA Technical Reports Server (NTRS)

    Coyle, C. W.

    1981-01-01

    An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and falsely indicated errors on 0.2% of the acquisitions.

  6. Unusual positional effects on flower sex in an andromonoecious tree: Resource competition, architectural constraints, or inhibition by the apical flower?

    PubMed

    Granado-Yela, Carlos; Balaguer, Luis; Cayuela, Luis; Méndez, Marcos

    2017-04-01

    Two non-mutually exclusive mechanisms, competition for resources and architectural constraints, have been proposed to explain the proximal-to-distal decline in flower size, mass, and/or femaleness in indeterminate, elongate inflorescences. Whether these mechanisms also explain unusual positional effects, such as distal-to-proximal declines of floral performance in determinate inflorescences, is understudied. We tested the relative influence of these mechanisms in the andromonoecious wild olive tree, where hermaphroditic flowers occur mainly in the apical and most proximal positions of determinate inflorescences. We experimentally increased the availability of resources for the inflorescences by removing half of the inflorescences per twig, or reduced resource availability by removing leaves. We also removed the apical flower to test its inhibitory effect on subapical flowers. The apical flower had the highest probability of being hermaphroditic. Further down, however, the probability of finding a hermaphroditic flower decreased from the base to the tip of the inflorescences. An experimental increase of resources increased the probability of finding hermaphroditic flowers at each position, and vice versa. Removal of the apical flower increased the probability of producing hermaphroditic flowers in proximal positions but not in subapical positions. These results indicate an interaction between resource competition and architectural constraints in shaping the arrangement of hermaphroditic and male flowers within the inflorescences of the wild olive tree. Subapical flowers did not seem to be hormonally suppressed by apical flowers. The study of these unusual positional effects is needed for a general understanding of the functional implications of inflorescence architecture. © 2017 Botanical Society of America.

  7. Evaluation of MTANNs for eliminating false-positive with different computer aided pulmonary nodules detection software.

    PubMed

    Shi, Zhenghao; Ma, Jiejue; Feng, Yaning; He, Lifeng; Suzuki, Kenji

    2015-11-01

    The massive training artificial neural network (MTANN) is a promising tool that has been applied in recent years to eliminate false positives in thoracic CT. To evaluate whether this method can eliminate the false positives of different CAD schemes, especially commercial CAD software, this paper evaluates its performance in eliminating false positives produced by three different versions of commercial CAD software for lung nodule detection in chest radiographs. Experimental results demonstrate that the approach is useful in reducing false positives for different computer-aided lung nodule detection software in chest radiographs.

  8. Ellipticity angle of electromagnetic signals and its use for non-energetic detection optimal by the Neyman-Pearson criterion

    NASA Astrophysics Data System (ADS)

    Gromov, V. A.; Sharygin, G. S.; Mironov, M. V.

    2012-08-01

    An interval method of radar signal detection and selection based on a non-energetic polarization parameter, the ellipticity angle, is suggested. The examined method is optimal by the Neyman-Pearson criterion. The probability of correct detection for a preset probability of false alarm is calculated for different signal/noise ratios. Recommendations for optimization of the given method are provided.
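
    For a preset false alarm probability, the Neyman-Pearson threshold and the resulting detection probability follow directly from the two hypothesis distributions. A minimal Gaussian sketch (the paper's actual statistic is the ellipticity angle, whose distributions are not reproduced here):

        from scipy.stats import norm

        def neyman_pearson_gaussian(snr, p_fa):
            """Detection probability at a preset false alarm probability for a
            Gaussian statistic: N(0,1) under noise, N(snr,1) under signal."""
            threshold = norm.isf(p_fa)      # fixes P(stat > t | noise) = p_fa
            p_d = norm.sf(threshold - snr)  # P(stat > t | signal present)
            return threshold, p_d

        for snr in (1.0, 3.0, 5.0):
            print(snr, neyman_pearson_gaussian(snr, p_fa=1e-3))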

  9. QPCR detection of Mucorales DNA in bronchoalveolar lavage fluid to diagnose pulmonary mucormycosis.

    PubMed

    Scherer, Emeline E; Iriart, Xavier; Bellanger, Anne Pauline; Dupont, Damien; Guitard, Juliette; Gabriel, Frederic; Cassaing, Sophie; Charpentier, Eléna; Guenounou, Sarah; Cornet, Murielle; Botterel, Françoise; Rocchi, Steffi; Berceanu, Ana; Millon, Laurence

    2018-06-06

    Early diagnosis and treatment are essential to improving the outcome of mucormycosis. The aim of this retrospective study was to assess the contribution of quantitative PCR detection of Mucorales DNA in bronchoalveolar lavage fluids for early diagnosis of pulmonary mucormycosis. Bronchoalveolar lavage fluids (n=450) from 374 patients with pneumonia and immunosuppressive conditions were analyzed using a combination of 3 quantitative PCR assays targeting the main genera involved in mucormycosis in France (Rhizomucor, Mucor/Rhizopus, Lichtheimia). Among these 374 patients, 24 had at least one bronchoalveolar lavage with a positive PCR; 23/24 patients had radiological criteria for invasive fungal infections according to consensus criteria: 10 patients with probable or proven mucormycosis, and 13 additional patients with other invasive fungal infections (4 probable aspergillosis, 1 proven fusariosis, and 8 possible invasive fungal infections). Only 2/24 patients with a positive PCR on bronchoalveolar lavage had a positive Mucorales culture. PCR was also positive on serum in 17/24 patients, and in most cases PCR first became positive on serum (15/17). However, a positive PCR on bronchoalveolar lavage was the earliest and/or the only biological test revealing mucormycosis in 4 patients with a final diagnosis of probable or proven mucormycosis, 3 patients with probable aspergillosis, and one patient with a possible invasive fungal infection. Mucorales PCR performed on bronchoalveolar lavage could provide additional arguments for earlier administration of Mucorales-directed antifungal therapy, thus improving the outcome of lung mucormycosis. Copyright © 2018 American Society for Microbiology.

  10. A new method for ultrasound detection of interfacial position in gas-liquid two-phase flow.

    PubMed

    Coutinho, Fábio Rizental; Ofuchi, César Yutaka; de Arruda, Lúcia Valéria Ramos; Neves, Flávio; Morales, Rigoberto E M

    2014-05-22

    Ultrasonic measurement techniques for velocity estimation are currently widely used in fluid flow studies and applications. An accurate determination of interfacial position in gas-liquid two-phase flows is still an open problem. The quality of this information directly reflects on the accuracy of void fraction measurement, and it provides a means of discriminating velocity information of both phases. The algorithm known as Velocity Matched Spectrum (VM Spectrum) is a velocity estimator that stands out from other methods by returning a spectrum of velocities for each interrogated volume sample. Interface detection of free-rising bubbles in quiescent liquid presents some difficulties due to abrupt changes in interface inclination. In this work a method based on velocity spectrum curve shape is used to generate a spatial-temporal mapping, which, after spatial filtering, yields an accurate contour of the air-water interface. It is shown that the proposed technique yields an RMS error between 1.71 and 3.39 and a probability of detection failure and false detection between 0.89% and 11.9% in determining the spatial-temporal gas-liquid interface position in the flow of free-rising bubbles in stagnant liquid. This result is valid both for a free path and for the transducer emitting through a metallic plate or a Plexiglas pipe.

  11. A New Method for Ultrasound Detection of Interfacial Position in Gas-Liquid Two-Phase Flow

    PubMed Central

    Coutinho, Fábio Rizental; Ofuchi, César Yutaka; de Arruda, Lúcia Valéria Ramos; Neves, Flávio, Jr.; Morales, Rigoberto E. M.

    2014-01-01

    Ultrasonic measurement techniques for velocity estimation are currently widely used in fluid flow studies and applications. An accurate determination of interfacial position in gas-liquid two-phase flows is still an open problem. The quality of this information directly reflects on the accuracy of void fraction measurement, and it provides a means of discriminating velocity information of both phases. The algorithm known as Velocity Matched Spectrum (VM Spectrum) is a velocity estimator that stands out from other methods by returning a spectrum of velocities for each interrogated volume sample. Interface detection of free-rising bubbles in quiescent liquid presents some difficulties due to abrupt changes in interface inclination. In this work a method based on velocity spectrum curve shape is used to generate a spatial-temporal mapping, which, after spatial filtering, yields an accurate contour of the air-water interface. It is shown that the proposed technique yields an RMS error between 1.71 and 3.39 and a probability of detection failure and false detection between 0.89% and 11.9% in determining the spatial-temporal gas-liquid interface position in the flow of free-rising bubbles in stagnant liquid. This result is valid both for a free path and for the transducer emitting through a metallic plate or a Plexiglas pipe. PMID:24858961

  12. The search for human pheromones: the lost decades and the necessity of returning to first principles

    PubMed Central

    Wyatt, Tristram D.

    2015-01-01

    As humans are mammals, it is possible, perhaps even probable, that we have pheromones. However, there is no robust bioassay-led evidence for the widely published claims that four steroid molecules are human pheromones: androstenone, androstenol, androstadienone and estratetraenol. In the absence of sound reasons to test the molecules, positive results in studies need to be treated with scepticism as these are highly likely to be false positives. Common problems include small sample sizes, an overestimate of effect size (as no effect can be expected), positive publication bias and lack of replication. Instead, if we are to find human pheromones, we need to treat ourselves as if we were a newly discovered mammal, and use the rigorous methods already proven successful in pheromone research on other species. Establishing a pheromone relies on demonstration of an odour-mediated behavioural or physiological response, identification and synthesis of the bioactive molecule(s), followed by bioassay confirmation of activity. Likely sources include our sebaceous glands. Comparison of secretions from adult and pre-pubertal humans may highlight potential molecules involved in sexual behaviour. One of the most promising human pheromone leads is a nipple secretion from the areola glands produced by all lactating mothers, which stimulates suckling by any baby, not just their own. PMID:25740891

  13. X1908+075: An X-Ray Binary with a 4.4 Day Period

    NASA Astrophysics Data System (ADS)

    Wen, Linqing; Remillard, Ronald A.; Bradt, Hale V.

    2000-04-01

    X1908+075 is an optically unidentified and highly absorbed X-ray source that appeared in early surveys such as Uhuru, OSO 7, Ariel 5, HEAO-1, and the EXOSAT Galactic Plane Survey. These surveys measured a source intensity in the range 2-12 mcrab at 2-10 keV, and the position was localized to ~0.5°. We use the Rossi X-Ray Timing Explorer (RXTE) All-Sky Monitor (ASM) to confirm our expectation that a particular Einstein/IPC detection (1E 1908.4+0730) provides the correct position for X1908+075. The analysis of the coded mask shadows from the ASM for the position of 1E 1908.4+0730 yields a persistent intensity of ~8 mcrab (1.5-12 keV) over a 3 yr interval beginning in 1996 February. Furthermore, we detect a period of 4.400 ± 0.001 days with a false-alarm probability less than 10^-7. The folded light curve is roughly sinusoidal, with an amplitude that is 26% of the mean flux. The X-ray period may be attributed to the scattering and absorption of X-rays through a stellar wind combined with the orbital motion in a binary system. We suggest that X1908+075 is an X-ray binary with a high-mass companion star.
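
    The quoted false-alarm probability expresses the chance that pure noise would produce a periodogram peak as strong as the one observed. As a minimal sketch of how such an estimate is commonly obtained, here is a Lomb-Scargle period search on synthetic data using astropy; the actual ASM analysis may have used a different period-search method, and all numbers below are invented.

        # Hypothetical illustration of a period search with a false-alarm probability.
        import numpy as np
        from astropy.timeseries import LombScargle

        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(0, 1000, 2000))            # observation epochs (days)
        flux = 8 + 2 * np.sin(2 * np.pi * t / 4.4) + rng.normal(0, 1, t.size)

        ls = LombScargle(t, flux)
        freq, power = ls.autopower(maximum_frequency=1.0)  # cycles per day
        best = power.argmax()
        print("best period (d):", 1 / freq[best])
        print("false-alarm probability:", ls.false_alarm_probability(power[best]))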

  14. Detection of cat-eye effect echo based on unit APD

    NASA Astrophysics Data System (ADS)

    Wu, Dong-Sheng; Zhang, Peng; Hu, Wen-Gang; Ying, Jia-Ju; Liu, Jie

    2016-10-01

    The cat-eye effect echo of an optical system can be detected with a CCD, but the detection range is limited to several kilometers. To achieve long-range or even ultra-long-range detection, an APD should be selected as the detector because of its high sensitivity. A detection system for the cat-eye effect echo based on a unit APD is designed in this paper. The implementation scheme and key technology of the detection system are presented. The detection performances of the system, including detection range, detection probability, and false alarm probability, are modeled. Based on the model, the performances of the detection system are analyzed using typical parameters. The results of numerical calculation show that, when the echo signal-to-noise ratio is greater than six, the detection probability is greater than 99.9% and the false alarm probability is less than 0.1% within a 20 km detection range. To verify the detection performance, we built an experimental platform according to the design scheme and carried out field experiments. The experimental results agree well with the numerical calculations, which proves that the detection system based on the unit APD is feasible for remote detection of the cat-eye effect echo.
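
    A minimal sketch of the standard Gaussian threshold-detection model often used for this kind of performance analysis (the paper's exact model is not given in the abstract): set the threshold from the desired false alarm probability, then evaluate the detection probability at the stated signal-to-noise ratio. With SNR = 6 and a 0.1% false alarm probability, the result lands close to the quoted 99.9%.

        from scipy.stats import norm

        snr = 6.0                       # echo amplitude in units of noise std
        p_fa = 1e-3                     # desired false alarm probability
        threshold = norm.isf(p_fa)      # threshold in noise-std units (~3.09)
        p_d = norm.sf(threshold - snr)  # probability the echo crosses the threshold
        print(f"threshold = {threshold:.2f} sigma, Pd = {p_d:.4f}")  # Pd ~ 0.998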

  15. Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F Landis

    2014-01-01

    This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.
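
    For readers unfamiliar with the mechanics, a generic Wald SPRT works as sketched below: accumulate log-likelihood ratios and stop when the sum crosses thresholds derived from the targeted false alarm rate (alpha) and missed detection rate (beta). This is a textbook sketch, not the paper's compound-hypothesis variant, and the example stream is invented.

        import math

        def sprt(llr_stream, alpha=0.01, beta=0.01):
            upper = math.log((1 - beta) / alpha)   # cross above: accept H1 (maneuver)
            lower = math.log(beta / (1 - alpha))   # cross below: accept H0 (no action)
            s = 0.0
            for llr in llr_stream:
                s += llr
                if s >= upper:
                    return "H1"
                if s <= lower:
                    return "H0"
            return "undecided"

        print(sprt([0.8, 1.2, 0.9, 1.5, 0.7]))     # -> "H1"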

  16. Computerized mass detection in whole breast ultrasound images: reduction of false positives using bilateral subtraction technique

    NASA Astrophysics Data System (ADS)

    Ikedo, Yuji; Fukuoka, Daisuke; Hara, Takeshi; Fujita, Hiroshi; Takada, Etsuo; Endo, Tokiko; Morita, Takako

    2007-03-01

    The comparison of left and right mammograms is a common technique used by radiologists for the detection and diagnosis of masses. In mammography, computer-aided detection (CAD) schemes using bilateral subtraction technique have been reported. However, in breast ultrasonography, there are no reports on CAD schemes using comparison of left and right breasts. In this study, we propose a scheme of false positive reduction based on bilateral subtraction technique in whole breast ultrasound images. Mass candidate regions are detected by using the information of edge directions. Bilateral breast images are registered with reference to the nipple positions and skin lines. A false positive region is detected based on a comparison of the average gray values of a mass candidate region and a region with the same position and same size as the candidate region in the contralateral breast. In evaluating the effectiveness of the false positive reduction method, three normal and three abnormal bilateral pairs of whole breast images were employed. These abnormal breasts included six masses larger than 5 mm in diameter. The sensitivity was 83% (5/6) with 13.8 (165/12) false positives per breast before applying the proposed reduction method. By applying the method, false positives were reduced to 4.5 (54/12) per breast without removing a true positive region. This preliminary study indicates that the bilateral subtraction technique is effective for improving the performance of a CAD scheme in whole breast ultrasound images.
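
    The decision rule in the comparison step can be sketched in a few lines: after registration, a candidate is dismissed as a false positive when its mean gray value is close to that of the same region in the contralateral breast. The region coordinates, the margin, and the helper name is_false_positive are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def is_false_positive(left_img, right_img, y, x, size, margin=10.0):
            """Compare a candidate patch with the registered contralateral patch."""
            cand = left_img[y:y + size, x:x + size].mean()
            mirror = right_img[y:y + size, x:x + size].mean()
            return cand - mirror < margin  # similar brightness -> likely normal tissue

        rng = np.random.default_rng(1)
        left = rng.normal(100, 5, (256, 256))
        right = rng.normal(100, 5, (256, 256))
        left[80:112, 80:112] += 30       # simulated mass-like bright region
        print(is_false_positive(left, right, 80, 80, 32))  # False: kept as a candidate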

  17. A Methodology for Determining Statistical Performance Compliance for Airborne Doppler Radar with Forward-Looking Turbulence Detection Capability

    NASA Technical Reports Server (NTRS)

    Bowles, Roland L.; Buck, Bill K.

    2009-01-01

    The objective of the research developed and presented in this document was to statistically assess turbulence hazard detection performance employing airborne pulse Doppler radar systems. The FAA certification methodology for forward-looking airborne turbulence radars will require estimating the probabilities of missed and false hazard indications under operational conditions. Analytical approaches must be used due to the near impossibility of obtaining sufficient statistics experimentally. This report describes an end-to-end analytical technique for estimating these probabilities for Enhanced Turbulence (E-Turb) Radar systems under noise-limited conditions, for a variety of aircraft types, as defined in FAA TSO-C134. This technique provides one means, but not the only means, by which an applicant can demonstrate compliance with the FAA-directed ATDS Working Group performance requirements. Turbulence hazard algorithms were developed that derived predictive estimates of aircraft hazards from basic radar observables. These algorithms were designed to prevent false turbulence indications while accurately predicting areas of elevated turbulence risk to aircraft, passengers, and crew; they were successfully flight tested on a NASA B757-200 and a Delta Air Lines B737-800. Application of this methodology for calculating the probabilities of missed and false hazard indications, taking into account the effects of the various algorithms used, is demonstrated for representative transport aircraft and radar performance characteristics.

  18. Feedback Valence Affects Auditory Perceptual Learning Independently of Feedback Probability

    PubMed Central

    Amitay, Sygal; Moore, David R.; Molloy, Katharine; Halliday, Lorna F.

    2015-01-01

    Previous studies have suggested that negative feedback is more effective in driving learning than positive feedback. We investigated the effect on learning of providing varying amounts of negative and positive feedback while listeners attempted to discriminate between three identical tones; an impossible task that nevertheless produces robust learning. Four feedback conditions were compared during training: 90% positive feedback or 10% negative feedback informed the participants that they were doing equally well, while 10% positive or 90% negative feedback informed them they were doing equally badly. In all conditions the feedback was random in relation to the listeners’ responses (because the task was to discriminate three identical tones), yet both the valence (negative vs. positive) and the probability of feedback (10% vs. 90%) affected learning. Feedback that informed listeners they were doing badly resulted in better post-training performance than feedback that informed them they were doing well, independent of valence. In addition, positive feedback during training resulted in better post-training performance than negative feedback, but only positive feedback indicating listeners were doing badly on the task resulted in learning. As we have previously speculated, feedback that better reflected the difficulty of the task was more effective in driving learning than feedback that suggested performance was better than it should have been given perceived task difficulty. But contrary to expectations, positive feedback was more effective than negative feedback in driving learning. Feedback thus had two separable effects on learning: feedback valence affected motivation on a subjectively difficult task, and learning occurred only when feedback probability reflected the subjective difficulty. To optimize learning, training programs need to take into consideration both feedback valence and probability. PMID:25946173

  19. Evaluation of fecal elastase and serum cholecystokinin in dogs with a false positive fecal elastase test.

    PubMed

    Steiner, J M; Rehfeld, J F; Pantchev, N

    2010-01-01

    An assay for the measurement of pancreatic elastase in dog feces has been introduced. The goal of this study was to evaluate the rate of false-positive fecal elastase test results in dogs with suspected exocrine pancreatic insufficiency (EPI) and to assess serum cholecystokinin (CCK) concentrations in dogs with a false-positive fecal elastase test result. Twenty-six paired fecal and serum samples from dogs suspected of EPI, submitted to a commercial laboratory (Vet Med Labor) for analysis, were included in this prospective study. Serum trypsin-like immunoreactivity (TLI) was measured in 26 dogs with a decreased fecal elastase concentration of <10 microg/g feces. Serum CCK concentrations were measured in 21 of these dogs. Of 26 dogs with a decreased fecal elastase concentration, 6 (23%) had serum TLI concentrations within or above the reference range. Serum CCK concentrations were significantly higher in dogs with a true-positive fecal elastase test result (median: 1.1 pmol/L; range: 0.1-3.3 pmol/L) than in those with a false-positive fecal elastase test result (median: 0.1 pmol/L; range: 0.1-0.9 pmol/L; P value = .0163). The rate of false-positive fecal elastase test results was high in this group of dogs, suggesting that a diagnosis of EPI must be confirmed by other means. The decreased CCK concentration in dogs with a false-positive fecal elastase test result could suggest that false-positive results are due to decreased stimulation of exocrine pancreatic function caused by other conditions.

  20. Brain mechanisms of emotions.

    PubMed

    Simonov, P V

    1997-01-01

    At the 23rd International Congress of Physiological Sciences (Tokyo, 1965) the results of experiments led us to the conclusion that emotions were determined by the actual need and estimation of probability (possibility) of its satisfaction. Low probability of need satisfaction leads to negative emotions actively minimized by the subject. Increased probability of satisfaction, as compared to the earlier forecast, generates positive emotions which the subject tries to maximize, that is, to enhance, to prolong, to repeat. We named our concept the Need-Informational Theory of Emotions. According to this theory, motivation, emotion, and estimation of probability have different neuromorphological substrates. Activation through the hypothalamic motivatiogenic structures of the frontal parts of the neocortex orients the behavior to signals with a high probability of their reinforcement. At the same time the hippocampus is necessary for reactions to signals of low-probability events, which are typical for the emotionally excited brain. By comparison of motivational excitation with available stimuli or their engrams, the amygdala selects a dominant motivation, destined to be satisfied in the first instance. In the cases of classical conditioning and escape reaction the reinforcement was related to involvement of the negative emotion's hypothalamic neurons, while in the course of avoidance reaction the positive emotion's neurons were involved. The role of the left and right frontal neocortex in the appearance of positive or negative emotions depends on these informational (cognitive) functions.

  1. [The brain mechanisms of emotions].

    PubMed

    Simonov, P V

    1997-01-01

    At the 23rd International Congress of Physiological Sciences (Tokyo, 1965) the results of experiments brought us to the conclusion that emotions were determined by the actual need and estimation of probability (possibility) of its satisfaction. Low probability of need satisfaction leads to negative emotions actively minimized by the subject. Increased probability of satisfaction, as compared to the earlier forecast, generates positive emotions which the subject tries to maximize, that is, to enhance, to prolong, to repeat. We named our concept the Need-Informational Theory of Emotions. According to this theory, motivation, emotion, and estimation of probability have different neuromorphological substrates. Activation of the frontal parts of the neocortex by motivatiogenic structures of the hypothalamus orients the behavior to signals with a high probability of their reinforcement. At the same time the hippocampus is necessary for reactions to signals of low-probability events, which are typical for the emotionally excited brain. By comparison of motivational excitation with available stimuli or their engrams, the amygdala selects a dominant motivation, destined to be satisfied in the first instance. In the cases of classical conditioning and escape reaction the reinforcement was related to involvement of the negative emotion's hypothalamic neurons, while in the course of avoidance reaction the positive emotion's neurons were involved. The role of the left and right frontal neocortex in the appearance of positive or negative emotions depends on these informational (cognitive) functions.

  2. Method- and species-specific detection probabilities of fish occupancy in Arctic lakes: Implications for design and management

    USGS Publications Warehouse

    Haynes, Trevor B.; Rosenberger, Amanda E.; Lindberg, Mark S.; Whitman, Matthew; Schmutz, Joel A.

    2013-01-01

    Studies examining species occurrence often fail to account for false absences in field sampling. We investigate detection probabilities of five gear types for six fish species in a sample of lakes on the North Slope, Alaska. We used an occupancy modeling approach to provide estimates of detection probabilities for each method. Variation in gear- and species-specific detection probability was considerable. For example, detection probabilities for the fyke net ranged from 0.82 (SE = 0.05) for least cisco (Coregonus sardinella) to 0.04 (SE = 0.01) for slimy sculpin (Cottus cognatus). Detection probabilities were also affected by site-specific variables such as depth of the lake, year, day of sampling, and lake connection to a stream. With the exception of the dip net and shore minnow traps, each gear type provided the highest detection probability of at least one species. Results suggest that a multimethod approach may be most effective when attempting to sample the entire fish community of Arctic lakes. Detection probability estimates will be useful for designing optimal fish sampling and monitoring protocols in Arctic lakes.
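
    A quick use of such estimates in survey design: if one fyke-net deployment detects slimy sculpin with p = 0.04 (as reported above), the probability of at least one detection in k independent deployments is 1 - (1 - p)^k, which shows how much effort is needed before an apparent absence becomes believable.

        def p_detect_at_least_once(p, k):
            return 1 - (1 - p) ** k

        for k in (1, 5, 20, 50):
            print(k, round(p_detect_at_least_once(0.04, k), 3))  # 0.04, 0.185, 0.558, 0.87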

  3. Optimal Power Allocation Strategy in a Joint Bistatic Radar and Communication System Based on Low Probability of Intercept

    PubMed Central

    Wang, Fei; Salous, Sana; Zhou, Jianjiang

    2017-01-01

    In this paper, we investigate a low probability of intercept (LPI)-based optimal power allocation strategy for a joint bistatic radar and communication system, which is composed of a dedicated transmitter, a radar receiver, and a communication receiver. The joint system is capable of fulfilling the requirements of both radar and communications simultaneously. First, assuming that the signal-to-noise ratio (SNR) corresponding to the target surveillance path is much weaker than that corresponding to the line of sight path at radar receiver, the analytically closed-form expression for the probability of false alarm is calculated, whereas the closed-form expression for the probability of detection is not analytically tractable and is approximated due to the fact that the received signals are not zero-mean Gaussian under target presence hypothesis. Then, an LPI-based optimal power allocation strategy is presented to minimize the total transmission power for information signal and radar waveform, which is constrained by a specified information rate for the communication receiver and the desired probabilities of detection and false alarm for the radar receiver. The well-known bisection search method is employed to solve the resulting constrained optimization problem. Finally, numerical simulations are provided to reveal the effects of several system parameters on the power allocation results. It is also demonstrated that the LPI performance of the joint bistatic radar and communication system can be markedly improved by utilizing the proposed scheme. PMID:29186850

  4. Optimal Power Allocation Strategy in a Joint Bistatic Radar and Communication System Based on Low Probability of Intercept.

    PubMed

    Shi, Chenguang; Wang, Fei; Salous, Sana; Zhou, Jianjiang

    2017-11-25

    In this paper, we investigate a low probability of intercept (LPI)-based optimal power allocation strategy for a joint bistatic radar and communication system, which is composed of a dedicated transmitter, a radar receiver, and a communication receiver. The joint system is capable of fulfilling the requirements of both radar and communications simultaneously. First, assuming that the signal-to-noise ratio (SNR) corresponding to the target surveillance path is much weaker than that corresponding to the line of sight path at radar receiver, the analytically closed-form expression for the probability of false alarm is calculated, whereas the closed-form expression for the probability of detection is not analytically tractable and is approximated due to the fact that the received signals are not zero-mean Gaussian under target presence hypothesis. Then, an LPI-based optimal power allocation strategy is presented to minimize the total transmission power for information signal and radar waveform, which is constrained by a specified information rate for the communication receiver and the desired probabilities of detection and false alarm for the radar receiver. The well-known bisection search method is employed to solve the resulting constrained optimization problem. Finally, numerical simulations are provided to reveal the effects of several system parameters on the power allocation results. It is also demonstrated that the LPI performance of the joint bistatic radar and communication system can be markedly improved by utilizing the proposed scheme.
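
    The bisection step is conceptually simple: detection probability grows monotonically with transmit power, so the minimum feasible power can be bracketed and halved. The sketch below uses a made-up monotone power-to-Pd curve in place of the paper's closed-form approximation; only the search logic is the point.

        import math

        def p_d(power):                      # toy monotone detection curve (assumption)
            return 1 - math.exp(-0.5 * power)

        def min_power(pd_required, lo=0.0, hi=100.0, tol=1e-6):
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if p_d(mid) >= pd_required:
                    hi = mid                 # constraint met: try less power
                else:
                    lo = mid                 # constraint violated: need more power
            return hi

        print(min_power(0.99))               # ~9.21 in these toy units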

  5. Sherlock Holmes and child psychopathology assessment approaches: the case of the false-positive.

    PubMed

    Jensen, P S; Watanabe, H

    1999-02-01

    To explore the relative value of various methods of assessing childhood psychopathology, the authors compared 4 groups of children: those who met criteria for one or more DSM diagnoses and scored high on parent symptom checklists, those who met psychopathology criteria on only one of these two assessment approaches (two groups), and those who met no psychopathology assessment criterion. Parents of 201 children completed the Child Behavior Checklist (CBCL), after which children and parents were administered the Diagnostic Interview Schedule for Children (version 2.1). Children and parents also completed other survey measures and symptom report inventories. The 4 groups of children were compared against "external validators" to examine the merits of "false-positive" and "false-negative" cases. True-positive cases (those that met DSM criteria and scored high on the CBCL) differed significantly from the true-negative cases on most external validators. "False-positive" and "false-negative" cases had intermediate levels of most risk factors and external validators. "False-positive" cases were not normal per se, because they scored significantly above the true-negative group on a number of risk factors and external validators. A similar but less marked pattern was noted for "false-negatives." Findings call into question whether cases with high symptom checklist scores despite no formal diagnoses should be considered "false-positive." Pending the availability of robust markers for mental illness, researchers and clinicians must resist the tendency to reify diagnostic categories or to engage in arcane debates about the superiority of one assessment approach over another.

  6. Retrospective imaging study on the diagnosis of pathological false positive iodine-131 scans in patients with thyroid cancer.

    PubMed

    Jia, Qiang; Meng, Zhaowei; Tan, Jian; Zhang, Guizhi; He, Yajing; Sun, Haoran; Yu, Chunshui; Li, Dong; Zheng, Wei; Wang, Renfei; Wang, Shen; Li, Xue; Zhang, Jianping; Hu, Tianpeng; Liu, N A; Upadhyaya, Arun

    2015-11-01

    Iodine-131 (I-131) therapy and post-therapy I-131 scanning are essential in the management of differentiated thyroid cancer (DTC). However, pathological false positive I-131 scans can lead to misdiagnosis and inappropriate I-131 treatment. This retrospective study aimed to investigate the best imaging modality for the diagnosis of pathological false positive I-131 scans in a DTC patient cohort, and to determine its incidence. DTC patient data archived from January 2008 to January 2010 was retrieved. Post-therapeutic I-131 scans were conducted and interpreted. The imaging modalities of magnetic resonance imaging (MRI), computed tomography and ultrasonography were applied and compared to check all suspected lesions. Biopsy or needle aspiration was conducted for patients who consented to the acquisition of histopathological confirmation. Data for 156 DTC patients were retrieved. Only 6 cases of pathological false-positives were found among these (incidence, 3.85%), which included 3 cases of thymic hyperplasia in the mediastinum, 1 case of pleomorphic adenoma in the parapharyngeal space and 1 case of thyroglossal duct cyst in the neck. MRI was demonstrated as the best imaging modality for diagnosis due to its superior soft tissue resolution. However, no imaging modality was able to identify the abdominal false-positive lesions observed in 2 cases, one of whom also had thymic hyperplasia. In conclusion, pathological false positive I-131 scans occurred with an incidence of 3.85%. MRI was the best imaging modality for diagnosing these pathological false-positives.

  7. Variation in false-positive rates of mammography reading among 1067 radiologists: a population-based assessment.

    PubMed

    Tan, Alai; Freeman, Daniel H; Goodwin, James S; Freeman, Jean L

    2006-12-01

    The accuracy of mammography reading varies among radiologists. We conducted a population-based assessment of radiologist variation in false-positive rates of screening mammography and its associated radiologist characteristics. A total of 27,394 screening mammograms interpreted by 1067 radiologists were identified from a 5% non-cancer sample of Medicare claims during 1998-1999. The data were linked to the American Medical Association Masterfile to obtain radiologist characteristics. Multilevel logistic regression models were used to examine the radiologist variation in false-positive rates of screening mammography and the associated radiologist characteristics. Radiologists varied substantially in the false-positive rates of screening mammography (ranging from 1.5 to 24.1%, adjusting for patient characteristics). A longer time period since graduation was associated with lower false-positive rates (odds ratio [OR] for every 10 years increase: 0.87, 95% Confidence Interval [CI], 0.81-0.94) and female radiologists had higher false-positive rates than male radiologists (OR = 1.25, 95% CI, 1.05-1.49), adjusting for patient and other radiologist characteristics. The unmeasured factors contributed to about 90% of the between-radiologist variance. Radiologists varied greatly in accuracy of mammography reading. Female and more recently trained radiologists had higher false-positive rates. The variation among radiologists was largely due to unmeasured factors, especially unmeasured radiologist factors. If our results are confirmed in further studies, they suggest that system-level interventions would be required to reduce variation in mammography interpretation.

  8. Is it time to sound an alarm about false-positive cell-free DNA testing for fetal aneuploidy?

    PubMed

    Mennuti, Michael T; Cherry, Athena M; Morrissette, Jennifer J D; Dugoff, Lorraine

    2013-11-01

    Testing cell-free DNA (cfDNA) in maternal blood samples has been shown to have very high sensitivity for the detection of fetal aneuploidy with very low false-positive results in high-risk patients who undergo invasive prenatal diagnosis. Recent observation in clinical practice of several cases of positive cfDNA tests for trisomy 18 and trisomy 13, which were not confirmed by cytogenetic testing of the pregnancy, may reflect a limitation of the positive predictive value of this quantitative testing, particularly when it is used to detect rare aneuploidies. Analysis of a larger number of false-positive cases is needed to evaluate whether these observations reflect the positive predictive value that should be expected. Infrequently, mechanisms (such as low percentage mosaicism or confined placental mosaicism) might also lead to positive cfDNA testing that is not concordant with standard prenatal cytogenetic diagnosis. The need to explore these and other possible causes of false-positive cfDNA testing is exemplified by 2 of these cases. Additional evaluation of cfDNA testing in clinical practice and a mechanism for the systematic reporting of false-positive and false-negative cases will be important before this test is offered widely to the general population of low-risk obstetric patients. In the meantime, incorporating information about the positive predictive value in pretest counseling and in clinical laboratory reports is recommended. These experiences reinforce the importance of offering invasive testing to confirm cfDNA results before parental decision-making. Copyright © 2013 Mosby, Inc. All rights reserved.
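
    The positive-predictive-value concern is just Bayes' rule: for a rare aneuploidy, even excellent sensitivity and specificity leave a large fraction of positives false. The prevalences and test characteristics below are illustrative placeholders, not figures from the article.

        def ppv(prevalence, sensitivity, specificity):
            tp = sensitivity * prevalence              # true-positive mass
            fp = (1 - specificity) * (1 - prevalence)  # false-positive mass
            return tp / (tp + fp)

        print(ppv(1 / 5000, 0.99, 0.999))   # rare trisomy: PPV ~ 0.17
        print(ppv(1 / 200, 0.99, 0.999))    # common aneuploidy: PPV ~ 0.83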

  9. Working memory affects false memory production for emotional events.

    PubMed

    Mirandola, Chiara; Toffalini, Enrico; Ciriello, Alfonso; Cornoldi, Cesare

    2017-01-01

    Whereas a link between working memory (WM) and memory distortions has been demonstrated, its influence on emotional false memories is unclear. In two experiments, a verbal WM task and a false memory paradigm for negative, positive or neutral events were employed. In Experiment 1, we investigated individual differences in verbal WM and found that the interaction between valence and WM predicted false recognition, with negative and positive material protecting high WM individuals against false remembering; the beneficial effect of negative material disappeared in low WM participants. In Experiment 2, we lowered the WM capacity of half of the participants with a double task request, which led to an overall increase in false memories; furthermore, consistent with Experiment 1, the increase in negative false memories was larger than that of neutral or positive ones. It is concluded that WM plays a critical role in determining false memory production, specifically influencing the processing of negative material.

  10. Adaptive aperture for Geiger mode avalanche photodiode flash ladar systems.

    PubMed

    Wang, Liang; Han, Shaokun; Xia, Wenze; Lei, Jieyu

    2018-02-01

    Although the Geiger-mode avalanche photodiode (GM-APD) flash ladar system offers the advantages of high sensitivity and simple construction, its detection performance is influenced not only by the incoming signal-to-noise ratio but also by the absolute number of noise photons. In this paper, we deduce a hyperbolic approximation to estimate the noise-photon number from the false-firing percentage in a GM-APD flash ladar system under dark conditions. By using this hyperbolic approximation function, we introduce a method to adapt the aperture to reduce the number of incoming background-noise photons. Finally, the simulation results show that the adaptive-aperture method decreases the false probability in all cases, increases the detection probability provided that the signal exceeds the noise, and decreases the average ranging error per frame.

  11. Adaptive aperture for Geiger mode avalanche photodiode flash ladar systems

    NASA Astrophysics Data System (ADS)

    Wang, Liang; Han, Shaokun; Xia, Wenze; Lei, Jieyu

    2018-02-01

    Although the Geiger-mode avalanche photodiode (GM-APD) flash ladar system offers the advantages of high sensitivity and simple construction, its detection performance is influenced not only by the incoming signal-to-noise ratio but also by the absolute number of noise photons. In this paper, we deduce a hyperbolic approximation to estimate the noise-photon number from the false-firing percentage in a GM-APD flash ladar system under dark conditions. By using this hyperbolic approximation function, we introduce a method to adapt the aperture to reduce the number of incoming background-noise photons. Finally, the simulation results show that the adaptive-aperture method decreases the false probability in all cases, increases the detection probability provided that the signal exceeds the noise, and decreases the average ranging error per frame.
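
    The abstract's hyperbolic approximation is not reproduced here, but the exact relation it approximates is worth stating: assuming Poisson noise statistics, a Geiger-mode pixel false-fires when at least one noise photon triggers it, so P_false = 1 - exp(-N) and the mean noise-photon number per gate can be recovered as N = -ln(1 - P_false). A sketch under that assumption:

        import math

        def noise_photons(p_false):
            """Mean noise-photon number per gate from the false-firing fraction."""
            return -math.log(1.0 - p_false)

        for p in (0.01, 0.1, 0.5):
            print(f"P_false = {p:.2f} -> N = {noise_photons(p):.3f}")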

  12. Evaluation of positive and false-positive results in syphilis screening of blood donors in Rio de Janeiro, Brazil.

    PubMed

    Sandes, V S; Silva, S G C; Motta, I J F; Velarde, L G C; de Castilho, S R

    2017-06-01

    We propose to analyse the positive and false-positive results of treponemal and nontreponemal tests in blood donors from Brazil and to evaluate possible factors associated with the results of treponemal tests. Treponemal tests have been used widely for syphilis screening in blood banks. The introduction of these tests in donor screening has had an impact, causing a loss of donors that needs to be assessed. This was a retrospective cross-sectional study of syphilis screening and confirmatory test results of blood donors that were obtained before and after adopting a chemiluminescent immunoassay (CLIA). A comparative analysis was performed using a second sample drawn from positive donors. The possible factors associated with CLIA-positive or CLIA-false-positive results were investigated in a subgroup. Statistical tests were used to compare the proportions and adjusted estimates of association. The reactivity rate increased from 1.01% (N = 28,158) to 2.66% (N = 25,577) after introducing the new test. Among Venereal Disease Research Laboratory (VDRL)- and CLIA-confirmed results, the false-positive rates were 40.5% (N = 180) and 37.4% (N = 359), respectively (P = 0.5266). Older donors (OR = 1.04; P = 0.0010) and donors with lower education levels (OR = 6.59; P = 0.0029) were associated with a higher risk of positivity for syphilis. CLIA represents an improvement in blood bank serological screening. However, its use in a healthy population appears to result in high rates of false positives. Identifying which characteristics can predict false positives, however, remains a challenge. © 2017 British Blood Transfusion Society.

  13. The Impact of Repeat HIV Testing on Risky Sexual Behavior: Evidence from a Randomized Controlled Trial in Malawi

    PubMed Central

    Delavande, Adeline; Wagner, Zachary; Sood, Neeraj

    2016-01-01

    A significant proportion of HIV-positive adults in sub-Saharan Africa are in serodiscordant relationships. Identification of such serodiscordant couples through couple HIV testing and counseling (HTC) is thought to promote safe sexual behavior and reduce the probability of within couple seroconversion. However, it is possible HTC benefits are not sustained over time and therefore repeated HTC may be more effective at preventing seroconversion than one time HTC. We tested this theory in Zomba, Malawi by randomly assigning 170 serodiscordant couples to receive repeated HTC and 167 serodiscordant couples to receive one time HTC upon study enrollment (control group). We used linear probability models and probit model with couple fixed effects to assess the impact of the intervention on risky sexual behavior. At one-year follow-up, we found that couples that received repeated HTC reported significantly more condom use. However, we found no difference in rate of seroconversion between groups, nor did we find differences in subjective expectations about seroconversion or false beliefs about HIV, two expected pathways of behavior change. We conclude that repeated HTC may promote safe sexual behavior, but this result should be interpreted with caution, as it is inconsistent with the result from biological and subjective outcomes. PMID:27158553

  14. The Impact of Repeat HIV Testing on Risky Sexual Behavior: Evidence from a Randomized Controlled Trial in Malawi.

    PubMed

    Delavande, Adeline; Wagner, Zachary; Sood, Neeraj

    2016-03-01

    A significant proportion of HIV-positive adults in sub-Saharan Africa are in serodiscordant relationships. Identification of such serodiscordant couples through couple HIV testing and counseling (HTC) is thought to promote safe sexual behavior and reduce the probability of within couple seroconversion. However, it is possible HTC benefits are not sustained over time and therefore repeated HTC may be more effective at preventing seroconversion than one time HTC. We tested this theory in Zomba, Malawi by randomly assigning 170 serodiscordant couples to receive repeated HTC and 167 serodiscordant couples to receive one time HTC upon study enrollment (control group). We used linear probability models and probit model with couple fixed effects to assess the impact of the intervention on risky sexual behavior. At one-year follow-up, we found that couples that received repeated HTC reported significantly more condom use. However, we found no difference in rate of seroconversion between groups, nor did we find differences in subjective expectations about seroconversion or false beliefs about HIV, two expected pathways of behavior change. We conclude that repeated HTC may promote safe sexual behavior, but this result should be interpreted with caution, as it is inconsistent with the result from biological and subjective outcomes.

  15. Lung nodule detection from CT scans using 3D convolutional neural networks without candidate selection

    NASA Astrophysics Data System (ADS)

    Jenuwine, Natalia M.; Mahesh, Sunny N.; Furst, Jacob D.; Raicu, Daniela S.

    2018-02-01

    Early detection of lung nodules from CT scans is key to improving lung cancer treatment, but poses a significant challenge for radiologists due to the high throughput required of them. Computer-Aided Detection (CADe) systems aim to automatically detect these nodules with computer algorithms, thus improving diagnosis. These systems typically use a candidate selection step, which identifies all objects that resemble nodules, followed by a machine learning classifier which separates true nodules from false positives. We create a CADe system that uses a 3D convolutional neural network (CNN) to detect nodules in CT scans without a candidate selection step. Using data from the LIDC database, we train a 3D CNN to analyze subvolumes from anywhere within a CT scan and output the probability that each subvolume contains a nodule. Once trained, we apply our CNN to detect nodules from entire scans, by systematically dividing the scan into overlapping subvolumes which we input into the CNN to obtain the corresponding probabilities. By enabling our network to process an entire scan, we expect to streamline the detection process while maintaining its effectiveness. Our results imply that with continued training using an iterative training scheme, the one-step approach has the potential to be highly effective.
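
    The scan-wide inference loop described above can be sketched directly: tile the volume into overlapping subvolumes and score each with the trained network. The model below is a stand-in callable (mean intensity as a fake score), not the authors' 3D CNN; the subvolume size and stride are illustrative.

        import numpy as np

        def scan_probabilities(volume, model, size=32, stride=16):
            """Yield (z, y, x, score) for every overlapping subvolume position."""
            zs, ys, xs = volume.shape
            for z in range(0, zs - size + 1, stride):
                for y in range(0, ys - size + 1, stride):
                    for x in range(0, xs - size + 1, stride):
                        sub = volume[z:z + size, y:y + size, x:x + size]
                        yield z, y, x, float(model(sub))

        dummy_model = lambda sub: sub.mean()   # placeholder for the trained CNN
        vol = np.random.rand(64, 64, 64)
        hits = [h for h in scan_probabilities(vol, dummy_model) if h[3] > 0.5]
        print(len(hits), "candidate subvolumes")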

  16. Contrast model for three-dimensional vehicles in natural lighting and search performance analysis

    NASA Astrophysics Data System (ADS)

    Witus, Gary; Gerhart, Grant R.; Ellis, R. Darin

    2001-09-01

    Ground vehicles in natural lighting tend to have significant and systematic variation in luminance across the presented area. This arises, in large part, from the vehicle surfaces having different orientations and shadowing relative to the source of illumination and the position of the observer. These systematic differences create the appearance of a structured 3D object. The 3D appearance is an important factor in search, figure-ground segregation, and object recognition. We present a contrast metric to predict search and detection performance that accounts for the 3D structure. The approach first computes the contrast of the front (or rear), side, and top surfaces. The vehicle contrast metric is the area-weighted sum of the absolute values of the contrasts of the component surfaces. The 3D structure contrast metric, together with target height, accounts for more than 80% of the variance in probability of detection and 75% of the variance in search time. When false alarm effects are discounted, these predictors account for 89% of the variance in probability of detection and 95% of the variance in search time. The predictive power of the signature metric, when calibrated to half the data and evaluated against the other half, is 90% of the explanatory power.
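
    The metric itself is a one-liner. The sketch below normalizes the area-weighted sum of absolute surface contrasts by the total presented area (the abstract does not say whether the sum is normalized, so that detail is an assumption), with invented surface numbers.

        def vehicle_contrast(surfaces):
            """surfaces: list of (area, contrast) pairs for front/rear, side, top."""
            total_area = sum(a for a, _ in surfaces)
            return sum(a * abs(c) for a, c in surfaces) / total_area

        # (area in m^2, contrast against background) for front, side, and top surfaces
        print(vehicle_contrast([(2.5, -0.15), (6.0, 0.30), (4.0, 0.05)]))  # 0.19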

  17. False-positive cryptococcal antigen latex agglutination caused by disinfectants and soaps.

    PubMed Central

    Blevins, L B; Fenn, J; Segal, H; Newcomb-Gayman, P; Carroll, K C

    1995-01-01

    Five disinfectants or soaps were tested to determine if any could be responsible for false-positive results obtained with the Latex-Crypto Antigen Detection System kit (Immuno-Mycologics, Inc., Norman, Okla.). Three disinfectants or soaps (Derma soap, 7X, and Bacdown) produced false-positive agglutination after repeated washing of ring slides during testing of a known negative cerebrospinal fluid specimen. PMID:7650214

  18. How pH, Temperature, and Time of Incubation Affect False-Positive Responses and Uncertainty of the LAL Gel-Clot Test.

    PubMed

    Lourenço, Felipe Rebello; Botelho, Túlia De Souza; Pinto, Terezinha De Jesus Andreoli

    2012-01-01

    The limulus amebocyte lysate (LAL) test is the simplest and most widely used procedure for detection of endotoxin in parenteral drugs. The LAL test demands optimal pH, ionic strength, temperature, and time of incubation, and slight changes in these parameters may increase the frequency of false-positive responses and the estimated uncertainty of the LAL test. The aim of this paper is to evaluate how changes in pH, temperature, and time of incubation affect the occurrence of false-positive responses in the LAL test. LAL tests were performed in nominal conditions (37 °C, 60 min, and pH 7) and in different conditions of temperature (36 °C and 38 °C), time of incubation (58 and 62 min), and pH (6 and 8). Slight differences in pH increase the frequency of false-positive responses 5-fold (relative risk 5.0), resulting in an estimated uncertainty of 7.6%. Temperature and time of incubation affect the LAL test less, showing relative risks of 1.5 and 1.0, respectively. Estimated uncertainties at 36 °C or 38 °C and at 58 or 62 min of incubation were found to be 2.0% and 1.0%, respectively. Simultaneous differences in these parameters significantly increase the frequency of false-positive responses. The limulus amebocyte lysate (LAL) gel-clot test is a simple test for detection of endotoxin from Gram-negative bacteria. The test is based on gel formation when a certain amount of endotoxin is present; it is a pass/fail test. The LAL test requires optimal pH, ionic strength, temperature, and time of incubation, and slight differences in these parameters may increase the frequency of false-positive responses. The aim of this paper is to evaluate how changes in pH, temperature, and time of incubation affect the occurrence of false-positive responses in the LAL test. We find that slight differences in pH increase the frequency of false-positive responses 5-fold. Temperature and time of incubation affect the LAL test less. Simultaneous differences in these parameters significantly increase the frequency of false-positive responses.
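
    The 5-fold figure is an ordinary relative risk from a 2x2 comparison, as the sketch below shows; the counts are invented to reproduce RR = 5.0 and are not the paper's raw data.

        def relative_risk(fp_exposed, n_exposed, fp_control, n_control):
            return (fp_exposed / n_exposed) / (fp_control / n_control)

        # e.g. 10/100 false positives at pH 6 vs 2/100 at pH 7 -> RR = 5.0
        print(relative_risk(10, 100, 2, 100))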

  19. Reducing false positives of microcalcification detection systems by removal of breast arterial calcifications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mordang, Jan-Jurre, E-mail: Jan-Jurre.Mordang@radboudumc.nl; Gubern-Mérida, Albert; Karssemeijer, Nico

    Purpose: In the past decades, computer-aided detection (CADe) systems have been developed to aid screening radiologists in the detection of malignant microcalcifications. These systems are useful to avoid perceptual oversights and can increase the radiologists’ detection rate. However, due to the high number of false positives marked by these CADe systems, they are not yet suitable as an independent reader. Breast arterial calcifications (BACs) are one of the most frequent false positives marked by CADe systems. In this study, a method is proposed for the elimination of BACs as positive findings. Removal of these false positives will increase the performance of the CADe system in finding malignant microcalcifications. Methods: A multistage method is proposed for the removal of BAC findings. The first stage consists of a microcalcification candidate selection, segmentation and grouping of the microcalcifications, and classification to remove obvious false positives. In the second stage, a case-based selection is applied where cases are selected which contain BACs. In the final stage, BACs are removed from the selected cases. The BACs removal stage consists of a GentleBoost classifier trained on microcalcification features describing their shape, topology, and texture. Additionally, novel features are introduced to discriminate BACs from other positive findings. Results: The CADe system was evaluated with and without BACs removal. Here, both systems were applied on a validation set containing 1088 cases of which 95 cases contained malignant microcalcifications. After bootstrapping, free-response receiver operating characteristics and receiver operating characteristics analyses were carried out. Performance between the two systems was compared at 0.98 and 0.95 specificity. At a specificity of 0.98, the sensitivity increased from 37% to 52% and the sensitivity increased from 62% up to 76% at a specificity of 0.95. Partial areas under the curve in the specificity range of 0.8–1.0 were significantly different between the system without BACs removal and the system with BACs removal, 0.129 ± 0.009 versus 0.144 ± 0.008 (p<0.05), respectively. Additionally, the sensitivity at one false positive per 50 cases and one false positive per 25 cases increased as well, 37% versus 51% (p<0.05) and 58% versus 67% (p<0.05) sensitivity, respectively. Additionally, the CADe system with BACs removal reduces the number of false positives per case by 29% on average. The same sensitivity at one false positive per 50 cases in the CADe system without BACs removal can be achieved at one false positive per 80 cases in the CADe system with BACs removal. Conclusions: By using dedicated algorithms to detect and remove breast arterial calcifications, the performance of CADe systems can be improved, in particular, at false positive rates representative for operating points used in screening.

  20. Episodic Memory Does Not Add Up: Verbatim-Gist Superposition Predicts Violations of the Additive Law of Probability

    PubMed Central

    Brainerd, C. J.; Wang, Zheng; Reyna, Valerie. F.; Nakamura, K.

    2015-01-01

    Fuzzy-trace theory’s assumptions about memory representation are cognitive examples of the familiar superposition property of physical quantum systems. When those assumptions are implemented in a formal quantum model (QEMc), they predict that episodic memory will violate the additive law of probability: If memory is tested for a partition of an item’s possible episodic states, the individual probabilities of remembering the item as belonging to each state must sum to more than 1. We detected this phenomenon using two standard designs, item false memory and source false memory. The quantum implementation of fuzzy-trace theory also predicts that violations of the additive law will vary in strength as a function of reliance on gist memory. That prediction, too, was confirmed via a series of manipulations (e.g., semantic relatedness, testing delay) that are thought to increase gist reliance. Surprisingly, an analysis of the underlying structure of violations of the additive law revealed that as a general rule, increases in remembering correct episodic states do not produce commensurate reductions in remembering incorrect states. PMID:26236091

  1. Dynamic probability control limits for risk-adjusted Bernoulli CUSUM charts.

    PubMed

    Zhang, Xiang; Woodall, William H

    2015-11-10

    The risk-adjusted Bernoulli cumulative sum (CUSUM) chart developed by Steiner et al. (2000) is an increasingly popular tool for monitoring clinical and surgical performance. In practice, however, the use of a fixed control limit for the chart leads to a quite variable in-control average run length performance for patient populations with different risk score distributions. To overcome this problem, we determine simulation-based dynamic probability control limits (DPCLs) patient-by-patient for the risk-adjusted Bernoulli CUSUM charts. By maintaining the probability of a false alarm at a constant level conditional on no false alarm for previous observations, our risk-adjusted CUSUM charts with DPCLs have consistent in-control performance at the desired level with approximately geometrically distributed run lengths. Our simulation results demonstrate that our method does not rely on any information or assumptions about the patients' risk distributions. The use of DPCLs for risk-adjusted Bernoulli CUSUM charts allows each chart to be designed for the corresponding particular sequence of patients for a surgeon or hospital. Copyright © 2015 John Wiley & Sons, Ltd.
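
    The patient-by-patient updating can be sketched with the Steiner et al. (2000) weights: each patient contributes a log-likelihood ratio that depends on their risk-adjusted predicted probability of an adverse outcome. The simulation that produces the dynamic probability control limit itself is omitted here; the odds ratio under the alternative and the toy data are illustrative.

        import math

        def cusum_weight(outcome, p0, odds_ratio=2.0):
            """Log-likelihood-ratio weight for one patient with baseline risk p0."""
            p1 = odds_ratio * p0 / (1 - p0 + odds_ratio * p0)  # risk under H1
            return math.log(p1 / p0) if outcome else math.log((1 - p1) / (1 - p0))

        def run_cusum(outcomes, risks):
            s = 0.0
            for y, p0 in zip(outcomes, risks):
                s = max(0.0, s + cusum_weight(y, p0))   # one-sided upper CUSUM
                yield round(s, 4)

        print(list(run_cusum([0, 1, 1, 0], [0.1, 0.2, 0.1, 0.3])))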

  2. Comparison of probability statistics for automated ship detection in SAR imagery

    NASA Astrophysics Data System (ADS)

    Henschel, Michael D.; Rey, Maria T.; Campbell, J. W. M.; Petrovic, D.

    1998-12-01

    This paper discusses the initial results of a recent operational trial of the Ocean Monitoring Workstation's (OMW) ship detection algorithm, which is essentially a constant false alarm rate (CFAR) filter applied to synthetic aperture radar data. The choice of probability distribution and methodologies for calculating scene-specific statistics are discussed in some detail. An empirical basis for the choice of probability distribution used is discussed. We compare the results using a 1-look K-distribution function with various parameter choices and methods of estimation. As a special case of sea clutter statistics the application of a χ²-distribution is also discussed. Comparisons are made with reference to RADARSAT data collected during the Maritime Command Operation Training exercise conducted in Atlantic Canadian waters in June 1998. Reference is also made to previously collected statistics. The OMW is a commercial software suite that provides modules for automated vessel detection, oil spill monitoring, and environmental monitoring. This work has been undertaken to fine-tune the OMW algorithms, with special emphasis on the false alarm rate of each algorithm.
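
    For orientation, the constant false alarm rate idea reduces to thresholding each cell against an estimate of its local clutter level. The sketch below is a plain cell-averaging CFAR on simulated exponential sea clutter, not the OMW's K-distribution filter; the guard/train sizes and scale factor are illustrative.

        import numpy as np

        def ca_cfar(x, guard=2, train=8, scale=8.0):
            """Return indices whose value exceeds scale times the local clutter mean."""
            hits = []
            for i in range(train + guard, len(x) - train - guard):
                lead = x[i - guard - train:i - guard]
                trail = x[i + guard + 1:i + guard + train + 1]
                if x[i] > scale * np.r_[lead, trail].mean():
                    hits.append(i)
            return hits

        rng = np.random.default_rng(2)
        clutter = rng.exponential(1.0, 500)   # sea-clutter-like background
        clutter[250] += 30.0                  # bright point target (a ship)
        print(ca_cfar(clutter))               # typically just [250]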

  3. Three-dimensional obstacle classification in laser range data

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter; Bers, Karl-Heinz

    1998-10-01

    The threat of hostile surveillance and weapon systems requires military aircraft to fly under extreme conditions such as low altitude, high speed, poor visibility, and incomplete terrain information. The probability of collision with natural and man-made obstacles during such contour missions is high if detection capability is restricted to conventional vision aids. Forward-looking scanning laser rangefinders, which are presently being flight tested and evaluated at German proving grounds, provide a possible solution, having a large field of view, high angular and range resolution, a high pulse repetition rate, and sufficient pulse energy to register returns from wires at over 500 m range (depending on the system) with a high hit-and-detect probability. Despite the efficiency of the sensor, acceptance of current obstacle warning systems by test pilots is not very high, mainly due to the systems' inadequacies in obstacle recognition and visualization. This has motivated the development and testing of more advanced 3D scene analysis algorithms at FGAN-FIM to replace the obstacle recognition component of current warning systems. The basic ideas are to increase the recognition probability and reduce the false alarm rate for hard-to-extract obstacles such as wires by exploiting more readily recognizable objects such as terrain, poles, pylons, and trees, and to implement a hierarchical classification procedure that generates a parametric description of the terrain surface as well as the class, position, orientation, size, and shape of all objects in the scene. The algorithms can also be used for other applications such as terrain following, autonomous obstacle avoidance, and automatic target recognition.

  4. Spatio-temporal optimization of sampling for bluetongue vectors (Culicoides) near grazing livestock

    PubMed Central

    2013-01-01

    Background: Estimating the abundance of Culicoides using light traps is influenced by a large variation in abundance in time and place. This study investigates the optimal trapping strategy to estimate the abundance or presence/absence of Culicoides on a field with grazing animals. We used 45 light traps to sample specimens from the Culicoides obsoletus species complex on a 14 hectare field during 16 nights in 2009. Findings: The large number of traps and catch nights enabled us to simulate a series of samples consisting of different numbers of traps (1-15) on each night. We also varied the number of catch nights when simulating the sampling, and sampled with increasing minimum distances between traps. We used resampling to generate a distribution of different mean and median abundance in each sample. Finally, we used the hypergeometric distribution to estimate the probability of falsely detecting absence of vectors on the field. The variation in the estimated abundance decreased steeply when using up to six traps, and was less pronounced when using more traps, although no clear cutoff was found. Conclusions: Despite spatial clustering in vector abundance, we found no effect of increasing the distance between traps. We found that 18 traps were generally required to reach 90% probability of a true positive catch when sampling just one night. But when sampling over two nights the same probability level was obtained with just three traps per night. The results are useful for the design of vector monitoring programmes on fields with grazing animals. PMID:23705770
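
    The false-absence computation reported above follows hypergeometric logic of roughly the kind sketched below: if K of the N catchable midges on the field are vectors and n specimens are examined, the chance of seeing none is hypergeom.pmf(0, N, K, n). All counts here are invented for illustration, not taken from the study.

        from scipy.stats import hypergeom

        N = 10000   # notional catchable population on the field
        K = 50      # vectors among them
        n = 500     # specimens examined
        print(f"P(declare absence | vectors present) = {hypergeom.pmf(0, N, K, n):.3f}")  # ~0.077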

  5. PASTIS: Bayesian extrasolar planet validation - I. General framework, models, and performance

    NASA Astrophysics Data System (ADS)

    Díaz, R. F.; Almenara, J. M.; Santerne, A.; Moutou, C.; Lethuillier, A.; Deleuil, M.

    2014-06-01

    A large fraction of the smallest transiting planet candidates discovered by the Kepler and CoRoT space missions cannot be confirmed by a dynamical measurement of the mass using currently available observing facilities. To establish their planetary nature, the concept of planet validation has been advanced. This technique compares the probability of the planetary hypothesis against that of all reasonably conceivable alternative false positive (FP) hypotheses. The candidate is considered as validated if the posterior probability of the planetary hypothesis is sufficiently larger than the sum of the probabilities of all FP scenarios. In this paper, we present PASTIS, the Planet Analysis and Small Transit Investigation Software, a tool designed to perform a rigorous model comparison of the hypotheses involved in the problem of planet validation, and to fully exploit the information available in the candidate light curves. PASTIS self-consistently models the transit light curves and follow-up observations. Its object-oriented structure offers a large flexibility for defining the scenarios to be compared. The performance is explored using artificial transit light curves of planets and FPs with a realistic error distribution obtained from a Kepler light curve. We find that data support the correct hypothesis strongly only when the signal is high enough (transit signal-to-noise ratio above 50 for the planet case) and remain inconclusive otherwise. PLAnetary Transits and Oscillations of stars (PLATO) shall provide transits with high enough signal-to-noise ratio, but to establish the true nature of the vast majority of Kepler and CoRoT transit candidates additional data or strong reliance on hypotheses priors is needed.
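
    The validation criterion reduces to posterior odds: weight each hypothesis' marginal likelihood (evidence) by its prior and compare the planet hypothesis against the summed false-positive scenarios. The evidences and priors below are made-up placeholders; PASTIS obtains them by modeling the light curve and follow-up observations.

        def posterior_planet(hypotheses):
            """hypotheses: dict name -> (prior probability, marginal likelihood)."""
            weights = {h: p * z for h, (p, z) in hypotheses.items()}
            return weights["planet"] / sum(weights.values())

        hypotheses = {
            "planet":         (0.5, 3.0e-4),
            "blended_binary": (0.3, 2.0e-5),
            "triple_system":  (0.2, 1.0e-5),
        }
        print(f"P(planet | data) = {posterior_planet(hypotheses):.3f}")  # 0.949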

  6. Probabilistic assessment of precipitation-triggered landslides using historical records of landslide occurrence, Seattle, Washington

    USGS Publications Warehouse

    Coe, J.A.; Michael, J.A.; Crovelli, R.A.; Savage, W.Z.; Laprade, W.T.; Nashem, W.D.

    2004-01-01

    Ninety years of historical landslide records were used as input to the Poisson and binomial probability models. Results from these models show that, for precipitation-triggered landslides, approximately 9 percent of the area of Seattle has annual exceedance probabilities of 1 percent or greater. Application of the Poisson model for estimating the future occurrence of individual landslides results in a worst-case scenario map, with a maximum annual exceedance probability of 25 percent on a hillslope near Duwamish Head in West Seattle. Application of the binomial model for estimating the future occurrence of a year with one or more landslides results in a map with a maximum annual exceedance probability of 17 percent (also near Duwamish Head). Slope and geology both play a role in localizing the occurrence of landslides in Seattle. A positive correlation exists between slope and mean exceedance probability, with probability tending to increase as slope increases. Sixty-four percent of all historical landslide locations are within 150 m (500 ft, horizontal distance) of the Esperance Sand/Lawton Clay contact, but within this zone, no positive or negative correlation exists between exceedance probability and distance to the contact.
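
    The Poisson part of the calculation is compact: with a historical rate of lambda landslides per year at a site, the annual probability of one or more events is 1 - exp(-lambda). The counts below are illustrative, chosen only to reproduce the quoted 25% worst case.

        import math

        def annual_exceedance(n_events, n_years):
            lam = n_events / n_years          # historical rate (events per year)
            return 1 - math.exp(-lam)

        print(annual_exceedance(26, 90))      # ~0.25, cf. the Duwamish Head maximum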

  7. How many genetic markers to tag an individual? An empirical assessment of false matching rates among close relatives.

    PubMed

    Rew, Mary Beth; Robbins, Jooke; Mattila, David; Palsbøll, Per J; Bérube, Martine

    2011-04-01

    Genetic identification of individuals is now commonplace, enabling the application of tagging methods to elusive species or species that cannot be tagged by traditional methods. A key aspect is determining the number of loci required to ensure that different individuals have non-matching multi-locus genotypes. Closely related individuals are of particular concern because of elevated matching probabilities caused by their recent co-ancestry. This issue may be addressed by increasing the number of loci to a level where full siblings (the relatedness category with the highest matching probability) are expected to have non-matching multi-locus genotypes. However, increasing the number of loci to meet this "full-sib criterion" greatly increases the laboratory effort, which in turn may increase the genotyping error rate resulting in an upward-biased mark-recapture estimate of abundance as recaptures are missed due to genotyping errors. We assessed the contribution of false matches from close relatives among 425 maternally related humpback whales, each genotyped at 20 microsatellite loci. We observed a very low (0.5-4%) contribution to falsely matching samples from pairs of first-order relatives (i.e., parent and offspring or full siblings). The main contribution to falsely matching individuals from close relatives originated from second-order relatives (e.g., half siblings), which was estimated at 9%. In our study, the total number of observed matches agreed well with expectations based upon the matching probability estimated for unrelated individuals, suggesting that the full-sib criterion is overly conservative, and would have required a 280% relative increase in effort. We suggest that, under most circumstances, the overall contribution to falsely matching samples from close relatives is likely to be low, and hence applying the full-sib criterion is unnecessary. In those cases where close relatives may present a significant issue, such as unrepresentative sampling, we propose three different genotyping strategies requiring only a modest increase in effort, which will greatly reduce the number of false matches due to the presence of related individuals.
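
    The expectations referred to above rest on standard probability-of-identity (PI) calculations. A minimal sketch using the commonly cited single-locus formulas for unrelated individuals and for full siblings (as in Waits et al. 2001), with invented allele frequencies; multi-locus values are products over independent loci.

        import numpy as np

        # Hypothetical allele frequencies at three microsatellite loci.
        loci = [np.array([0.4, 0.3, 0.2, 0.1]),
                np.array([0.5, 0.25, 0.25]),
                np.array([0.6, 0.2, 0.2])]

        def pi_unrelated(p):
            # Probability two unrelated individuals match at one locus.
            s2 = np.sum(p**2)
            return 2 * s2**2 - np.sum(p**4)

        def pi_fullsibs(p):
            # Matching probability for full siblings at one locus.
            s2, s4 = np.sum(p**2), np.sum(p**4)
            return 0.25 + 0.5 * s2 + 0.5 * s2**2 - 0.25 * s4

        print("PI unrelated:", np.prod([pi_unrelated(p) for p in loci]))
        print("PI full sibs:", np.prod([pi_fullsibs(p) for p in loci]))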

  8. Bone marrow cells stained by azide-conjugated Alexa fluors in the absence of an alkyne label.

    PubMed

    Lin, Guiting; Ning, Hongxiu; Banie, Lia; Qiu, Xuefeng; Zhang, Haiyang; Lue, Tom F; Lin, Ching-Shwun

    2012-09-01

    Thymidine analog 5-ethynyl-2'-deoxyuridine (EdU) has recently been introduced as an alternative to 5-bromo-2-deoxyuridine (BrdU) for cell labeling and tracking. Incorporation of EdU into replicating DNA can be detected by azide-conjugated fluors (e.g., Alexa-azide) through a Cu(I)-catalyzed click reaction between EdU's alkyne moiety and azide. While this cell labeling method has proven to be valuable for tracking transplanted stem cells in various tissues, we have found that some bone marrow cells could be stained by Alexa-azide in the absence of an EdU label. In intact rat femoral bone marrow, ~3% of nucleated cells were false-positively stained, and in isolated bone marrow cells, ~13%. In contrast to true-positive stains, which localize in the nucleus, the false-positive stains were cytoplasmic. Furthermore, while true-positive staining requires Cu(I), false-positive staining does not. Reducing the click reaction time or reducing the Alexa-azide concentration failed to improve the distinction between true- and false-positive staining. Hematopoietic and mesenchymal stem cell markers CD34 and Stro-1 did not co-localize with the false-positively stained cells, and the identity of these cells remains unknown.

  9. Discovering Peripheral Arterial Disease Cases from Radiology Notes Using Natural Language Processing

    PubMed Central

    Savova, Guergana K.; Fan, Jin; Ye, Zi; Murphy, Sean P.; Zheng, Jiaping; Chute, Christopher G.; Kullo, Iftikhar J.

    2010-01-01

    As part of the Electronic Medical Records and Genomics Network, we applied, extended and evaluated an open source clinical Natural Language Processing system, Mayo’s Clinical Text Analysis and Knowledge Extraction System, for the discovery of peripheral arterial disease cases from radiology reports. The manually created gold standard consisted of 223 positive, 19 negative, 63 probable and 150 unknown cases. Overall accuracy agreement between the system and the gold standard was 0.93 as compared to a named entity recognition baseline of 0.46. Sensitivity for the positive, probable and unknown cases was 0.93–0.96, and for the negative cases was 0.72. Specificity and negative predictive value for all categories were in the 90’s. The positive predictive value for the positive and unknown categories was in the high 90’s, for the negative category was 0.84, and for the probable category was 0.63. We outline the main sources of errors and suggest improvements. PMID:21347073

  10. Assessing Performance Tradeoffs in Undersea Distributed Sensor Networks

    DTIC Science & Technology

    2006-09-01

    time. We refer to this process as track-before-detect (see [5] for a description), since the final determination of a target presence is not made until...expressions for probability of successful search and probability of false search for modeling the track-before-detect process. We then describe a numerical...random manner (randomly sampled from a uniform distribution). II. SENSOR NETWORK PERFORMANCE MODELS We model the process of track-before-detect by

  11. The Illusion of the Positive: The impact of natural and induced mood on older adults’ false recall

    PubMed Central

    Emery, Lisa; Hess, Thomas M.; Elliot, Tonya

    2012-01-01

    Recent research suggests that affective and motivational processes can influence age differences in memory. In the current study, we examine the impact of both natural and induced mood state on age differences in false recall. Older and younger adults performed a version of the Deese-Roediger-McDermott (DRM; Roediger & McDermott, 1995) false memory paradigm in either their natural mood state or after a positive or negative mood induction. Results indicated that, after accounting for age differences in basic cognitive function, age-related differences in positive mood during the testing session were related to increased false recall in older adults. Inducing older adults into a positive mood also exacerbated age differences in false memory. In contrast, veridical recall did not appear to be systematically influenced by mood. Together, these results suggest that positive mood states can impact older adults’ information processing and potentially increase underlying cognitive age differences. PMID:22292431

  12. The illusion of the positive: the impact of natural and induced mood on older adults' false recall.

    PubMed

    Emery, Lisa; Hess, Thomas M; Elliot, Tonya

    2012-11-01

    Recent research suggests that affective and motivational processes can influence age differences in memory. In the current study, we examine the impact of both natural and induced mood state on age differences in false recall. Older and younger adults performed a version of the Deese-Roediger-McDermott (DRM; Roediger & McDermott, 1995, Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 803) false memory paradigm in either their natural mood state or after a positive or negative mood induction. Results indicated that, after accounting for age differences in basic cognitive function, age-related differences in positive mood during the testing session were related to increased false recall in older adults. Inducing older adults into a positive mood also exacerbated age differences in false memory. In contrast, veridical recall did not appear to be systematically influenced by mood. Together, these results suggest that positive mood states can impact older adults' information processing and potentially increase underlying cognitive age differences.

  13. Identifying insects with incomplete DNA barcode libraries, African fruit flies (Diptera: Tephritidae) as a test case.

    PubMed

    Virgilio, Massimiliano; Jordaens, Kurt; Breman, Floris C; Backeljau, Thierry; De Meyer, Marc

    2012-01-01

    We propose a general working strategy to deal with incomplete reference libraries in the DNA barcoding identification of species. Considering that (1) queries with a large genetic distance to their best DNA barcode match are more likely to be misidentified and (2) imposing a distance threshold profitably reduces identification errors, we modelled relationships between identification performances and distance thresholds in four DNA barcode libraries of Diptera (n = 4270), Lepidoptera (n = 7577), Hymenoptera (n = 2067) and Tephritidae (n = 602 DNA barcodes). In all cases, more restrictive distance thresholds produced a gradual increase in the proportion of true negatives, a gradual decrease of false positives and more abrupt variations in the proportions of true positives and false negatives. More restrictive distance thresholds improved precision, yet negatively affected accuracy due to the higher proportions of queries discarded (viz. having a distance query-best match above the threshold). Using a simple linear regression we calculated an ad hoc distance threshold for the tephritid library producing an estimated relative identification error <0.05. In line with expectations, when we used this threshold for the identification of 188 independently collected tephritids, less than 5% of queries with a distance query-best match below the threshold were misidentified. Ad hoc thresholds can be calculated for each particular reference library of DNA barcodes and should be used as a cut-off mark defining whether we can proceed with identifying the query with a known estimated error probability (e.g. 5%) or whether we should discard the query and consider alternative/complementary identification methods.
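
    A minimal sketch of the ad hoc threshold idea: fit a simple linear regression of the measured relative identification error on the candidate distance threshold, then solve for the threshold giving the target error. All numbers below are invented, not the tephritid calibration data.

        import numpy as np

        # Hypothetical calibration: candidate thresholds (% distance) vs the
        # relative identification error measured on the reference library.
        thresholds = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
        rel_error  = np.array([0.02, 0.035, 0.05, 0.07, 0.09, 0.11])

        slope, intercept = np.polyfit(thresholds, rel_error, 1)
        target = 0.05
        ad_hoc = (target - intercept) / slope
        print(f"ad hoc threshold for 5% error: {ad_hoc:.2f}% distance")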

  14. Development of the 7-Item Binge-Eating Disorder Screener (BEDS-7)

    PubMed Central

    Deal, Linda S.; DiBenedetti, Dana B.; Nelson, Lauren; Fehnel, Sheri E.; Brown, T. Michelle

    2016-01-01

    Objective Develop a brief, patient-reported screening tool designed to identify individuals with probable binge-eating disorder (BED) for further evaluation or referral to specialists. Methods Items were developed on the basis of the DSM-5 diagnostic criteria, existing tools, and input from 3 clinical experts (January 2014). Items were then refined in cognitive debriefing interviews with participants self-reporting BED characteristics (March 2014) and piloted in a multisite, cross-sectional, prospective, noninterventional study consisting of a semistructured diagnostic interview (to diagnose BED) and administration of the pilot Binge-Eating Disorder Screener (BEDS), Binge Eating Scale (BES), and RAND 36-Item Short-Form Health Survey (RAND-36) (June 2014–July 2014). The sensitivity and specificity of classification algorithms (formed from the pilot BEDS item-level responses) in predicting BED diagnosis were evaluated. The final algorithm was selected to minimize false negatives and false positives, while utilizing the fewest number of BEDS items. Results Starting with the initial BEDS item pool (20 items), the 13-item pilot BEDS resulted from the cognitive debriefing interviews (n = 13). Of the 97 participants in the noninterventional study, 16 were diagnosed with BED (10/62 female, 16%; 6/35 male, 17%). Seven BEDS items (BEDS-7) yielded 100% sensitivity and 38.7% specificity. Participants correctly identified (true positives) had poorer BES scores and RAND-36 scores than participants identified as true negatives. Conclusions Implementation of the brief, patient-reported BEDS-7 in real-world clinical practice is expected to promote better understanding of BED characteristics and help physicians identify patients who may have BED. PMID:27486542

  15. Development of the 7-Item Binge-Eating Disorder Screener (BEDS-7).

    PubMed

    Herman, Barry K; Deal, Linda S; DiBenedetti, Dana B; Nelson, Lauren; Fehnel, Sheri E; Brown, T Michelle

    2016-01-01

    Develop a brief, patient-reported screening tool designed to identify individuals with probable binge-eating disorder (BED) for further evaluation or referral to specialists. Items were developed on the basis of the DSM-5 diagnostic criteria, existing tools, and input from 3 clinical experts (January 2014). Items were then refined in cognitive debriefing interviews with participants self-reporting BED characteristics (March 2014) and piloted in a multisite, cross-sectional, prospective, noninterventional study consisting of a semistructured diagnostic interview (to diagnose BED) and administration of the pilot Binge-Eating Disorder Screener (BEDS), Binge Eating Scale (BES), and RAND 36-Item Short-Form Health Survey (RAND-36) (June 2014-July 2014). The sensitivity and specificity of classification algorithms (formed from the pilot BEDS item-level responses) in predicting BED diagnosis were evaluated. The final algorithm was selected to minimize false negatives and false positives, while utilizing the fewest number of BEDS items. Starting with the initial BEDS item pool (20 items), the 13-item pilot BEDS resulted from the cognitive debriefing interviews (n = 13). Of the 97 participants in the noninterventional study, 16 were diagnosed with BED (10/62 female, 16%; 6/35 male, 17%). Seven BEDS items (BEDS-7) yielded 100% sensitivity and 38.7% specificity. Participants correctly identified (true positives) had poorer BES scores and RAND-36 scores than participants identified as true negatives. Implementation of the brief, patient-reported BEDS-7 in real-world clinical practice is expected to promote better understanding of BED characteristics and help physicians identify patients who may have BED.

  16. Vegetables- and antioxidant-related nutrients, genetic susceptibility, and non-Hodgkin lymphoma risk

    PubMed Central

    Kelemen, Linda E.; Wang, Sophia S.; Lim, Unhee; Cozen, Wendy; Schenk, Maryjean; Hartge, Patricia; Li, Yan; Rothman, Nathaniel; Davis, Scott; Chanock, Stephen J.; Ward, Mary H.

    2009-01-01

    Genetic susceptibility to DNA oxidation, carcinogen metabolism, and altered DNA repair may increase non-Hodgkin lymphoma (NHL) risk, whereas vegetables- and antioxidant-related nutrients may decrease risk. We evaluated the interaction of a priori-defined dietary factors with 28 polymorphisms in these metabolic pathways. Incident cases (n = 1,141) were identified during 1998–2000 from four cancer registries and frequency-matched to population-based controls (n = 949). We estimated diet-gene joint effects using two-phase semi-parametric maximum-likelihood methods, which utilized genotype data from all subjects as well as 371 cases and 311 controls with available diet information. Adjusted odds ratios (95% confidence intervals) were lower among common allele carriers with higher dietary intakes. For the GSTM3 3-base insertion and higher total vegetable intake, the risk was 0.56 (0.35–0.92, p interaction = 0.03); for GSTP1 A114V and higher cruciferous vegetable intake, the risk was 0.52 (0.34–0.81, p interaction = 0.02); for OGG1 S326C and higher daily zinc intake, the risk was 0.71 (0.47–1.08, p interaction = 0.04) and for XRCC3 T241M and higher green leafy vegetable intake, the risk was 0.63 (0.41–0.97, p interaction = 0.03). Calculation of the false positive report probability indicated a high likelihood that these were false-positive associations. Although most associations have not been examined previously with NHL, our results suggest the examined polymorphisms are not modifiers of the association between vegetable and zinc intakes and NHL risk. PMID:18204928
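
    The false positive report probability invoked above (Wacholder et al., 2004) is compact enough to sketch directly; the alpha, power, and prior below are hypothetical and chosen only to show how a nominally significant result can still be a likely false positive when the prior is low.

        def fprp(alpha, power, prior):
            # False positive report probability:
            # FPRP = alpha*(1 - prior) / (alpha*(1 - prior) + power*prior)
            return alpha * (1 - prior) / (alpha * (1 - prior) + power * prior)

        # Hypothetical values: 5% alpha, 80% power, 1-in-1000 prior.
        print(f"FPRP = {fprp(0.05, 0.80, 0.001):.2f}")  # ~0.98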

  17. Identifying Insects with Incomplete DNA Barcode Libraries, African Fruit Flies (Diptera: Tephritidae) as a Test Case

    PubMed Central

    Virgilio, Massimiliano; Jordaens, Kurt; Breman, Floris C.; Backeljau, Thierry; De Meyer, Marc

    2012-01-01

    We propose a general working strategy to deal with incomplete reference libraries in the DNA barcoding identification of species. Considering that (1) queries with a large genetic distance to their best DNA barcode match are more likely to be misidentified and (2) imposing a distance threshold profitably reduces identification errors, we modelled relationships between identification performances and distance thresholds in four DNA barcode libraries of Diptera (n = 4270), Lepidoptera (n = 7577), Hymenoptera (n = 2067) and Tephritidae (n = 602 DNA barcodes). In all cases, more restrictive distance thresholds produced a gradual increase in the proportion of true negatives, a gradual decrease of false positives and more abrupt variations in the proportions of true positives and false negatives. More restrictive distance thresholds improved precision, yet negatively affected accuracy due to the higher proportions of queries discarded (viz. having a distance query-best match above the threshold). Using a simple linear regression we calculated an ad hoc distance threshold for the tephritid library producing an estimated relative identification error <0.05. In line with expectations, when we used this threshold for the identification of 188 independently collected tephritids, less than 5% of queries with a distance query-best match below the threshold were misidentified. Ad hoc thresholds can be calculated for each particular reference library of DNA barcodes and should be used as a cut-off mark defining whether we can proceed with identifying the query with a known estimated error probability (e.g. 5%) or whether we should discard the query and consider alternative/complementary identification methods. PMID:22359600

  18. Filtering Entities to Optimize Identification of Adverse Drug Reaction From Social Media: How Can the Number of Words Between Entities in the Messages Help?

    PubMed

    Abdellaoui, Redhouane; Schück, Stéphane; Texier, Nathalie; Burgun, Anita

    2017-06-22

    With the increasing popularity of Web 2.0 applications, social media has made it possible for individuals to post messages on adverse drug reactions. In such online conversations, patients discuss their symptoms, medical history, and diseases. These disorders may correspond to adverse drug reactions (ADRs) or any other medical condition. Therefore, methods must be developed to distinguish between false positives and true ADR declarations. The aim of this study was to investigate a method for filtering out disorder terms that did not correspond to adverse events by using the distance (as number of words) between the drug term and the disorder or symptom term in the post. We hypothesized that the shorter the distance between the disorder name and the drug, the higher the probability of it being an ADR. We analyzed a corpus of 648 messages corresponding to a total of 1654 (drug and disorder) pairs from 5 French forums using Gaussian mixture models and an expectation-maximization (EM) algorithm. The distribution of the distances between the drug term and the disorder term enabled the filtering of 50.03% (733/1465) of the disorders that were not ADRs. Our filtering strategy achieved a precision of 95.8% and a recall of 50.0%. This study suggests that the distance between terms can be used for identifying false positives, thereby improving ADR detection in social media. ©Redhouane Abdellaoui, Stéphane Schück, Nathalie Texier, Anita Burgun. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 22.06.2017.
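
    A minimal sketch of this kind of distance-based filtering with a two-component Gaussian mixture fitted by EM, using synthetic distances in place of the study's corpus (the means and spreads below are invented):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Synthetic drug-disorder word distances: plausible ADR pairs close,
        # unrelated mentions farther apart (hypothetical parameters).
        rng = np.random.default_rng(0)
        distances = np.concatenate([rng.normal(3, 1, 200),
                                    rng.normal(15, 5, 200)])
        X = distances.reshape(-1, 1)

        gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
        labels = gmm.predict(X)
        near = int(np.argmin(gmm.means_.ravel()))  # component with smaller mean
        candidates = X[labels == near]             # pairs kept as candidate ADRs
        print(f"kept {len(candidates)} of {len(X)} pairs")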

  19. Evaluation of the Architect HIV Ag/Ab Combo Assay in a low-prevalence setting: The role of samples with a low S/CO ratio.

    PubMed

    Alonso, Roberto; Pérez-García, Felipe; Gijón, Paloma; Collazos, Ana; Bouza, Emilio

    2018-06-01

    The Architect HIV Ag/Ab Combo Assay, a fourth-generation ELISA, has proven to be highly reliable for the diagnosis of HIV infection. However, its high sensitivity may lead to false-positive results. To evaluate the diagnostic performance of Architect in a low-prevalence population and to assess the role of the sample-to-cutoff ratio (S/CO) in reducing the frequency of false-positive results. We conducted a retrospective study of samples analyzed by Architect between January 2015 and June 2017. Positive samples were confirmed by immunoblot (RIBA) or nucleic acid amplification tests (NAATs). Different S/CO thresholds (1, 2.5, 10, 25, and 100) were analyzed to determine sensitivity, specificity, and negative and positive predictive values (NPV, PPV). ROC analysis was used to determine the optimal S/CO. A total of 69,471 samples were analyzed. 709 (1.02%) were positive by Architect. Of these, 63 (8.89%) were false-positive results. Most of them (93.65%) were in samples with S/CO < 100. However, most confirmations by NAATs (12 out of 19 cases) were also recorded for these samples. The optimal S/CO was 2.5, which provided the highest area under the ROC curve (0.9998) and no false-negative results. With this S/CO, sensitivity and specificity were 100.0%, and PPV and NPV were 95.8% and 100.0%, respectively. In addition, the frequency of false-positive results decreased significantly to 4.15%. Although Architect generates a relatively high number of false-positive results, raising the S/CO limit too much to increase specificity can lead to false-negative results, especially in newly infected individuals. Copyright © 2018 Elsevier B.V. All rights reserved.
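
    Choosing an S/CO cutoff of this kind is a standard ROC exercise. A sketch with invented S/CO values and confirmation results (it does not reproduce the study's optimum of 2.5); the Youden index is one common way to pick the operating point.

        import numpy as np
        from sklearn.metrics import roc_curve

        # Hypothetical S/CO ratios and confirmed status (1 = true infection).
        s_co   = np.array([0.3, 0.9, 1.2, 1.8, 2.1, 2.6, 5.0, 40.0, 120.0, 350.0])
        status = np.array([0,   0,   0,   0,   0,   1,   1,   1,    1,     1])

        fpr, tpr, thr = roc_curve(status, s_co)
        youden = tpr - fpr
        best = thr[np.argmax(youden)]
        print(f"optimal S/CO threshold (Youden index): {best}")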

  20. Prehospital Acute ST-Elevation Myocardial Infarction Identification in San Diego: A Retrospective Analysis of the Effect of a New Software Algorithm.

    PubMed

    Coffey, Christanne; Serra, John; Goebel, Mat; Espinoza, Sarah; Castillo, Edward; Dunford, James

    2018-05-03

    A significant increase in false positive ST-elevation myocardial infarction (STEMI) electrocardiogram interpretations was noted after replacement of all of the City of San Diego's 110 monitor-defibrillator units with a new brand. These concerns were brought to the manufacturer and a revised interpretive algorithm was implemented. This study evaluated the effects of a revised interpretation algorithm to identify STEMI when used by San Diego paramedics. Data were reviewed 6 months before and 6 months after the introduction of a revised interpretation algorithm. True-positive and false-positive interpretations were identified. Factors contributing to an incorrect interpretation were assessed and patient demographics were collected. A total of 372 (234 preimplementation, 138 postimplementation) cases met inclusion criteria. There was a significant reduction in false positive STEMI (150 preimplementation, 40 postimplementation; p < 0.001) after implementation. The most common factors resulting in false positive before implementation were right bundle branch block, left bundle branch block, and atrial fibrillation. The new algorithm corrected for these misinterpretations with most postimplementation false positives attributed to benign early repolarization and poor data quality. Subsequent follow-up at 10 months showed maintenance of the observed reduction in false positives. This study shows that introducing a revised 12-lead interpretive algorithm resulted in a significant reduction in the number of false positive STEMI electrocardiogram interpretations in a large urban emergency medical services system. Rigorous testing and standardization of new interpretative software is recommended before introduction into a clinical setting to prevent issues resulting from inappropriate cardiac catheterization laboratory activations. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Multi-level scanning method for defect inspection

    DOEpatents

    Bokor, Jeffrey; Jeong, Seongtae

    2002-01-01

    A method for performing scanned defect inspection of a collection of contiguous areas using a specified false-alarm-rate and capture-rate within an inspection system that has characteristic seek times between inspection locations. The multi-stage method involves setting an increased false-alarm-rate for a first stage of scanning, wherein subsequent stages of scanning inspect only the detected areas of probable defects at lowered values for the false-alarm-rate. For scanning inspection operations wherein the seek time and area uncertainty are favorable, the method can substantially increase inspection throughput.

  2. Addendum to the article: Misuse of null hypothesis significance testing: Would estimation of positive and negative predictive values improve certainty of chemical risk assessment?

    PubMed

    Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf

    2015-03-01

    We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: They inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results, or more extreme results, if the null hypothesis of no effect were true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R) that the tested effect is real. The present document discusses the challenges of estimating R.
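
    A minimal sketch of the standard PPV/NPV calculation given the test's alpha, its power, and the prior probability R that the tested effect is real; the inputs below are hypothetical.

        def ppv_npv(alpha, power, R):
            # PPV: probability a significant result reflects a real effect.
            ppv = power * R / (power * R + alpha * (1 - R))
            # NPV: probability a non-significant result reflects no effect.
            npv = (1 - alpha) * (1 - R) / ((1 - alpha) * (1 - R) + (1 - power) * R)
            return ppv, npv

        # Hypothetical inputs: alpha = 0.05, power = 0.80, R = 0.10.
        print(ppv_npv(0.05, 0.80, 0.10))  # PPV ~ 0.64, NPV ~ 0.98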

  3. ROCS: a Reproducibility Index and Confidence Score for Interaction Proteomics Studies

    PubMed Central

    2012-01-01

    Background Affinity-Purification Mass-Spectrometry (AP-MS) provides a powerful means of identifying protein complexes and interactions. Several important challenges exist in interpreting the results of AP-MS experiments. First, the reproducibility of AP-MS experimental replicates can be low, due both to technical variability and the dynamic nature of protein interactions in the cell. Second, the identification of true protein-protein interactions in AP-MS experiments is subject to inaccuracy due to high false negative and false positive rates. Several experimental approaches can be used to mitigate these drawbacks, including the use of replicated and control experiments and relative quantification to sensitively distinguish true interacting proteins from false ones. Methods To address the issues of reproducibility and accuracy of protein-protein interactions, we introduce a two-step method, called ROCS, which makes use of Indicator Prey Proteins to select reproducible AP-MS experiments, and of Confidence Scores to select specific protein-protein interactions. The Indicator Prey Proteins account for measures of protein identifiability as well as protein reproducibility, effectively allowing removal of outlier experiments that contribute noise and affect downstream inferences. The filtered set of experiments is then used in the Protein-Protein Interaction (PPI) scoring step. Prey protein scoring is done by computing a Confidence Score, which accounts for the probability of occurrence of prey proteins in the bait experiments relative to the control experiment, where the significance cutoff parameter is estimated by simultaneously controlling false positives and false negatives against metrics of false discovery rate and biological coherence, respectively. In summary, the ROCS method relies on automatic, objective criteria for parameter estimation and error-controlled procedures. Results We illustrate the performance of our method by applying it to five previously published AP-MS experiments, each containing well characterized protein interactions, allowing for systematic benchmarking of ROCS. We show that our method may be used on its own to make accurate identifications of specific, biologically relevant protein-protein interactions, or in combination with other AP-MS scoring methods to significantly improve inferences. Conclusions Our method addresses important issues encountered in AP-MS datasets, making ROCS a very promising tool for this purpose, either on its own or in conjunction with other methods. We anticipate that our methodology may be used more generally in proteomics studies and databases, where experimental reproducibility issues arise. The method is implemented in the R language, and is available as an R package called “ROCS”, freely available from the CRAN repository http://cran.r-project.org/. PMID:22682516

  4. Interval Breast Cancer Rates and Histopathologic Tumor Characteristics after False-Positive Findings at Mammography in a Population-based Screening Program.

    PubMed

    Hofvind, Solveig; Sagstad, Silje; Sebuødegård, Sofie; Chen, Ying; Roman, Marta; Lee, Christoph I

    2018-04-01

    Purpose To compare rates and tumor characteristics of interval breast cancers (IBCs) detected after a negative versus false-positive screening among women participating in the Norwegian Breast Cancer Screening Program. Materials and Methods The Cancer Registry Regulation approved this retrospective study. Information about 423 445 women aged 49-71 years who underwent 789 481 full-field digital mammographic screening examinations during 2004-2012 was extracted from the Cancer Registry of Norway. Rates and odds ratios of IBC among women with a negative (the reference group) versus a false-positive screening were estimated by using logistic regression models adjusted for age at diagnosis and county of residence. Results A total of 1302 IBCs were diagnosed after 789 481 screening examinations, of which 7.0% (91 of 1302) were detected among women with a false-positive screening as the most recent breast imaging examination before detection. By using negative screening as the reference, adjusted odds ratios of IBCs were 3.3 (95% confidence interval [CI]: 2.6, 4.2) and 2.8 (95% CI: 1.8, 4.4) for women with a false-positive screening without and with needle biopsy, respectively. Women with a previous negative screening had a significantly lower proportion of tumors that were 10 mm or less (14.3% [150 of 1049] vs 50.0% [seven of 14], respectively; P < .01) and grade I tumors (13.2% [147 of 1114] vs 42.9% [six of 14]; P < .01), but a higher proportion of cases with lymph nodes positive for cancer (40.9% [442 of 1080] vs 13.3% [two of 15], respectively; P = .03) compared with women with a previous false-positive screening with benign biopsy. A retrospective review of the screening mammographic examinations identified 42.9% (39 of 91) of the false-positive cases to be the same lesion as the IBC. Conclusion By using a negative screening as the reference, a false-positive screening examination increased the risk of an IBC three-fold. The tumor characteristics of IBC after a negative screening were less favorable compared with those detected after a previous false-positive screening. © RSNA, 2017 Online supplemental material is available for this article.

  5. Windshear warning aerospatiale approach

    NASA Technical Reports Server (NTRS)

    Bonafe, J. L.

    1988-01-01

    Vugraphs and transcribed remarks of a presentation on Aerospatiale's approach to windshear warning systems are given. Information is given on low altitude wind shear probability, wind shear warning models and warning system false alarms.

  6. Interference Information Based Power Control for Cognitive Radio with Multi-Hop Cooperative Sensing

    NASA Astrophysics Data System (ADS)

    Yu, Youngjin; Murata, Hidekazu; Yamamoto, Koji; Yoshida, Susumu

    Reliable detection of other radio systems is crucial for systems that share the same frequency band. In wireless communication channels, there is uncertainty in the received signal level due to multipath fading and shadowing. Cooperative sensing techniques in which radio stations share their sensing information can improve the detection probability of other systems. In this paper, a new cooperative sensing scheme that reduces the false detection probability while maintaining the outage probability of other systems is investigated. In the proposed system, sensing information is collected using multi-hop transmission from all sensing stations that detect other systems, and transmission decisions are based on the received sensing information. The proposed system also controls the transmit power based on the received CINRs from the sensing stations. Simulation results reveal that the proposed system can reduce the outage probability of other systems, or improve its link success probability.

  7. Coincidence probabilities for spacecraft gravitational wave experiments - Massive coalescing binaries

    NASA Technical Reports Server (NTRS)

    Tinto, Massimo; Armstrong, J. W.

    1991-01-01

    Massive coalescing binary systems are candidate sources of gravitational radiation in the millihertz frequency band accessible to spacecraft Doppler tracking experiments. This paper discusses signal processing and detection probability for waves from coalescing binaries in the regime where the signal frequency increases linearly with time, i.e., 'chirp' signals. Using known noise statistics, thresholds with given false alarm probabilities are established for one- and two-spacecraft experiments. Given the threshold, the detection probability is calculated as a function of gravitational wave amplitude for both one- and two-spacecraft experiments, assuming random polarization states and under various assumptions about wave directions. This allows quantitative statements about the detection efficiency of these experiments and the utility of coincidence experiments. In particular, coincidence probabilities for two-spacecraft experiments are insensitive to the angle between the directions to the two spacecraft, indicating that near-optical experiments can be done without constraints on spacecraft trajectories.
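
    For the simplest case of a detection statistic that is Gaussian under noise alone, the threshold-setting step can be sketched as below; the noise level, false alarm probability, and signal amplitude are hypothetical stand-ins for the paper's Doppler-tracking statistics.

        from scipy.stats import norm

        sigma = 1.0   # noise standard deviation (hypothetical)
        p_fa = 1e-3   # desired false alarm probability per trial

        threshold = sigma * norm.isf(p_fa)  # one-sided detection threshold
        # Detection probability for a signal of (hypothetical) amplitude A:
        A = 4.0
        p_det = norm.sf((threshold - A) / sigma)
        print(f"threshold = {threshold:.2f}, P(detect) = {p_det:.2f}")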

  8. Remote Measurements of Heart and Respiration Rates for Telemedicine

    PubMed Central

    Qian, Yi; Tsien, Joe Z.

    2013-01-01

    Non-contact and low-cost measurements of heart and respiration rates are highly desirable for telemedicine. Here, we describe a novel technique to extract the blood volume pulse and respiratory wave from single-channel images captured by a video camera under both day and night conditions. The principle of our technique is to uncover the temporal dynamics of heart beat and breathing rate through delay-coordinate transformation and independent component analysis-based deconstruction of the single-channel images. Our method further achieves robust elimination of false positives by applying ratio-variation probability-distribution filtering. Moreover, it enables a much-needed, low-cost means of preventing sudden infant death syndrome in newborn infants and detecting stroke and heart attack in the elderly in home environments. This noncontact-based method can also be applied to a variety of animal model organisms for biomedical research. PMID:24115996

  9. The Quantitative Science of Evaluating Imaging Evidence.

    PubMed

    Genders, Tessa S S; Ferket, Bart S; Hunink, M G Myriam

    2017-03-01

    Cardiovascular diagnostic imaging tests are increasingly used in everyday clinical practice, but are often imperfect, just like any other diagnostic test. The performance of a cardiovascular diagnostic imaging test is usually expressed in terms of sensitivity and specificity compared with the reference standard (gold standard) for diagnosing the disease. However, evidence-based application of a diagnostic test also requires knowledge about the pre-test probability of disease, the benefit of making a correct diagnosis, the harm caused by false-positive imaging test results, and potential adverse effects of performing the test itself. To assist in clinical decision making regarding appropriate use of cardiovascular diagnostic imaging tests, we reviewed quantitative concepts related to diagnostic performance (e.g., sensitivity, specificity, predictive values, likelihood ratios), as well as possible biases and solutions in diagnostic performance studies, Bayesian principles, and the threshold approach to decision making. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
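
    The Bayesian step described here is compact enough to sketch directly: post-test odds are pre-test odds multiplied by the likelihood ratio. Sensitivity, specificity, and pre-test probability below are hypothetical.

        def post_test_probability(pre_p, sens, spec, positive=True):
            # Bayes via likelihood ratios: post-odds = pre-odds * LR.
            pre_odds = pre_p / (1 - pre_p)
            lr = sens / (1 - spec) if positive else (1 - sens) / spec
            post_odds = pre_odds * lr
            return post_odds / (1 + post_odds)

        # Hypothetical test: sens 0.90, spec 0.80; pre-test probability 30%.
        print(f"{post_test_probability(0.30, 0.90, 0.80):.2f}")  # ~0.66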

  10. A plastic scintillator-based muon tomography system with an integrated muon spectrometer

    NASA Astrophysics Data System (ADS)

    Anghel, V.; Armitage, J.; Baig, F.; Boniface, K.; Boudjemline, K.; Bueno, J.; Charles, E.; Drouin, P.-L.; Erlandson, A.; Gallant, G.; Gazit, R.; Godin, D.; Golovko, V. V.; Howard, C.; Hydomako, R.; Jewett, C.; Jonkmans, G.; Liu, Z.; Robichaud, A.; Stocki, T. J.; Thompson, M.; Waller, D.

    2015-10-01

    A muon scattering tomography system which uses extruded plastic scintillator bars for muon tracking and a dedicated muon spectrometer that measures scattering through steel slabs has been constructed and successfully tested. The atmospheric muon detection efficiency is measured to be 97% per plane on average and the average intrinsic hit resolution is 2.5 mm. In addition to creating a variety of three-dimensional images of objects of interest, a quantitative study has been carried out to investigate the impact of including muon momentum measurements when attempting to detect high-density, high-Z material. As expected, the addition of momentum information improves the performance of the system. For a fixed data-taking time of 60 s and a fixed false positive fraction, the probability to detect a target increases when momentum information is used. This is the first demonstration of the use of muon momentum information from dedicated spectrometer measurements in muon scattering tomography.

  11. Family and twin strategies as a head start in defining prodromes and endophenotypes for hypothetical early-interventions in schizophrenia.

    PubMed

    Gottesman, I I; Erlenmeyer-Kimling, L

    2001-08-01

    In an effort to share the experiences of 'genotype-hunters' (who have approached the difficult task of forecasting future schizophrenia in the young offspring or other relatives of index cases, in new samples guided by the prior probabilities of 15% in offspring or 50% in identical co-twins) with 'early-interventionists' (who focus on purported prodromal symptoms in children who would be treated pharmacologically to prevent the development of schizophrenia), we provide a focused review that emphasizes the hazards of false positives in both approaches. Despite the advantages prospective high-risk strategies have gained from clinical and laboratory findings that implicate some prodromal signs and endophenotypes, e.g. attention, memory, and information processing evaluations, the yields are not sufficient for practical applications involving antipsychotic drugs for undiagnosed children. Even more caution than usual is required, given the suggestions that the developing neocortex is vulnerable to dopaminergic exposure.

  12. Design of a DNA chip for detection of unknown genetically modified organisms (GMOs).

    PubMed

    Nesvold, Håvard; Kristoffersen, Anja Bråthen; Holst-Jensen, Arne; Berdal, Knut G

    2005-05-01

    Unknown genetically modified organisms (GMOs) have not undergone a risk evaluation, and hence might pose a danger to health and environment. There are, today, no methods for detecting unknown GMOs. In this paper we propose a novel method intended as a first step in an approach for detecting unknown genetically modified (GM) material in a single plant. A model is designed where biological and combinatorial reduction rules are applied to a set of DNA chip probes containing all possible sequences of uniform length n, creating probes capable of detecting unknown GMOs. The model is theoretically tested for Arabidopsis thaliana Columbia, and the probabilities for detecting inserts and receiving false positives are assessed for various parameters for this organism. From a theoretical standpoint, the model looks very promising but should be tested further in the laboratory. The model and algorithms will be available upon request to the corresponding author.

  13. Tips and Tricks for Successful Application of Statistical Methods to Biological Data.

    PubMed

    Schlenker, Evelyn

    2016-01-01

    This chapter discusses experimental design and the use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, and normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data, allowing use of parametric tests. Alternatively, with skewed data nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance the risks of type 1 errors (false positives) and type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (randomized clinical trials and cohort studies) or odds ratios (case-control studies) are utilized; although both use 2 × 2 tables, their premise and calculations differ, as the sketch below illustrates. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and references are available to help continued investigation of experimental designs and appropriate data analysis.
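
    A sketch of the two calculations from a hypothetical 2 × 2 table:

        # Rows: exposed / unexposed; columns: diseased / not diseased.
        # Hypothetical counts for illustration.
        a, b = 30, 70   # exposed:   diseased, not diseased
        c, d = 10, 90   # unexposed: diseased, not diseased

        rr = (a / (a + b)) / (c / (c + d))  # relative risk (cohorts, RCTs)
        odds_ratio = (a * d) / (b * c)      # odds ratio (case-control studies)
        print(f"RR = {rr:.2f}, OR = {odds_ratio:.2f}")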

  14. Automated matching of supine and prone colonic polyps based on PCA and SVMs

    NASA Astrophysics Data System (ADS)

    Wang, Shijun; Van Uitert, Robert L.; Summers, Ronald M.

    2008-03-01

    Computed tomographic colonography (CTC) is a feasible and minimally invasive method for the detection of colorectal polyps and cancer screening. In current practice, a patient is scanned twice during the CTC examination - once supine and once prone. To assist radiologists in evaluating colon polyp candidates in both scans, a computer-aided detection (CAD) system should provide not only the locations of suspicious polyps, but also the possible matched pairs of polyps in the two scans. In this paper, we propose a new automated matching method based on features extracted from polyp candidates using principal component analysis (PCA) and Support Vector Machines (SVMs). Our dataset comes from 104 CT scans of 52 patients scanned in supine and prone positions, collected from three medical centers. From it we constructed two groups of matched polyp candidates according to the size of true polyps: group A contains 12 true polyp pairs (> 9 mm) and 454 false pairs; group B contains 24 true polyp pairs (6-9 mm) and 514 false pairs. Using PCA, we reduced the dimensionality of the original data (157 attributes) to 30 dimensions. We performed a leave-one-patient-out test on the two groups of data. ROC analysis shows that bigger polyps are easier to match than smaller polyps. On group A data, when the false alarm probability is 0.18, the sensitivity of the SVM reaches 0.83, which shows that automated matching of polyp candidates is practicable for clinical applications.
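
    A minimal sketch of a PCA-plus-SVM pipeline of the kind described, with random stand-in data in place of the 157 pair features (a faithful evaluation would use the paper's leave-one-patient-out splits rather than fitting on all data):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Random stand-in for the 466 group-A candidate pairs x 157 features;
        # label 1 = true matched pair, 0 = false pair (illustration only).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(466, 157))
        y = rng.integers(0, 2, size=466)

        clf = make_pipeline(StandardScaler(),        # scaling is our assumption
                            PCA(n_components=30),    # 157 -> 30 dimensions
                            SVC(probability=True))   # scores for ROC analysis
        clf.fit(X, y)
        scores = clf.predict_proba(X)[:, 1]          # matching scores per pair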

  15. A statistical model of false negative and false positive detection of phase singularities.

    PubMed

    Jacquemet, Vincent

    2017-10-01

    The complexity of cardiac fibrillation dynamics can be assessed by analyzing the distribution of phase singularities (PSs) observed using mapping systems. Interelectrode distance, however, limits the accuracy of PS detection. To investigate in a theoretical framework the PS false negative and false positive rates in relation to the characteristics of the mapping system and fibrillation dynamics, we propose a statistical model of phase maps with controllable number and locations of PSs. In this model, phase maps are generated from randomly distributed PSs with physiologically-plausible directions of rotation. Noise and distortion of the phase are added. PSs are detected using topological charge contour integrals on regular grids of varying resolutions. Over 100 × 10^6 realizations of the random field process are used to estimate average false negative and false positive rates using a Monte-Carlo approach. The false detection rates are shown to depend on the average distance between neighboring PSs expressed in units of interelectrode distance, following approximately a power law with exponents in the range of 1.14 to 2 for false negatives and around 2.8 for false positives. In the presence of noise or distortion of phase, false detection rates at high resolution tend to a non-zero noise-dependent lower bound. This model provides an easy-to-implement tool for benchmarking PS detection algorithms over a broad range of configurations with multiple PSs.

  16. The ranking probability approach and its usage in design and analysis of large-scale studies.

    PubMed

    Kuo, Chia-Ling; Zaykin, Dmitri

    2013-01-01

    In experiments with many statistical tests there is a need to balance type I and type II error rates while taking multiplicity into account. In the traditional approach, the nominal α-level such as 0.05 is adjusted by the number of tests, m, i.e., as 0.05/m. Assuming that some proportion of tests represent "true signals", that is, originate from a scenario where the null hypothesis is false, power depends on the number of true signals and the respective distribution of effect sizes. One way to define power is for it to be the probability of making at least one correct rejection at the assumed α-level. We advocate an alternative way of establishing how "well-powered" a study is. In our approach, useful for studies with multiple tests, the ranking probability P(k) is controlled, defined as the probability of making at least k correct rejections while rejecting the hypotheses with the k smallest P-values. The two approaches are statistically related. The probability that the smallest P-value is a true signal (i.e., P(1)) is equal to the power at the level α/m, to an excellent approximation. Ranking probabilities are also related to the false discovery rate and to the Bayesian posterior probability of the null hypothesis. We study properties of our approach when the effect size distribution is replaced for convenience by a single "typical" value taken to be the mean of the underlying distribution. We conclude that its performance is often satisfactory under this simplification; however, substantial imprecision is to be expected when m is very large and k is small. Precision is largely restored when three values with the respective abundances are used instead of a single typical effect size value.

  17. Motivation and effort in individuals with social anhedonia

    PubMed Central

    McCarthy, Julie M.; Treadway, Michael T.; Blanchard, Jack J.

    2015-01-01

    It has been proposed that anhedonia may, in part, reflect difficulties in reward processing and effortful decision-making. The current study aimed to replicate previous findings of effortful decision-making deficits associated with elevated anhedonia and expand upon these findings by investigating whether these decision-making deficits are specific to elevated social anhedonia or are also associated with elevated positive schizotypy characteristics. The current study compared controls (n = 40) to individuals elevated on social anhedonia (n = 30), and individuals elevated on perceptual aberration/magical ideation (n = 30) on the Effort Expenditure for Rewards Task (EEfRT). Across groups, participants chose a higher proportion of hard tasks with increasing probability of reward and reward magnitude, demonstrating sensitivity to probability and reward values. Contrary to our expectations, when the probability of reward was most uncertain (50% probability), at low and medium reward values, the social anhedonia group demonstrated more effortful decision-making than either individuals high in positive schizotypy or controls. The positive schizotypy group only differed from controls (making less effortful choices than controls) when reward probability was lowest (12%) and the magnitude of reward was the smallest. Our results suggest that social anhedonia is related to intact motivation and effort for monetary rewards, but that individuals with this characteristic display a unique and perhaps inefficient pattern of effort allocation when the probability of reward is most uncertain. Future research is needed to better understand effortful decision-making and the processing of reward across a range of individual difference characteristics. PMID:25888337

  18. A stochastic inference of de novo CNV detection and association test in multiplex schizophrenia families.

    PubMed

    Wang, Shi-Heng; Chen, Wei J; Tsai, Yu-Chin; Huang, Yung-Hsiang; Hwu, Hai-Gwo; Hsiao, Chuhsing K

    2013-01-01

    The copy number variation (CNV) is a type of genetic variation in the genome. It is measured based on signal intensity measures and can be assessed repeatedly to reduce the uncertainty in PCR-based typing. Studies have shown that CNVs may lead to phenotypic variation and modification of disease expression. Various challenges exist, however, in the exploration of CNV-disease association. Here we construct latent variables to infer the discrete CNV values and to estimate the probability of mutations. In addition, we propose to pool rare variants to increase the statistical power and we conduct family studies to mitigate the computational burden in determining the composition of CNVs on each chromosome. To explore in a stochastic sense the association between the collapsing CNV variants and disease status, we utilize a Bayesian hierarchical model incorporating the mutation parameters. This model assigns integers in a probabilistic sense to the quantitatively measured copy numbers, and is able to test simultaneously the association for all variants of interest in a regression framework. This integrative model can account for the uncertainty in copy number assignment and differentiate if the variation was de novo or inherited on the basis of posterior probabilities. For family studies, this model can accommodate the dependence within family members and among repeated CNV data. Moreover, the Mendelian rule can be assumed under this model and yet the genetic variation, including de novo and inherited variation, can still be included and quantified directly for each individual. Finally, simulation studies show that this model has high true positive and low false positive rates in the detection of de novo mutation.

  19. False-positive IgM for CMV in pregnant women with autoimmune disease: a novel prognostic factor for poor pregnancy outcome.

    PubMed

    De Carolis, S; Santucci, S; Botta, A; Garofalo, S; Martino, C; Perrelli, A; Salvi, S; Degennaro, Va; de Belvis, Ag; Ferrazzani, S; Scambia, G

    2010-06-01

    Our aims were to assess the frequency of false-positive IgM antibodies for cytomegalovirus in pregnant women with autoimmune diseases and in healthy women (controls) and to determine their relationship with pregnancy outcome. Data from 133 pregnancies in 118 patients with autoimmune diseases and from 222 pregnancies in 198 controls were assessed. When positive IgM for cytomegalovirus was detected, IgG avidity, cytomegalovirus isolation and polymerase chain reaction for CMV-DNA in maternal urine and amniotic fluid samples were performed in order to identify primary infection or false positivity. A statistically significantly higher rate of false-positive IgM was found in pregnancies with autoimmune diseases (16.5%) in comparison with controls (0.9%). A worse pregnancy outcome was observed among patients with autoimmune disease and false cytomegalovirus IgM in comparison with those without false positivity: earlier week of delivery (p = 0.017), lower neonatal birth weight (p = 0.0004) and neonatal birth weight percentile (p = 0.002), higher rate of intrauterine growth restriction (p = 0.02) and babies weighing less than 2000 g (p = 0.025) were encountered. The presence of false cytomegalovirus IgM in patients with autoimmune diseases could be used as a novel prognostic index of poor pregnancy outcome: it may reflect a non-specific activation of the immune system that could negatively affect pregnancy outcome. Lupus (2010) 19, 844-849.

  20. [Analysis of false-positive reaction for bacterial detection of blood samples with the automated microbial detection system BacT/ALERT 3D].

    PubMed

    Zhu, Li-Wei; Yang, Xue-Mei; Xu, Xiao-Qin; Xu, Jian; Lu, Huang-Jun; Yan, Li-Xing

    2008-10-01

    This study was aimed at analyzing the results of false-positive reactions in the bacterial detection of blood samples with the BacT/ALERT 3D system, at evaluating the specificity of this system, and at decreasing the false-positive reaction rate. Each reactive flask from the past five years was processed for bacteria isolation and identification. When the initial cultures were positive, the remaining samples and the corresponding units were recultured if still available. In total, 11395 blood samples were tested. The results indicated that 122 samples (1.07%) were positive at initial culture; of these, 107 samples (87.7%) were found to contain bacteria, and 15 samples (12.3%) yielded nothing. The detection curves of positive samples resulting from bacterial growth showed a clear ascent. It is worth noting that the incubator temperature should be stabilized, avoiding fluctuation; when a culture triggers an alarm, the reaction flask should be kept for some additional hours of incubation so as to trace a sharply increasing signal supporting the judgement of true bacterial growth. In conclusion, maintaining temperature stability and avoiding temperature fluctuation in the incubator can decrease the occurrence of false-positive reactions in the detection process. Reaction flasks with positive results at initial culture should be recultured, and the growth curves examined for a sharply ascending logarithmic growth phase; both measures help to distinguish false-positive reactions from true positives and thus increase the specificity of the BacT/ALERT system.
