Muthukumar, Alagarraju; Alatoom, Adnan; Burns, Susan; Ashmore, Jerry; Kim, Anne; Emerson, Brian; Bannister, Edward; Ansari, M Qasim
2015-01-01
To assess the false-positive and false-negative rates of a 4th-generation human immunodeficiency virus (HIV) assay, the Abbott ARCHITECT, vs 2 HIV 3rd-generation assays, the Siemens Centaur and the Ortho-Clinical Diagnostics Vitros. We examined 123 patient specimens. In the first phase of the study, we compared 99 specimens that had a positive screening result via the 3rd-generation Vitros assay (10 positive, 82 negative, and 7 indeterminate via confirmatory immunofluorescent assay [IFA]/Western blot [WB] testing). In the second phase, we assessed 24 HIV-1 RNA-positive (positive result via the nucleic acid amplification test [NAAT] and negative/indeterminate results via the WB test) specimens harboring acute HIV infection. The 4th-generation ARCHITECT assay yielded fewer false-positive results (n = 2) than the 3rd-generation Centaur (n = 9; P = .02) and Vitros (n = 82; P < .001) assays. One confirmed positive case had a false-negative result via the Centaur assay. When specimens from the 24 patients with acute HIV-1 infection were tested, the ARCHITECT assay yielded fewer false-negative results (n = 5) than the Centaur (n = 10) (P = .13) and the other 3rd-generation tests (n = 16) (P = .002). This study indicates that the 4th-generation ARCHITECT HIV assay yields fewer false-positive and false-negative results than the 3rd-generation HIV assays we tested. Copyright © by the American Society for Clinical Pathology (ASCP).
Paige F.B. Ferguson; Michael J. Conroy; Jeffrey Hepinstall-Cymerman; Nigel Yoccoz
2015-01-01
False positive detections, such as species misidentifications, occur in ecological data, although many models do not account for them. Consequently, these models are expected to generate biased inference. The main challenge in an analysis of data with false positives is to distinguish false positive and false negative...
Daxboeck, Florian; Dornbusch, Hans Jürgen; Krause, Robert; Assadian, Ojan; Wenisch, Christoph
2004-01-01
A small but significant proportion of blood cultures processed by the BACTEC 9000 series systems is signaled positive, while subsequent Gram's stain and culture on solid media yield no pathogens. In this study, 15 "false-positive" vials (7 aerobic, 8 anaerobic) from 15 patients were investigated for the presence of bacteria and fungi by eubacterial 16S rDNA and panfungal 18S rDNA amplification, respectively. All samples turned out negative by both methods. Most patients (7) had neutropenia, which does not support the theory that high leukocyte counts enhance the generation of false-positive results. In conclusion, the results of this study indicate that false-positive results generated by the BACTEC 9000 series are inherent to the automated detection and not due to the growth of fastidious organisms.
Stochastic resonance-enhanced laser-based particle detector.
Dutta, A; Werner, C
2009-01-01
This paper presents a laser-based particle detector whose response was enhanced by modulating the laser diode with a white-noise generator. A laser sheet was generated to cast a shadow of the object on a 200 dots-per-inch, 512 × 1 pixel linear sensor array. The laser diode was modulated with a white-noise generator to achieve stochastic resonance. The white-noise generator essentially amplified the wide-bandwidth (several hundred MHz) noise produced by a reverse-biased Zener diode operating in junction-breakdown mode. The gain of the amplifier in the white-noise generator was set such that the receiver operating characteristic plot provided the best discriminability. A monofiber 40 AWG (approximately 80 μm) wire was detected with an approximately 88% true-positive rate and an approximately 19% false-positive rate in the presence of white-noise modulation, and with an approximately 71% true-positive rate and an approximately 15% false-positive rate in its absence.
Chacón, Lucía; Mateos, María Luisa; Holguín, África
2017-07-01
Despite the high specificity of fourth-generation enzyme immunoassays (4th-gen EIA) for screening during HIV diagnosis, their positive predictive value is low in populations with low HIV prevalence. Thus, screening should be optimized to reduce false-positive results. The influence of sample-to-cutoff (S/CO) values from a 4th-gen EIA on the false-positive rate during routine HIV diagnosis in a low-prevalence population was evaluated. A total of 30,201 sera were tested for HIV diagnosis using the Abbott Architect® HIV Ag/Ab Combo 4th-gen EIA at a hospital in Spain over 17 months. Architect S/CO values were recorded, comparing the HIV-1-positive results following the Architect interpretation (S/CO ≥ 1) with the final HIV-1 diagnosis by confirmatory tests (line immunoassay, LIA, and/or nucleic acid test, NAT). A ROC curve analysis was also performed. Among the 30,201 HIV tests performed, 256 (0.85%) were positive according to the Architect interpretation (S/CO ≥ 1), but only 229 (0.76%) were definitively HIV-1 positive after LIA and/or NAT. Thus, 27 (10.5%) of the 256 samples with S/CO ≥ 1 by Architect were false-positive diagnoses. The false-positive rate decreased as the S/CO ratio increased. All 19 samples with S/CO ≤ 10 were false positives, and all 220 with S/CO > 50 were true HIV positives. The optimal S/CO cutoff value provided by ROC curves was 32.7. No false-negative results were found. We show that samples with very low S/CO values during HIV-1 screening using Architect can prove HIV negative after confirmation by LIA and NAT. The false-positive rate is reduced as the S/CO increases. Copyright © 2017 Elsevier B.V. All rights reserved.
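The relationship this abstract describes, the false-positive rate among screen-positives shrinking as the S/CO cutoff rises, can be sketched as follows. The sample values below are hypothetical, not the study's records:

```python
# Hypothetical (S/CO, confirmed?) pairs -- illustrative only.
samples = [
    (1.2, False), (3.5, False), (8.0, False), (15.0, False),  # unconfirmed screens
    (12.0, True), (35.0, True), (60.0, True),                 # confirmed infections
    (200.0, True), (450.0, True),
]

def false_positive_rate(samples, cutoff):
    """Fraction of screen-positives (S/CO >= cutoff) not confirmed by LIA/NAT."""
    positives = [confirmed for s_co, confirmed in samples if s_co >= cutoff]
    if not positives:
        return 0.0
    return positives.count(False) / len(positives)

for cutoff in (1, 10, 50):
    print(cutoff, false_positive_rate(samples, cutoff))
```

Raising the cutoff drives the false-positive fraction among screen-positives toward zero, mirroring the pattern the study reports, though an overly high cutoff eventually risks false negatives.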
Applying a CAD-generated imaging marker to assess short-term breast cancer risk
NASA Astrophysics Data System (ADS)
Mirniaharikandehei, Seyedehnafiseh; Zarafshani, Ali; Heidari, Morteza; Wang, Yunzhi; Aghaei, Faranak; Zheng, Bin
2018-02-01
Although it remains controversial whether using computer-aided detection (CAD) helps improve radiologists' performance in reading and interpreting mammograms, owing to higher false-positive detection rates, the objective of this study was to investigate and test a new hypothesis: that CAD-generated false positives, in particular the bilateral summation of false positives, constitute a potential imaging marker associated with short-term breast cancer risk. An image dataset of negative screening mammograms acquired from 1,044 women was retrospectively assembled. Each case involves 4 images: craniocaudal (CC) and mediolateral oblique (MLO) views of the left and right breasts. In the subsequent mammography screening, 402 cases were positive for cancer and 642 remained negative. A CAD scheme was applied to process all "prior" negative mammograms, and several features were extracted, including detection seeds, the total number of false-positive regions, and the average and sum of detection scores in CC- and MLO-view images. The features computed from the two bilateral images of the left and right breasts, from either the CC or MLO view, were then combined. To predict the likelihood of each testing case being positive in the subsequent screening, two logistic regression models were trained and tested using leave-one-case-out cross-validation. Data analysis demonstrated a maximum prediction accuracy with an area under the ROC curve of AUC = 0.65 ± 0.017 and a maximum adjusted odds ratio of 4.49 with a 95% confidence interval of [2.95, 6.83]. The results also illustrated an increasing trend in the adjusted odds ratio and risk prediction scores (p < 0.01). Thus, the study showed that CAD-generated false positives might provide a new quantitative imaging marker to help assess short-term breast cancer risk.
Alonso, Roberto; Pérez-García, Felipe; Gijón, Paloma; Collazos, Ana; Bouza, Emilio
2018-06-01
The Architect HIV Ag/Ab Combo Assay, a fourth-generation ELISA, has proven to be highly reliable for the diagnosis of HIV infection. However, its high sensitivity may lead to false-positive results. To evaluate the diagnostic performance of Architect in a low-prevalence population and to assess the role of the sample-to-cutoff ratio (S/CO) in reducing the frequency of false-positive results. We conducted a retrospective study of samples analyzed by Architect between January 2015 and June 2017. Positive samples were confirmed by immunoblot (RIBA) or nucleic acid amplification tests (NAATs). Different S/CO thresholds (1, 2.5, 10, 25, and 100) were analyzed to determine sensitivity, specificity, and negative and positive predictive values (NPV, PPV). ROC analysis was used to determine the optimal S/CO. A total of 69,471 samples were analyzed. 709 (1.02%) were positive by Architect. Of these, 63 (8.89%) were false-positive results. Most of them (93.65%) were in samples with S/CO < 100. However, most confirmations by NAATs (12 out of 19 cases) were also recorded for these samples. The optimal S/CO was 2.5, which provided the highest area under the ROC curve (0.9998) and no false-negative results. With this S/CO, sensitivity and specificity were 100.0%, and PPV and NPV were 95.8% and 100.0%, respectively. In addition, the frequency of false-positive results decreased significantly to 4.15%. Although Architect generates a relatively high number of false-positive results, raising the S/CO limit too much to increase specificity can lead to false-negative results, especially in newly infected individuals. Copyright © 2018 Elsevier B.V. All rights reserved.
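The cutoff selection described above can be illustrated with Youden's J statistic (sensitivity + specificity − 1), one common way to pick an operating point from a ROC analysis. The scores, labels, and candidate thresholds below are illustrative, not the study's data:

```python
def youden_optimal_cutoff(scores, labels, candidates):
    """Pick the candidate cutoff maximizing J = sensitivity + specificity - 1."""
    best_cut, best_j = None, -1.0
    for c in candidates:
        tp = sum(1 for s, y in zip(scores, labels) if s >= c and y)
        fn = sum(1 for s, y in zip(scores, labels) if s < c and y)
        tn = sum(1 for s, y in zip(scores, labels) if s < c and not y)
        fp = sum(1 for s, y in zip(scores, labels) if s >= c and not y)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        if sens + spec - 1 > best_j:
            best_cut, best_j = c, sens + spec - 1
    return best_cut, best_j

# Illustrative S/CO ratios: confirmed infections cluster high, false positives low.
scores = [0.5, 1.2, 1.8, 2.0, 3.0, 40.0, 80.0, 150.0, 300.0]
labels = [False, False, False, False, True, True, True, True, True]
cut, j = youden_optimal_cutoff(scores, labels, candidates=(1, 2.5, 10, 25, 100))
```

On this toy data the sweep selects the cutoff that separates the two clusters; a real ROC analysis, as in the study, would evaluate every observed score as a candidate.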
Masking as an effective quality control method for next-generation sequencing data analysis.
Yun, Sajung; Yun, Sijung
2014-12-13
Next-generation sequencing produces base calls with low quality scores that can affect the accuracy of identifying simple nucleotide variation calls, including single nucleotide polymorphisms and small insertions and deletions. Here we compare the effectiveness of two data preprocessing methods, masking and trimming, and the accuracy of simple nucleotide variation calls on whole-genome sequence data from Caenorhabditis elegans. Masking substitutes low-quality base calls with 'N's (undetermined bases), whereas trimming removes low-quality bases, resulting in shorter read lengths. We demonstrate that masking is more effective than trimming in reducing the false-positive rate in single nucleotide polymorphism (SNP) calling. However, neither preprocessing method affected the false-negative rate in SNP calling with statistical significance compared with analysis without preprocessing. False-positive and false-negative rates for small insertions and deletions did not differ between masking and trimming. Although trimming is currently more common in the field, we recommend masking as the more effective preprocessing method for next-generation sequencing data analysis, since it reduces the false-positive rate in SNP calling without sacrificing the false-negative rate. The perl script for masking is available at http://code.google.com/p/subn/. The sequencing data used in the study were deposited in the Sequence Read Archive (SRX450968 and SRX451773).
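A minimal sketch of the masking idea, assuming Sanger-encoded (offset-33) Phred quality strings; this is an illustration, not the authors' perl script:

```python
def mask_read(seq, qual, threshold=20, offset=33):
    """Mask bases whose Phred quality < threshold with 'N'; read length is unchanged.
    (Trimming, by contrast, would shorten the read.)"""
    return "".join(
        base if (ord(q) - offset) >= threshold else "N"
        for base, q in zip(seq, qual)
    )

read = "ACGTACGT"
qual = "IIII##II"   # '#' encodes Phred 2, 'I' encodes Phred 40
print(mask_read(read, qual))  # -> ACGTNNGT
```

Because masking preserves read length and position, downstream alignment coordinates are unaffected, which is one plausible reason it avoids the false-negative penalty of trimming.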
Detecting false positives in multielement designs: implications for brief assessments.
Bartlett, Sara M; Rapp, John T; Henrickson, Marissa L
2011-11-01
The authors assessed the extent to which multielement designs produced false positives using continuous duration recording (CDR) and interval recording with 10-s and 1-min interval sizes. Specifically, they created 6,000 graphs with multielement designs that varied in the number of data paths, and the number of data points per data path, using a random number generator. In Experiment 1, the authors visually analyzed the graphs for the occurrence of false positives. Results indicated that graphs depicting only two sessions for each condition (e.g., a control condition plotted with multiple test conditions) produced the highest percentage of false positives for CDR and interval recording with 10-s and 1-min intervals. Conversely, graphs with four or five sessions for each condition produced the lowest percentage of false positives for each method. In Experiment 2, they applied two new rules, which were intended to decrease false positives, to each graph that depicted a false positive in Experiment 1. Results showed that application of new rules decreased false positives to less than 5% for all of the graphs except for those with two data paths and two data points per data path. Implications for brief assessments are discussed.
Mordang, Jan-Jurre; Gubern-Mérida, Albert; Bria, Alessandro; Tortorella, Francesco; den Heeten, Gerard; Karssemeijer, Nico
2017-04-01
Computer-aided detection (CADe) systems for mammography screening still mark many false positives. This can cause radiologists to lose confidence in CADe, especially when many false positives are obviously not suspicious to them. In this study, we focus on obvious false positives generated by microcalcification detection algorithms. We aim at reducing the number of obvious false-positive findings by adding an additional step in the detection method. In this step, a multiclass machine learning method is implemented in which dedicated classifiers learn to recognize the patterns of obvious false-positive subtypes that occur most frequently. The method is compared to a conventional two-class approach, where all false-positive subtypes are grouped together in one class, and to the baseline CADe system without the new false-positive removal step. The methods are evaluated on an independent dataset containing 1,542 screening examinations of which 80 examinations contain malignant microcalcifications. Analysis showed that the multiclass approach yielded a significantly higher sensitivity compared to the other two methods (P < 0.0002). At one obvious false positive per 100 images, the baseline CADe system detected 61% of the malignant examinations, while the systems with the two-class and multiclass false-positive reduction step detected 73% and 83%, respectively. Our study showed that by adding the proposed method to a CADe system, the number of obvious false positives can decrease significantly (P < 0.0002). © 2017 American Association of Physicists in Medicine.
Kufa, Tendesayi; Kharsany, Ayesha BM; Cawood, Cherie; Khanyile, David; Lewis, Lara; Grobler, Anneke; Chipeta, Zawadi; Bere, Alfred; Glenshaw, Mary; Puren, Adrian
2017-01-01
Abstract Introduction: We describe the overall accuracy and performance of a serial rapid HIV testing algorithm used in community-based HIV testing in the context of a population-based household survey conducted in two sub-districts of uMgungundlovu district, KwaZulu-Natal, South Africa, against reference fourth-generation HIV-1/2 antibody and p24 antigen combination immunoassays. We discuss implications of the findings on rapid HIV testing programmes. Methods: Cross-sectional design: Following enrolment into the survey, questionnaires were administered to eligible and consenting participants in order to obtain demographic and HIV-related data. Peripheral blood samples were collected for HIV-related testing. Participants were offered community-based HIV testing in the home by trained field workers using a serial algorithm with two rapid diagnostic tests (RDTs) in series. In the laboratory, reference HIV testing was conducted using two fourth-generation immunoassays with all positives in the confirmatory test considered true positives. Accuracy, sensitivity, specificity, positive predictive value, negative predictive value and false-positive and false-negative rates were determined. Results: Of 10,236 individuals enrolled in the survey, 3740 were tested in the home (median age 24 years (interquartile range 19–31 years), 42.1% males and HIV positivity on RDT algorithm 8.0%). From those tested, 3729 (99.7%) had a definitive RDT result as well as a laboratory immunoassay result. The overall accuracy of the RDT when compared to the fourth-generation immunoassays was 98.8% (95% confidence interval (CI) 98.5–99.2). The sensitivity, specificity, positive predictive value and negative predictive value were 91.1% (95% CI 87.5–93.7), 99.9% (95% CI 99.8–100), 99.3% (95% CI 97.4–99.8) and 99.1% (95% CI 98.8–99.4) respectively. The false-positive and false-negative rates were 0.06% (95% CI 0.01–0.24) and 8.9% (95% CI 6.3–12.53). 
Compared to true positives, false negatives were more likely to be recently infected on the limited antigen avidity assay and to report antiretroviral therapy (ART) use. Conclusions: The overall accuracy of the RDT algorithm was high. However, although false positives were few, the sensitivity was lower than expected, with a high false-negative rate despite the implementation of quality assurance measures. False negatives were associated with recent (early) infection and ART exposure. The RDT algorithm was able to correctly identify the majority of HIV infections in community-based HIV testing. Messaging on the potential for false positives and false negatives should be included in these programmes. PMID:28872274
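The accuracy measures reported in this abstract all derive from a 2×2 confusion table; a minimal sketch with hypothetical counts (not the survey's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening-accuracy measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),                     # positive predictive value
        "npv": tn / (tn + fn),                     # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "false_negative_rate": fn / (tp + fn),     # 1 - sensitivity
        "false_positive_rate": fp / (fp + tn),     # 1 - specificity
    }

m = diagnostic_metrics(tp=287, fp=2, tn=3412, fn=28)  # hypothetical counts
```

Note that the false-negative rate here is taken relative to true infections (the complement of sensitivity), which is how a high false-negative rate can coexist with high overall accuracy when most participants are uninfected.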
Neural events that underlie remembering something that never happened.
Gonsalves, B; Paller, K A
2000-12-01
We induced people to experience a false-memory illusion by first asking them to visualize common objects when cued with the corresponding word; on some trials, a photograph of the object was presented 1800 ms after the cue word. We then tested their memory for the photographs. Posterior brain potentials in response to words at encoding were more positive if the corresponding object was later falsely remembered as a photograph. Similar brain potentials during the memory test were more positive for true than for false memories. These results implicate visual imagery in the generation of false memories and provide neural correlates of processing differences between true and false memories.
Markovits, Henry; Lortie-Forgues, Hugues
2011-01-01
Abstract reasoning is critical for science and mathematics but is very difficult. Three studies examined the hypothesis that the generation of alternatives required for conditional reasoning with false premises facilitates abstract reasoning. Study 1 (n = 372) found that reasoning with false premises improved abstract reasoning in 12- to 15-year-olds. Study 2 (n = 366) found a positive effect of simply generating alternatives, but only in 19-year-olds. Study 3 (n = 92) found that 9- to 11-year-olds were able to respond logically with false premises, whereas no such ability was observed in 6- to 7-year-olds. Reasoning with false premises was found to improve reasoning with semiabstract premises in the older children. These results support the idea that alternatives generation with false premises facilitates abstract reasoning. © 2011 The Authors. Child Development © 2011 Society for Research in Child Development, Inc.
Occurrence of CPPopt Values in Uncorrelated ICP and ABP Time Series.
Cabeleira, M; Czosnyka, M; Liu, X; Donnelly, J; Smielewski, P
2018-01-01
Optimal cerebral perfusion pressure (CPPopt) is a concept that uses the pressure reactivity (PRx)-CPP relationship over a given period to find a value of CPP at which PRx shows the best autoregulation. It has been proposed that this relationship be modelled by a U-shaped curve, whose minimum is interpreted as the CPP value corresponding to the strongest autoregulation. Owing to the nature of the calculation and the signals involved in it, CPPopt curves can be generated by non-physiological variations of intracranial pressure (ICP) and arterial blood pressure (ABP); these are termed here "false positives". Such random occurrences would artificially increase the yield of CPPopt values and decrease the reliability of the methodology. In this work, we studied the probability of the random occurrence of false positives and compared the effect of the parameters used for CPPopt calculation on this probability. To simulate the occurrence of false positives, uncorrelated ICP and ABP time series were generated by destroying the relationship between the waves in real recordings. The CPPopt algorithm was then applied to these new series, and the number of false positives was counted for different values of the algorithm's parameters. The percentage of CPPopt curves generated from uncorrelated data was 11.5%. This value can be minimised by tuning some of the calculation parameters, such as increasing the calculation window and increasing the minimum PRx span accepted on the curve.
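The U-shaped PRx-CPP fit at the heart of the CPPopt calculation can be illustrated with a three-point parabola vertex. The binned values below are invented, and the published algorithm's curve fitting is considerably more elaborate:

```python
def parabola_vertex(p1, p2, p3):
    """x-coordinate of the vertex of the parabola through three (x, y) points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    return -b / (2 * a)

# Invented binned PRx means at CPP = 60, 70, 80 mmHg:
# the fitted trough approximates CPPopt.
cpp_opt = parabola_vertex((60, 0.3), (70, -0.1), (80, 0.25))
```

Because any three noisy bins define *some* parabola, a vertex can appear even in uncorrelated data, which is exactly the false-positive mechanism the study quantifies.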
Assessment of potential false positives via orbitrap-based untargeted lipidomics from rat tissues.
Xu, Lina; Wang, Xueying; Jiao, Yupei; Liu, Xiaohui
2018-02-01
Untargeted lipidomics is increasingly popular owing to its broad coverage of lipid species. Data-dependent MS/MS acquisition is commonly used to acquire sufficient information for confident lipid assignment. However, although lipids are identified based on MS/MS confirmation, a number of false positives are still observed. Here, we discuss several causes of false lipid identifications in untargeted analysis. Phosphatidylcholines and cholesteryl esters undergo in-source fragmentation to produce dimethylated phosphatidylethanolamine and free cholesterol, respectively. Dimerization of fatty acids results in false identification of fatty acid esters of hydroxy fatty acids. Recognizing these false positives improves confidence in results acquired from untargeted analysis. In addition, thresholds were established for lipids identified using LipidSearch v4.1.16 software to reduce unreliable results. Copyright © 2017 Elsevier B.V. All rights reserved.
Characterisation of false-positive observations in botanical surveys
2017-01-01
Errors in botanical surveying are a common problem. The presence of a species is easily overlooked, leading to false absences, while misidentifications and other mistakes lead to false-positive observations. While it is common knowledge that these errors occur, there are few data that can be used to quantify and describe them. Here we characterise false-positive errors for a controlled set of surveys conducted as part of a field identification test of botanical skill. Surveys were conducted at sites with a verified list of vascular plant species. The candidates were asked to list all the species they could identify in a defined, botanically rich area. They were told beforehand that their final score would be the sum of the correct species they listed, but that false-positive errors would count against their overall grade. The number of errors varied considerably between people; some produced a high proportion of false-positive errors, and these people were scattered across all skill levels. Therefore, a person's ability to correctly identify a large number of species is not a safeguard against the generation of false-positive errors. There was no phylogenetic pattern to falsely observed species; however, rare species are more likely to be false positives, as are species from species-rich genera. Raising the threshold for the acceptance of an observation reduced false-positive observations dramatically, but at the expense of more false-negative errors. False-positive errors are more common in field surveying of plants than many people may appreciate. Greater stringency is required before accepting species as present at a site, particularly for rare species. Combining multiple surveys resolves the problem but requires a considerable increase in effort to achieve the same sensitivity as a single survey. Therefore, other methods should be used to raise the threshold for the acceptance of a species.
For example, digital data input systems that can verify, feedback and inform the user are likely to reduce false-positive errors significantly. PMID:28533972
Ndase, Patrick; Celum, Connie; Kidoguchi, Lara; Ronald, Allan; Fife, Kenneth H; Bukusi, Elizabeth; Donnell, Deborah; Baeten, Jared M
2015-01-01
Rapid HIV assays are the mainstay of HIV testing globally. Delivery of effective biomedical HIV prevention strategies such as antiretroviral pre-exposure prophylaxis (PrEP) requires periodic HIV testing. Because rapid tests have high (>95%) but imperfect specificity, they are expected to generate some false positive results. We assessed the frequency of true and false positive rapid results in the Partners PrEP Study, a randomized, placebo-controlled trial of PrEP. HIV testing was performed monthly using 2 rapid tests done in parallel with HIV enzyme immunoassay (EIA) confirmation following all positive rapid tests. A total of 99,009 monthly HIV tests were performed; 98,743 (99.7%) were dual-rapid HIV negative. Of the 266 visits with ≥1 positive rapid result, 99 (37.2%) had confirmatory positive EIA results (true positives), 155 (58.3%) had negative EIA results (false positives), and 12 (4.5%) had discordant EIA results. In the active PrEP arms, over two-thirds of visits with positive rapid test results were false positive results (69.2%, 110 of 159), although false positive results occurred at <1% (110/65,945) of total visits. When HIV prevalence or incidence is low due to effective HIV prevention interventions, rapid HIV tests result in a high number of false relative to true positive results, although the absolute number of false results will be low. Program roll-out for effective interventions should plan for quality assurance of HIV testing, mechanisms for confirmatory HIV testing, and counseling strategies for persons with positive rapid test results.
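The effect the authors describe, rapid tests generating mostly false positives once incidence is driven down, follows directly from Bayes' rule. A sketch with illustrative sensitivity and specificity values (not the trial's assay characteristics):

```python
def ppv(prevalence, sensitivity, specificity):
    """Bayes' rule: probability that a positive screen is a true infection."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# With 99% sensitivity and 99% specificity, the positive predictive value
# collapses as prevalence falls.
for prev in (0.10, 0.01, 0.001):
    print(prev, round(ppv(prev, 0.99, 0.99), 3))
```

At 1% prevalence, half of all positive screens are false even with 99% specificity, which is why effective prevention programs must plan for confirmatory testing and counseling around false positives.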
Kufa, Tendesayi; Lane, Tim; Manyuchi, Albert; Singh, Beverley; Isdahl, Zachary; Osmand, Thomas; Grasso, Mike; Struthers, Helen; McIntyre, James; Chipeta, Zawadi; Puren, Adrian
2017-01-01
Abstract We describe the accuracy of serial rapid HIV testing among men who have sex with men (MSM) in South Africa and discuss the implications for HIV testing and prevention. This was a cross-sectional survey conducted at five stand-alone facilities from five provinces. Demographic, behavioral, and clinical data were collected. Dried blood spots were obtained for HIV-related testing. Participants were offered rapid HIV testing using 2 rapid diagnostic tests (RDTs) in series. In the laboratory, reference HIV testing was conducted using a third-generation enzyme immunoassay (EIA) and a fourth-generation EIA as confirmatory. Accuracy, sensitivity, specificity, positive predictive value, negative predictive value, false-positive, and false-negative rates were determined. Between August 2015 and July 2016, 2503 participants were enrolled. Of these, 2343 were tested by RDT on site with a further 2137 (91.2%) having definitive results on both RDT and EIA. Sensitivity, specificity, positive predictive value, negative predictive value, false-positive rates, and false-negative rates were 92.6% [95% confidence interval (95% CI) 89.6–94.8], 99.4% (95% CI 98.9–99.7), 97.4% (95% CI 95.2–98.6), 98.3% (95% CI 97.6–98.8), 0.6% (95% CI 0.3–1.1), and 7.4% (95% CI 5.2–10.4), respectively. False negatives were similar to true positives with respect to virological profiles. Overall accuracy of the RDT algorithm was high, but sensitivity was lower than expected. Post-HIV test counseling should include discussions of possible false-negative results and the need for retesting among HIV negatives. PMID:28700474
Hundhausen, T; Müller, T H
2005-08-01
The microbial detection system BacT/ALERT (bioMérieux) is widely used to monitor bacterial contamination of platelet concentrates (PCs). Recently, the manufacturer introduced polycarbonate culture bottles and a modified pH-sensitive liquid emulsion sensor as microbial growth indicator. This reconfigured assay was investigated in a routine setting. In each of eight transfusion centers, samples from 500 consecutive PCs were monitored for 1 week. For all PCs with a positive BacT/ALERT signal, retained samples and, if available, original PC containers and concomitant red blood cell concentrates were analyzed independently. Initially BacT/ALERT-positive PCs without bacterial identification in any sample were defined as false-positive. BacT/ALERT-positive PCs with bacteria in the first sample only were called potentially positive. PCs with bacteria in the first sample and the same strain in at least one additional sample were accepted as positive. Five PCs (0.13%) were positive, 9 PCs (0.23%) were potentially positive, and 35 PCs (0.9%) were false-positive. The rate of false-positive BacT/ALERT results varied substantially between centers (<0.2%-3.2%). Tracings from false-positive cultures lacked an exponential increase of the signal during incubation. Most of these false-positives were due to malfunctioning cells in various BacT/ALERT incubation units. Careful assessment of individual tracings of samples with positive signals helps to identify malfunctioning incubation units. Their early shutdown or replacement minimizes the high rate of unrectifiable product rejects attributed to false-positive alarms and avoids unnecessary concern of doctors and patients after conversion to a reconfigured BacT/ALERT assay.
Huo, Yuankai; Xu, Zhoubing; Bao, Shunxing; Bermudez, Camilo; Plassard, Andrew J.; Liu, Jiaqi; Yao, Yuang; Assad, Albert; Abramson, Richard G.; Landman, Bennett A.
2018-01-01
Spleen volume estimation using automated image segmentation techniques may be used to detect splenomegaly (an abnormally enlarged spleen) on magnetic resonance imaging (MRI) scans. In recent years, deep convolutional neural network (DCNN) segmentation methods have demonstrated advantages for abdominal organ segmentation. However, variations in both the size and shape of the spleen on MRI images may result in large false-positive and false-negative labeling when deploying DCNN-based methods. In this paper, we propose the Splenomegaly Segmentation Network (SSNet) to address spatial variations when segmenting extraordinarily large spleens. SSNet was designed based on the framework of image-to-image conditional generative adversarial networks (cGAN). Specifically, the Global Convolutional Network (GCN) was used as the generator to reduce false negatives, while the Markovian discriminator (PatchGAN) was used to alleviate false positives. A cohort of clinically acquired 3D MRI scans (both T1-weighted and T2-weighted) from patients with splenomegaly was used to train and test the networks. The experimental results demonstrated a mean Dice coefficient of 0.9260 and a median Dice coefficient of 0.9262 for SSNet on independently tested MRI volumes of patients with splenomegaly.
True detection limits in an experimental linearly heteroscedastic system. Part 2
NASA Astrophysics Data System (ADS)
Voigtman, Edward; Abraham, Kevin T.
2011-11-01
Despite much different processing of the experimental fluorescence detection data presented in Part 1, essentially the same estimates were obtained for the true theoretical Currie decision levels (YC and XC) and true Currie detection limits (YD and XD). The obtained experimental values, for 5% probability of false positives and 5% probability of false negatives, were YC = 56.0 mV, YD = 125 mV, XC = 0.132 μg/mL and XD = 0.293 μg/mL. For 5% probability of false positives and 1% probability of false negatives, the obtained detection limits were YD = 158 mV and XD = 0.371 μg/mL. Furthermore, by using bootstrapping methodology on the experimental data for the standards and the analytical blank, it was possible to validate previously published experimental-domain expressions for the decision levels (yC and xC) and detection limits (yD and xD). This was demonstrated by testing the generated decision levels and detection limits for their performance in regard to false positives and false negatives. In every case, the obtained numbers of false negatives and false positives were as specified a priori.
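For reference, the classical homoscedastic Currie construction behind these decision levels and detection limits can be sketched as follows. The paper itself treats the linearly heteroscedastic case, so this constant-noise version is a simplification:

```python
from statistics import NormalDist

def currie_limits(blank_mean, blank_sd, alpha=0.05, beta=0.05):
    """Currie decision level and detection limit for a normal, homoscedastic blank.
    alpha: tolerated false-positive probability; beta: false-negative probability."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    z_beta = NormalDist().inv_cdf(1 - beta)
    y_c = blank_mean + z_alpha * blank_sd               # decision level
    y_d = blank_mean + (z_alpha + z_beta) * blank_sd    # detection limit
    return y_c, y_d

# Hypothetical blank statistics in mV -- illustrative, not the Part 1 data.
y_c, y_d = currie_limits(blank_mean=10.0, blank_sd=2.0)
```

The decision level controls false positives (a blank exceeds y_c with probability alpha), while the detection limit additionally controls false negatives (a true signal at y_d falls below y_c with probability beta), matching the 5%/5% and 5%/1% pairings quoted above.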
MicroRNA array normalization: an evaluation using a randomized dataset as the benchmark.
Qin, Li-Xuan; Zhou, Qin
2014-01-01
MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays.
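The benchmark comparison described above reduces to set arithmetic over the lists of markers called significant; a minimal sketch (the miRNA names are hypothetical):

```python
def evaluate_against_benchmark(called, benchmark_true):
    """True positive rate and false discovery rate of a called marker
    set, taking the benchmark dataset's calls as ground truth."""
    called, truth = set(called), set(benchmark_true)
    tp = len(called & truth)
    fp = len(called - truth)
    tpr = tp / len(truth) if truth else 0.0    # sensitivity
    fdr = fp / len(called) if called else 0.0  # false discovery rate
    return tpr, fdr

tpr, fdr = evaluate_against_benchmark(
    called=["miR-21", "miR-155", "miR-10b", "miR-31"],
    benchmark_true=["miR-21", "miR-155", "miR-200c", "miR-141"],
)
# tpr = 0.5, fdr = 0.5
```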
Transformation-aware Exploit Generation using a HI-CFG
2013-05-16
testing has many limitations of its own: it can require significant target-specific setup to perform well; it is unlikely to trigger vulnerabilities...check fails represents a potential vulnerability, but a conservative analysis can produce false positives, so we can use exploit generation to find...warnings that correspond to true positives. We can also find potentially vulnerable instructions in the course of a manual binary-level security audit
Pulsar Search Using Supervised Machine Learning
NASA Astrophysics Data System (ADS)
Ford, John M.
2017-05-01
Pulsars are rapidly rotating neutron stars which emit a strong beam of energy through mechanisms that are not entirely clear to physicists. These very dense stars are used by astrophysicists to study many basic physical phenomena, such as the behavior of plasmas in extremely dense environments, behavior of pulsar-black hole pairs, and tests of general relativity. Many of these tasks require a large ensemble of pulsars to provide enough statistical information to answer the scientific questions posed by physicists. In order to provide more pulsars to study, there are several large-scale pulsar surveys underway, which are generating a huge backlog of unprocessed data. Searching for pulsars is a very labor-intensive process, currently requiring skilled people to examine and interpret plots of data output by analysis programs. An automated system for screening the plots will speed up the search for pulsars by a very large factor. Research to date on using machine learning and pattern recognition has not yielded a completely satisfactory system, as systems with the desired near-100% recall have false positive rates that are higher than desired, causing more manual labor in the classification of pulsars. This work proposed to research, identify, and develop methods to overcome the barriers to building an improved classification system with a false positive rate of less than 1% and a recall of near 100% that will be useful for the current and next generation of large pulsar surveys. The results show that it is possible to generate classifiers that perform as needed from the available training data. While a false positive rate of 1% was not reached, recall of over 99% was achieved with a false positive rate of less than 2%. Methods of mitigating the imbalanced training and test data were explored and found to be highly effective in enhancing classification accuracy.
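The recall/false-positive trade-off that drives the manual workload above can be sketched as a threshold scan over classifier scores (a generic sketch, not the classifiers trained in this work; the scores and labels are illustrative):

```python
def recall_fpr(scores, labels, threshold):
    """Recall (true positive rate) and false positive rate of a
    score-thresholding classifier on labeled candidates."""
    tp = fp = fn = tn = 0
    for score, label in zip(scores, labels):
        predicted_pulsar = score >= threshold
        if label == 1 and predicted_pulsar:
            tp += 1
        elif label == 1:
            fn += 1
        elif predicted_pulsar:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), fp / (fp + tn)

def lowest_fpr_at_recall(scores, labels, target_recall=0.99):
    """Among thresholds meeting the recall target, keep the one with
    the lowest false positive rate."""
    best = None
    for t in sorted(set(scores)):
        recall, fpr = recall_fpr(scores, labels, t)
        if recall >= target_recall and (best is None or fpr < best[1]):
            best = (t, fpr)
    return best

scores = [0.9, 0.8, 0.7, 0.6, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
threshold, fpr = lowest_fpr_at_recall(scores, labels)
# threshold = 0.7 keeps all three pulsars with zero false positives
```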
Xu, Stanley; Newcomer, Sophia; Nelson, Jennifer; Qian, Lei; McClure, David; Pan, Yi; Zeng, Chan; Glanz, Jason
2014-05-01
The Vaccine Safety Datalink project captures electronic health record data including vaccinations and medically attended adverse events on 8.8 million enrollees annually from participating managed care organizations in the United States. While the automated vaccination data are generally of high quality, a presumptive adverse event based on diagnosis codes in automated health care data may not be true (misclassification). Consequently, analyses using automated health care data can generate false positive results, where an association between the vaccine and outcome is incorrectly identified, as well as false negative findings, where a true association or signal is missed. We developed novel conditional Poisson regression models and fixed effects models that accommodate misclassification of the adverse event outcome for the self-controlled case series design. We conducted simulation studies to evaluate their performance in signal detection in vaccine safety hypothesis-generating (screening) studies. We also reanalyzed four previously identified signals in a recent vaccine safety study using the newly proposed models. Our simulation studies demonstrated that (i) outcome misclassification resulted in both false positive and false negative signals in screening studies; and (ii) the newly proposed models reduced both the rates of false positive and false negative signals. In reanalyses of four previously identified signals using the novel statistical models, the incidence rate ratio estimates and statistical significances were similar to those using conventional models and including only medical record review confirmed cases. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A statistical model of false negative and false positive detection of phase singularities.
Jacquemet, Vincent
2017-10-01
The complexity of cardiac fibrillation dynamics can be assessed by analyzing the distribution of phase singularities (PSs) observed using mapping systems. Interelectrode distance, however, limits the accuracy of PS detection. To investigate in a theoretical framework the PS false negative and false positive rates in relation to the characteristics of the mapping system and fibrillation dynamics, we propose a statistical model of phase maps with controllable number and locations of PSs. In this model, phase maps are generated from randomly distributed PSs with physiologically plausible directions of rotation. Noise and distortion of the phase are added. PSs are detected using topological charge contour integrals on regular grids of varying resolutions. Over 100 × 10⁶ realizations of the random field process are used to estimate average false negative and false positive rates using a Monte-Carlo approach. The false detection rates are shown to depend on the average distance between neighboring PSs expressed in units of interelectrode distance, following approximately a power law with exponents in the range of 1.14 to 2 for false negatives and around 2.8 for false positives. In the presence of noise or distortion of phase, false detection rates at high resolution tend to a non-zero noise-dependent lower bound. This model provides an easy-to-implement tool for benchmarking PS detection algorithms over a broad range of configurations with multiple PSs.
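The topological charge detection mentioned above can be sketched for a single grid plaquette: wrapped phase differences around a closed loop of electrodes sum to ±2π at a phase singularity and to 0 elsewhere (a minimal sketch of the standard contour-integral idea, not the paper's exact implementation):

```python
import math

def wrap(dphi):
    """Wrap a phase difference into [-pi, pi)."""
    return (dphi + math.pi) % (2 * math.pi) - math.pi

def topological_charge(loop_phases):
    """Sum of wrapped phase differences along a closed loop of phase
    samples (e.g. the 4 corners of a grid plaquette), divided by 2*pi.
    Returns +1 or -1 near a phase singularity, 0 otherwise."""
    n = len(loop_phases)
    total = sum(wrap(loop_phases[(i + 1) % n] - loop_phases[i])
                for i in range(n))
    return round(total / (2 * math.pi))

# A counter-clockwise rotor: corner phases advance by pi/2 around the loop
assert topological_charge([0.0, math.pi / 2, math.pi, 3 * math.pi / 2]) == 1
```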
NASA Astrophysics Data System (ADS)
Mirniaharikandehei, Seyedehnafiseh; Hollingsworth, Alan B.; Patel, Bhavika; Heidari, Morteza; Liu, Hong; Zheng, Bin
2018-05-01
This study aims to investigate the feasibility of identifying a new quantitative imaging marker based on false-positives generated by a computer-aided detection (CAD) scheme to help predict short-term breast cancer risk. An image dataset including four view mammograms acquired from 1044 women was retrospectively assembled. All mammograms were originally interpreted as negative by radiologists. In the next subsequent mammography screening, 402 women were diagnosed with breast cancer and 642 remained negative. An existing CAD scheme was applied ‘as is’ to process each image. From CAD-generated results, four detection features including the total number of (1) initial detection seeds and (2) the final detected false-positive regions, (3) average and (4) sum of detection scores, were computed from each image. Then, by combining the features computed from two bilateral images of left and right breasts from either craniocaudal or mediolateral oblique view, two logistic regression models were trained and tested using a leave-one-case-out cross-validation method to predict the likelihood of each testing case being positive in the next subsequent screening. The new prediction model yielded the maximum prediction accuracy with an area under a ROC curve of AUC = 0.65 ± 0.017 and the maximum adjusted odds ratio of 4.49 with a 95% confidence interval of (2.95, 6.83). The results also showed an increasing trend in the adjusted odds ratio and risk prediction scores (p < 0.01). Thus, this study demonstrated that CAD-generated false-positives might include valuable information, which needs to be further explored for identifying and/or developing more effective imaging markers for predicting short-term breast cancer risk.
Ruiz-Gutierrez, Viviana; Hooten, Melvin B.; Campbell Grant, Evan H.
2016-01-01
Biological monitoring programmes are increasingly relying upon large volumes of citizen-science data to improve the scope and spatial coverage of information, challenging the scientific community to develop design and model-based approaches to improve inference.Recent statistical models in ecology have been developed to accommodate false-negative errors, although current work points to false-positive errors as equally important sources of bias. This is of particular concern for the success of any monitoring programme given that rates as small as 3% could lead to the overestimation of the occurrence of rare events by as much as 50%, and even small false-positive rates can severely bias estimates of occurrence dynamics.We present an integrated, computationally efficient Bayesian hierarchical model to correct for false-positive and false-negative errors in detection/non-detection data. Our model combines independent, auxiliary data sources with field observations to improve the estimation of false-positive rates, when a subset of field observations cannot be validated a posteriori or assumed as perfect. We evaluated the performance of the model across a range of occurrence rates, false-positive and false-negative errors, and quantity of auxiliary data.The model performed well under all simulated scenarios, and we were able to identify critical auxiliary data characteristics which resulted in improved inference. We applied our false-positive model to a large-scale, citizen-science monitoring programme for anurans in the north-eastern United States, using auxiliary data from an experiment designed to estimate false-positive error rates. 
Not correcting for false-positive rates resulted in biased estimates of occupancy in 4 of the 10 anuran species we analysed, leading to an overestimation of the average number of occupied survey routes by as much as 70%. The framework we present for data collection and analysis is able to efficiently provide reliable inference for occurrence patterns using data from a citizen-science monitoring programme. However, our approach is applicable to data generated by any type of research and monitoring programme, independent of skill level or scale, when effort is placed on obtaining auxiliary information on false-positive rates.
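The mechanism behind this bias is easy to reproduce: with even a small per-survey false-positive rate, unoccupied sites accumulate apparent detections over repeated surveys. A minimal expectation sketch (the rates below are illustrative, not the study's estimates):

```python
def naive_occupancy(psi, p, f, surveys):
    """Expected fraction of sites with at least one detection after
    `surveys` visits, which a naive analysis reads as occupancy.
    psi: true occupancy; p: per-survey detection probability at
    occupied sites; f: per-survey false-positive probability."""
    occupied = psi * (1 - (1 - p) ** surveys)          # ever truly detected
    unoccupied = (1 - psi) * (1 - (1 - f) ** surveys)  # ever falsely detected
    return occupied + unoccupied

true_psi = 0.2
apparent = naive_occupancy(psi=true_psi, p=0.5, f=0.03, surveys=10)
# a 3% per-survey false-positive rate roughly doubles the apparent
# occupancy of this rare species over 10 surveys
```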
Phage display screening without repetitious selection rounds.
't Hoen, Peter A C; Jirka, Silvana M G; Ten Broeke, Bradley R; Schultes, Erik A; Aguilera, Begoña; Pang, Kar Him; Heemskerk, Hans; Aartsma-Rus, Annemieke; van Ommen, Gertjan J; den Dunnen, Johan T
2012-02-15
Phage display screenings are frequently employed to identify high-affinity peptides or antibodies. Although successful, phage display is a laborious technology and is notorious for identification of false positive hits. To accelerate and improve the selection process, we have employed Illumina next generation sequencing to deeply characterize the Ph.D.-7 M13 peptide phage display library before and after several rounds of biopanning on KS483 osteoblast cells. Sequencing of the naive library after one round of amplification in bacteria identifies propagation advantage as an important source of false positive hits. Most important, our data show that deep sequencing of the phage pool after a first round of biopanning is already sufficient to identify positive phages. Whereas traditional sequencing of a limited number of clones after one or two rounds of selection is uninformative, the required additional rounds of biopanning are associated with the risk of losing promising clones propagating slower than nonbinding phages. Confocal and live cell imaging confirms that our screen successfully selected a peptide with very high binding and uptake in osteoblasts. We conclude that next generation sequencing can significantly empower phage display screenings by accelerating the finding of specific binders and restraining the number of false positive hits. Copyright © 2011 Elsevier Inc. All rights reserved.
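The single-round selection idea above rests on comparing each peptide's frequency after biopanning with its frequency in the amplified naive library, which simultaneously discounts propagation advantage; a toy sketch (the peptide sequences, counts, and the +1 pseudocount are illustrative, not the study's analysis):

```python
def enrichment_scores(naive_counts, round1_counts):
    """Per-peptide frequency after one round of biopanning divided by
    frequency in the amplified naive library; a pseudocount of 1 guards
    against division by zero for peptides unseen in the naive pool."""
    total_naive = sum(naive_counts.values()) or 1
    total_r1 = sum(round1_counts.values()) or 1
    return {
        pep: (count / total_r1)
             / ((naive_counts.get(pep, 0) + 1) / total_naive)
        for pep, count in round1_counts.items()
    }

scores = enrichment_scores(
    naive_counts={"HSSKLQ": 50, "GPLGVR": 950},   # GPLGVR propagates fast
    round1_counts={"HSSKLQ": 400, "GPLGVR": 600},
)
# HSSKLQ scores highest: enriched by selection despite its
# propagation disadvantage in the naive library
```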
Use of the Abbott Architect HIV antigen/antibody assay in a low incidence population.
Dubravac, Terry; Gahan, Thomas F; Pentella, Michael A
2013-12-01
With the availability of 4th generation HIV diagnostic tests which are capable of detecting acute infection, Iowa evaluated the 3rd and 4th generation HIV test and compared the performance of these products in a low incidence population. This study was conducted to evaluate the performance of an HIV antigen/antibody combination (4th generation) assay compared to an EIA 3rd generation assay. Over a 4 month period, 2037 specimens submitted for HIV screening were tested by Bio-Rad GS HIV-1/HIV-2 Plus O EIA and the Abbott Architect i1000SR HIV Ag/Ab Combo. The performance characteristics of sensitivity, specificity, positive predictive value and negative predictive value were determined. Of the 2037 specimens tested, there were 13 (0.64%) true positives detected. None of the positive specimens were from patients in the acute phase of infection. The Abbott antigen/antibody combo assay had a sensitivity, specificity, positive-predictive value and negative predictive value of 100%, 99.85%, 81.25%, and 100% respectively. The Bio-Rad EIA assay had a sensitivity, specificity, positive-predictive value and negative predictive value of 100%, 99.80%, 76.47% and 100%, respectively. The EIA had four false positive results which tested negative by the antigen/antibody assay and western blot. In a low-incidence state where early infections are less commonly encountered, the EIA assay and the antigen/antibody assay performed with near equivalency. The antigen/antibody assay had one less false positive result. While no patients were detected in the acute stage of infection, the use of the antigen/antibody assay presents the opportunity to detect an infected patient sooner and prevent transmission to others. Copyright © 2013 Elsevier B.V. All rights reserved.
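The quoted performance figures follow from a 2×2 confusion table; the counts below are back-calculated from the reported rates for the Abbott assay (13 true positives with a PPV of 81.25% implies 3 false positives) and should be taken as illustrative:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard diagnostic test metrics from confusion-table counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# 2037 specimens: 13 true positives, 3 false positives, 0 false negatives
m = screening_metrics(tp=13, fp=3, fn=0, tn=2021)
# sensitivity = 1.0, ppv = 13/16 = 0.8125, specificity ≈ 0.9985
```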
False-positive results in pharmacoepidemiology and pharmacovigilance.
Bezin, Julien; Bosco-Levy, Pauline; Pariente, Antoine
2017-09-01
False positives constitute an important issue in scientific research. In the domain of drug evaluation, they affect all phases of drug development and assessment, from the very early preclinical studies to the late post-marketing evaluations. The core concern associated with false positives is the lack of replicability of results. Aside from fraud or misconduct, false positives are often envisioned from the statistical angle, which treats them as the price to pay for type I error in statistical testing and for its inflation in the context of multiple testing. In pharmacoepidemiology and pharmacovigilance, however, both of which evaluate drugs in observational settings, the information brought by statistical testing and its significance should be considered only as complementary to the estimates provided and their confidence intervals, in a context where differences must above all be clinically meaningful and the results must appear robust to the biases likely to have affected the studies. In the following article, we consequently illustrate these biases and their consequences in generating false-positive results, through studies of associations between drug use and health outcomes that have been widely disputed. Copyright © 2017 Société française de pharmacologie et de thérapeutique. Published by Elsevier Masson SAS. All rights reserved.
An Evaluation of Unit and ½ Mass Correction Approaches as a ...
Rare earth elements (REE) and certain alkaline earths can produce M+2 interferences in ICP-MS because they have sufficiently low second ionization energies. Four REEs (150Sm, 150Nd, 156Gd and 156Dy) produce false positives on 75As and 78Se, and 132Ba can produce a false positive on 66Zn. Currently, US EPA Method 200.8 does not address these as sources of false positives. Additionally, these M+2 false positives are typically enhanced if collision cell technology is utilized to reduce polyatomic interferences associated with ICP-MS detection. A preliminary evaluation indicates that instrumental tuning conditions can impact the observed M+2/M+1 ratio and in turn the false positives generated on Zn, As and Se. Both unit and ½ mass approaches will be evaluated to correct for these false positives relative to the benchmark concentration estimates from a triple quadrupole ICP-MS using standard solutions. The impact of matrix on these M+2 corrections will be evaluated over multiple analysis days with a focus on evaluating internal standards that mirror the matrix induced shifts in the M+2 ion transmission. The goal of this evaluation is to move away from fixed M+2 corrective approaches and move towards sample specific approaches that mimic the sample matrix induced variability while attempting to address intra-day variability of the M+2 correction factors through the use of internal standards. Oral Presentation via webinar for EPA Laboratory Technical Informati
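A fixed M+2 correction of the kind the abstract argues against can be sketched as a simple subtraction (the count rates and the M2+/M+ response ratio below are hypothetical; the abstract's point is precisely that such fixed ratios drift with matrix and tuning conditions):

```python
def m2_corrected_signal(analyte_cps, interferent_cps, m2_ratio):
    """Estimate the analyte signal at mass M after removing the
    doubly charged (M+2) interference from an element of mass 2M.
    m2_ratio: M2+/M+ response ratio measured on a single-element
    standard of the interfering element (a fixed correction)."""
    return analyte_cps - m2_ratio * interferent_cps

# e.g. apparent 75As signal with 150Nd present at 200,000 cps
# and a 0.5% doubly-charged response ratio
corrected = m2_corrected_signal(analyte_cps=1500.0,
                                interferent_cps=200000.0,
                                m2_ratio=0.005)
# 1000 cps of the apparent As signal was the Nd++ false positive
```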
Machine-learning-based real-bogus system for the HSC-SSP moving object detection pipeline
NASA Astrophysics Data System (ADS)
Lin, Hsing-Wen; Chen, Ying-Tung; Wang, Jen-Hung; Wang, Shiang-Yu; Yoshida, Fumi; Ip, Wing-Huen; Miyazaki, Satoshi; Terai, Tsuyoshi
2018-01-01
Machine-learning techniques are widely applied in many modern optical sky surveys, e.g., Pan-STARRS1, PTF/iPTF, and the Subaru/Hyper Suprime-Cam survey, to reduce human intervention in data verification. In this study, we have established a machine-learning-based real-bogus system to reject false detections in the Subaru/Hyper-Suprime-Cam Strategic Survey Program (HSC-SSP) source catalog. Therefore, the HSC-SSP moving object detection pipeline can operate more effectively due to the reduction of false positives. To train the real-bogus system, we use stationary sources as the real training set and "flagged" data as the bogus set. The training set contains 47 features, most of which are photometric measurements and shape moments generated from the HSC image reduction pipeline (hscPipe). Our system can reach a true positive rate (tpr) of ~96% with a false positive rate (fpr) of ~1%, or tpr ~99% at fpr ~5%. Therefore, we conclude that stationary sources are decent real training samples, and using photometry measurements and shape moments can reject false positives effectively.
Ribeiro, Antonio; Golicz, Agnieszka; Hackett, Christine Anne; Milne, Iain; Stephen, Gordon; Marshall, David; Flavell, Andrew J; Bayer, Micha
2015-11-11
Single Nucleotide Polymorphisms (SNPs) are widely used molecular markers, and their use has increased massively since the inception of Next Generation Sequencing (NGS) technologies, which allow detection of large numbers of SNPs at low cost. However, both NGS data and their analysis are error-prone, which can lead to the generation of false positive (FP) SNPs. We explored the relationship between FP SNPs and seven factors involved in mapping-based variant calling - quality of the reference sequence, read length, choice of mapper and variant caller, mapping stringency and filtering of SNPs by read mapping quality and read depth. This resulted in 576 possible factor level combinations. We used error- and variant-free simulated reads to ensure that every SNP found was indeed a false positive. The variation in the number of FP SNPs generated ranged from 0 to 36,621 for the 120 million base pairs (Mbp) genome. All of the experimental factors tested had statistically significant effects on the number of FP SNPs generated and there was a considerable amount of interaction between the different factors. Using a fragmented reference sequence led to a dramatic increase in the number of FP SNPs generated, as did relaxed read mapping and a lack of SNP filtering. The choice of reference assembler, mapper and variant caller also significantly affected the outcome. The effect of read length was more complex and suggests a possible interaction between mapping specificity and the potential for contributing more false positives as read length increases. The choice of tools and parameters involved in variant calling can have a dramatic effect on the number of FP SNPs produced, with particularly poor combinations of software and/or parameter settings yielding tens of thousands in this experiment. Between-factor interactions make simple recommendations difficult for a SNP discovery pipeline but the quality of the reference sequence is clearly of paramount importance. 
Our findings are also a stark reminder that it can be unwise to use the relaxed mismatch settings provided as defaults by some read mappers when reads are being mapped to a relatively unfinished reference sequence from e.g. a non-model organism in its early stages of genomic exploration.
ADEPT, a dynamic next generation sequencing data error-detection program with trimming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Shihai; Lo, Chien-Chi; Li, Po-E
2016-02-29
Illumina is the most widely used next generation sequencing technology and produces millions of short reads that contain errors. These sequencing errors constitute a major problem in applications such as de novo genome assembly, metagenomics analysis and single nucleotide polymorphism discovery. In this study, we present ADEPT, a dynamic error detection method based on the quality scores of each nucleotide and its neighboring nucleotides, together with their positions within the read, which compares these to the position-specific quality score distribution of all bases within the sequencing run. This method greatly improves upon other available methods in terms of the true positive rate of error discovery without affecting the false positive rate, particularly within the middle of reads. We conclude that ADEPT is the only tool to date that dynamically assesses errors within reads by comparing position-specific and neighboring base quality scores with the distribution of quality scores for the dataset being analyzed. The result is a method that is less prone to position-dependent under-prediction, which is one of the most prominent issues in error prediction. The outcome is that ADEPT improves upon prior efforts in identifying true errors, primarily within the middle of reads, while reducing the false positive rate.
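The position-specific idea behind ADEPT can be caricatured as a z-score test of each base's quality against that position's quality distribution across the run (a toy sketch only; ADEPT's actual model also incorporates neighboring-base qualities and trimming):

```python
from statistics import mean, stdev

def flag_suspect_bases(reads_quals, z_cut=-2.0):
    """Flag bases whose Phred quality falls far below the
    position-specific quality distribution of the whole run.
    reads_quals: list of equal-length per-base quality lists;
    returns (read_index, position) pairs of flagged bases."""
    flags = []
    for pos in range(len(reads_quals[0])):
        column = [read[pos] for read in reads_quals]
        mu, sd = mean(column), stdev(column)
        for idx, read in enumerate(reads_quals):
            z = (read[pos] - mu) / sd if sd else 0.0
            if z < z_cut:
                flags.append((idx, pos))
    return flags

run = [[38, 38]] * 10 + [[2, 38]]
# the low-quality base in the last read sits ~3 standard deviations
# below the position-0 distribution and gets flagged
print(flag_suspect_bases(run))  # [(10, 0)]
```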
Reflections on O2 as a Biosignature in Exoplanetary Atmospheres.
Meadows, Victoria S
2017-10-01
Oxygenic photosynthesis is Earth's dominant metabolism, having evolved to harvest the largest expected energy source at the surface of most terrestrial habitable zone planets. Using CO2 and H2O - molecules that are expected to be abundant and widespread on habitable terrestrial planets - oxygenic photosynthesis is plausible as a significant planetary process with a global impact. Photosynthetic O2 has long been considered particularly robust as a sign of life on a habitable exoplanet, due to the lack of known "false positives" - geological or photochemical processes that could also produce large quantities of stable O2. O2 has other advantages as a biosignature, including its high abundance and uniform distribution throughout the atmospheric column and its distinct, strong absorption in the visible and near-infrared. However, recent modeling work has shown that false positives for abundant oxygen or ozone could be produced by abiotic mechanisms, including photochemistry and atmospheric escape. Environmental factors for abiotic O2 have been identified and will improve our ability to choose optimal targets and measurements to guard against false positives. Most of these false-positive mechanisms are dependent on properties of the host star and are often strongest for planets orbiting M dwarfs. In particular, selecting planets found within the conservative habitable zone and those orbiting host stars more massive than 0.4 M⊙ (M3V and earlier) may help avoid planets with abundant abiotic O2 generated by water loss. Searching for O4 or CO in the planetary spectrum, or the lack of H2O or CH4, could help discriminate between abiotic and biological sources of O2 or O3. In advance of the next generation of telescopes, thorough evaluation of potential biosignatures - including likely environmental context and factors that could produce false positives - ultimately works to increase our confidence in life detection.
Key Words: Biosignatures-Exoplanets-Oxygen-Photosynthesis-Planetary spectra. Astrobiology 17, 1022-1052.
Giese, Sven H; Zickmann, Franziska; Renard, Bernhard Y
2014-01-01
Accurate estimation, comparison and evaluation of read mapping error rates is a crucial step in the processing of next-generation sequencing data, as further analysis steps and interpretation assume the correctness of the mapping results. Current approaches are either focused on sensitivity estimation and thereby disregard specificity or are based on read simulations. Although continuously improving, read simulations are still prone to introduce a bias into the mapping error quantitation and cannot capture all characteristics of an individual dataset. We introduce ARDEN (artificial reference driven estimation of false positives in next-generation sequencing data), a novel benchmark method that estimates error rates of read mappers based on real experimental reads, using an additionally generated artificial reference genome. It allows a dataset-specific computation of error rates and the construction of a receiver operating characteristic curve. Thereby, it can be used for optimization of parameters for read mappers, selection of read mappers for a specific problem or for filtering alignments based on quality estimation. The use of ARDEN is demonstrated in a general read mapper comparison, a parameter optimization for one read mapper and an application example in single-nucleotide polymorphism discovery with a significant reduction in the number of false positive identifications. The ARDEN source code is freely available at http://sourceforge.net/projects/arden/.
False Memories for Affective Information in Schizophrenia.
Fairfield, Beth; Altamura, Mario; Padalino, Flavia A; Balzotti, Angela; Di Domenico, Alberto; Mammarella, Nicola
2016-01-01
Studies have shown a direct link between memory for emotionally salient experiences and false memories. In particular, emotionally arousing material of negative and positive valence enhanced reality monitoring compared to neutral material, since emotional stimuli can be encoded with more contextual details and thereby facilitate the distinction between presented and imagined stimuli. Individuals with schizophrenia appear to be impaired in both reality monitoring and memory for emotional experiences. However, the relationship between the emotionality of the to-be-remembered material and false memory occurrence has not yet been studied. In this study, 24 patients and 24 healthy adults completed a false memory task with everyday episodes composed of 12 photographs that depicted positive, negative, or neutral outcomes. Results showed that patients with schizophrenia made a higher number of false memories than normal controls (p < 0.05) when remembering episodes with positive or negative outcomes. The effect of valence was apparent in the patient group. For example, it did not affect the production of causal false memories (p > 0.05) resulting from erroneous inferences but did interact with plausible, script-consistent errors in patients (i.e., neutral episodes yielded a higher degree of errors than positive and negative episodes). Affective information reduces the probability of generating causal errors in healthy adults but not in patients, suggesting that emotional memory impairments may contribute to deficits in reality monitoring in schizophrenia when affective information is involved.
Sample Selection for Training Cascade Detectors.
Vállez, Noelia; Deniz, Oscar; Bueno, Gloria
2015-01-01
Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. In practice, the positive set contains few samples, while the negative set must represent anything except the object of interest. As a result, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average a better partial AUC and a smaller standard deviation than the other cascade detectors compared.
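The stage-wise selection of informative false positives can be sketched as below. This is a minimal illustration of the idea, not the authors' implementation; the scoring interface and the 0.5 decision threshold are assumptions.

```python
def select_hard_negatives(stage_classifier, negatives, n_keep):
    """Keep the negatives the current stage is most confident are positive:
    these false positives are the most informative samples for training
    the next stage of the cascade."""
    scored = [(stage_classifier(x), x) for x in negatives]
    # Only samples the stage misclassifies as positive are false positives.
    false_pos = [(s, x) for s, x in scored if s > 0.5]
    false_pos.sort(key=lambda sx: sx[0], reverse=True)
    return [x for _, x in false_pos[:n_keep]]
```

Feeding each stage only the hardest surviving negatives keeps the effective training set balanced even though the raw negative pool is orders of magnitude larger than the positive set.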
Hardie, Diana Ruth; Korsman, Stephen N; Hsiao, Nei-Yuan; Morobadi, Molefi Daniel; Vawda, Sabeehah; Goedhals, Dominique
2017-01-01
In South Africa where the prevalence of HIV infection is very high, 4th generation HIV antibody/p24 antigen combo immunoassays are the tests of choice for laboratory based screening. Testing is usually performed in clinical pathology laboratories on automated analysers. To investigate the cause of false positive results on 4th generation HIV testing platforms in public sector laboratories, the performance of two automated platforms was compared in a clinical pathology setting, firstly on routine diagnostic specimens and secondly on known sero-negative samples. In the first experiment, 1181 routine diagnostic specimens were sequentially tested on Siemens and Roche automated 4th generation platforms. HIV viral load, western blot and follow up testing were used to determine the true status of inconclusive specimens. Subsequently, known HIV seronegative samples from a single donor were repeatedly tested on both platforms and an analyser was tested for surface contamination with HIV positive serum to identify how suspected specimen contamination could be occurring. Serial testing of diagnostic specimens yielded 163 weakly positive or discordant results. Only 3 of 163 were conclusively shown to indicate true HIV infection. Specimen contamination with HIV antibody was suspected, based on the following evidence: the proportion of positive specimens increased on repeated passage through the analysers; viral loads were low or undetectable and western blots negative or indeterminate on problem specimens; screen negative, 2nd test positive specimens tested positive when reanalysed on the screening assay; follow up specimens (where available) were negative. Similarly, an increasing number of known negative specimens became (repeatedly) sero-positive on serial passage through one of the analysers. Internal and external analyser surfaces were contaminated with HIV serum, evidence that sample splashes occur during testing.
Due to the extreme sensitivity of these assays, contamination with minute amounts of HIV antibody can cause a negative sample to test positive. Better contamination control measures are needed on analysers used in clinical pathology environments, especially in regions where HIV sero-prevalence is high.
Theron, Grant; Venter, Rouxjeane; Smith, Liezel; Esmail, Aliasgar; Randall, Philippa; Sood, Vishesh; Oelfese, Suzette; Calligaro, Greg; Warren, Robin; Dheda, Keertan
2018-03-01
Globally, Xpert MTB/RIF (Xpert) is the most widely used PCR test for the diagnosis of tuberculosis (TB). Positive results in previously treated patients, which are due to old DNA or active disease, are a diagnostic dilemma. We prospectively retested sputum from 238 patients, irrespective of current symptoms, who were previously diagnosed to be Xpert positive and treated successfully. Patients who retested as Xpert positive and culture negative were exhaustively investigated (repeat culture, chest radiography, bronchoscopy with bronchoalveolar lavage, long-term clinical follow-up). We evaluated whether the duration since previous treatment completion, mycobacterial burden (the Xpert cycle threshold [CT] value), and reclassification of Xpert-positive results with a very low semiquantitation level to Xpert-negative results reduced the rate of false positivity. A total of 229/238 (96%) of patients were culture negative. Sixteen of 229 (7%) were Xpert positive a median of 11 months (interquartile range, 5 to 19 months) after treatment completion. The specificity was 93% (95% confidence interval [CI], 89 to 96%). Nine of 15 (60%) Xpert-positive, culture-negative patients reverted to Xpert negative after 2 to 3 months (1 patient declined further participation). Patients with false-positive Xpert results had a lower mycobacterial burden than patients with true-positive Xpert results (CT, 28.7 [95% CI, 27.2 to 30.4] versus 17.6 [95% CI, 16.9 to 18.2]; P < 0.001), an increased likelihood of a chest radiograph not compatible with active TB (5/15 patients versus 0/5 patients; P = 0.026), and less-viscous sputum (15/16 patients versus 2/5 patients whose sputum was graded as mucoid or less; P = 0.038). All patients who initially retested as Xpert positive and culture negative ("Xpert false positive") were clinically well without treatment after follow-up.
The duration since the previous treatment poorly predicted false-positive results (a duration of ≤2 years identified only 66% of patients with false-positive results). Reclassifying Xpert-positive results with a very low semiquantitation level to Xpert negative improved the specificity (+3% [95% CI, +2 to +5%]) but reduced the sensitivity (-10% [95% CI, -4 to -15%]). Patients with previous TB retested with Xpert can have false-positive results and thus not require treatment. These data inform clinical practice by highlighting the challenges in interpreting Xpert-positive results, underscore the need for culture, and have implications for next-generation ultrasensitive tests. Copyright © 2018 American Society for Microbiology.
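The reported specificity of 93% (95% CI, 89 to 96%) follows from the counts in the abstract: 16 Xpert-positive results among 229 culture-negative patients leaves 213 true negatives. A hedged sketch using the Wilson score interval (one common choice; the study may have used a different interval method) reproduces those figures:

```python
import math

def specificity_wilson(tn, fp, z=1.96):
    """Specificity = TN / (TN + FP), with a Wilson score confidence interval."""
    n = tn + fp
    p = tn / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return p, centre - half, centre + half
```

With tn=213 and fp=16 this yields roughly 0.93 (0.89 to 0.96), matching the abstract.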
Automatic detection of apical roots in oral radiographs
NASA Astrophysics Data System (ADS)
Wu, Yi; Xie, Fangfang; Yang, Jie; Cheng, Erkang; Megalooikonomou, Vasileios; Ling, Haibin
2012-03-01
The apical root regions play an important role in analysis and diagnosis of many oral diseases. Automatic detection of such regions is consequently the first step toward computer-aided diagnosis of these diseases. In this paper we propose an automatic method for periapical root region detection using state-of-the-art machine learning approaches. Specifically, we have adapted the AdaBoost classifier for apical root detection. One challenge in the task is the lack of training cases, especially diseased ones. To handle this problem, we augment the training set by including more root regions that are close to the annotated ones and decompose the original images to randomly generate negative samples. Based on these training samples, the AdaBoost algorithm in combination with Haar wavelets is utilized in this task to train an apical root detector. The learned detector usually generates a large number of true and false positives. In order to reduce the number of false positives, a confidence score for each candidate detection result is calculated for further purification. We first merge the detected regions by combining tightly overlapped detected candidate regions and then we use the confidence scores from the AdaBoost detector to eliminate the false positives. The proposed method is evaluated on a dataset containing 39 annotated digitized oral X-ray images from 21 patients. The experimental results show that our approach can achieve promising detection accuracy.
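The purification step, merging tightly overlapped candidates and then filtering by detector confidence, can be sketched as a greedy, overlap-based merge. This is an illustration of the general idea rather than the authors' exact procedure; the IoU and confidence thresholds are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def purify(detections, iou_thr=0.5, conf_thr=1.0):
    """detections: list of (box, confidence) from the detector.
    Greedily keep the most confident box in each overlapping cluster,
    then drop remaining candidates below the confidence threshold."""
    kept = []
    for box, conf in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(box, k) < iou_thr for k, _ in kept):
            kept.append((box, conf))
    return [(b, c) for b, c in kept if c >= conf_thr]
```

The merge removes duplicate hits on the same root; the confidence cut then removes the low-scoring isolated false positives.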
Langeslag-Smith, Miriam A; Vandal, Alain C; Briane, Vincent; Thompson, Benjamin; Anstice, Nicola S
2015-01-01
Objectives: To assess the accuracy of preschool vision screening in a large, ethnically diverse, urban population in South Auckland, New Zealand. Design: Retrospective longitudinal study. Methods: B4 School Check vision screening records (n=5572) were compared with hospital eye department data for children referred from screening due to impaired acuity in one or both eyes who attended a referral appointment (n=556). False positive screens were identified by comparing screening data from the eyes that failed screening with hospital data. Estimation of false negative screening rates relied on data from eyes that passed screening. Data were analysed using logistic regression modelling accounting for the high correlation between results for the two eyes of each child. Primary outcome measure: Positive predictive value of the preschool vision screening programme. Results: Screening produced high numbers of false positive referrals, resulting in poor positive predictive value (PPV=31%, 95% CI 26% to 38%). High estimated negative predictive value (NPV=92%, 95% CI 88% to 95%) suggested most children with a vision disorder were identified at screening. Relaxing the referral criteria for acuity from worse than 6/9 to worse than 6/12 improved PPV without adversely affecting NPV. Conclusions: The B4 School Check generated numerous false positive referrals and consequently had a low PPV. There is scope for reducing costs by altering the visual acuity criterion for referral. PMID:26614622
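The headline figures here are simple ratios over the screening outcome counts. A minimal sketch (the counts below are illustrative, chosen only to mirror the reported percentages, and are not the study's data):

```python
def screening_metrics(tp, fp, tn, fn):
    """Predictive values of a screening test from its confusion counts."""
    ppv = tp / (tp + fp)   # fraction of referrals that are true cases
    npv = tn / (tn + fn)   # fraction of passes that are truly disorder-free
    return ppv, npv
```

Relaxing the referral criterion trades referrals for misses: it shrinks fp (raising PPV) at the risk of growing fn (lowering NPV), which is why the abstract checks both.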
Combet, Emilie; Lean, Michael E J; Boyle, James G; Crozier, Alan; Davidson, D Fraser
2011-01-14
Urinary homovanillic acid (HVA) measurement is used routinely as a first-line screening test for catecholamine-secreting tumors and as a marker of dopamine metabolism, but it generates a large number of false-positive results. With no guidelines for dietary restrictions prior to the test, we hypothesize that consumption of flavonol-rich foods (such as onions, tomatoes, tea) prior to urinary catecholamine screening could be responsible for false-positive urinary HVA in healthy subjects. A randomized, crossover dietary intervention was carried out in healthy subjects (n=17). Volunteers followed either a low or high-flavonol diet, for a duration of 3 days, prior to providing a 24-h urine sample for HVA measurement using a routine, validated liquid chromatography method as well as a gas chromatography-mass spectrometry method. Dietary flavonol intake significantly increased urinary HVA excretion (p < 0.001), with 3 out of 17 volunteers (18%) exceeding the 40 μmol/24 h upper limit of normal for HVA excretion (false-positive result). Dietary flavonols commonly found in foodstuff such as tomatoes, onions, and tea, interfered with the routine urinary HVA screening test and should be avoided in the three-day run-up to the test. Copyright © 2010 Elsevier B.V. All rights reserved.
Marin, Stephanie J; Doyle, Kelly; Chang, Annie; Concheiro-Guisan, Marta; Huestis, Marilyn A; Johnson-Davis, Kamisha L
2016-01-01
Some amphetamine (AMP) and ecstasy (MDMA) urine immunoassay (IA) kits are prone to false-positive results due to poor specificity of the antibody. We employed two techniques, high-resolution mass spectrometry (HRMS) and an in silico structure search, to identify compounds likely to cause false-positive results. One hundred false-positive IA specimens for AMP and/or MDMA were analyzed by an Agilent 6230 time-of-flight (TOF) mass spectrometer. Separately, SciFinder (Chemical Abstracts) was used as an in silico structure search to generate a library of compounds that are known to cross-react with AMP/MDMA IAs. Compounds known to have cross-reactivity with the IAs were identified in the structure-based search. The chemical formulas and exact masses of the 145 resulting structures (spanning 20 chemical formulas) were then compared against the masses identified by TOF. Urine analysis by HRMS correlates accurate mass with chemical formulae but provides little information regarding compound structure. Structural data on targeted antigens can be used to correlate HRMS-derived chemical formulas with structural analogs. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
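Comparing TOF-derived exact masses against a library of candidate cross-reactants is, at its core, a tolerance search. A hedged sketch follows; the 10 ppm tolerance is an illustrative assumption, not the study's parameter, and the masses in the test are approximate monoisotopic values.

```python
def match_masses(observed, library, ppm_tol=10.0):
    """library: dict mapping compound name -> monoisotopic mass (Da).
    Returns, for each observed mass, the library entries whose mass
    lies within ppm_tol parts per million."""
    hits = {}
    for m in observed:
        hits[m] = [name for name, ref in library.items()
                   if abs(m - ref) / ref * 1e6 <= ppm_tol]
    return hits
```

As the abstract notes, a formula match is not a structure match: isomers share an exact mass, which is why the structure-based library is needed to narrow candidates.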
Kos, Bor; Valič, Blaž; Kotnik, Tadej; Gajšek, Peter
2012-10-07
Induction heating equipment is a source of strong and nonhomogeneous magnetic fields, which can exceed occupational reference levels. We investigated a case of an induction tempering tunnel furnace. Measurements of the emitted magnetic flux density (B) were performed during its operation and used to validate a numerical model of the furnace. This model was used to compute the values of B and the induced in situ electric field (E) for 15 different body positions relative to the source. For each body position, the computed B values were used to determine their maximum and average values, using six spatial averaging schemes (9-285 averaging points) and two averaging algorithms (arithmetic mean and quadratic mean). Maximum and average B values were compared to the ICNIRP reference level, and E values to the ICNIRP basic restriction. Our results show that in nonhomogeneous fields, the maximum B is an overly conservative predictor of overexposure, as it yields many false positives. The average B yielded fewer false positives, but as the number of averaging points increased, false negatives emerged. The most reliable averaging schemes were obtained for averaging over the torso with quadratic averaging, with no false negatives even for the maximum number of averaging points investigated.
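The spatial averaging algorithms compared in the study, arithmetic mean and quadratic mean (root-mean-square) of B over the averaging points, can be sketched as below; the comparison interface is illustrative.

```python
import math

def max_b(samples):
    """Maximum B over the averaging points (the most conservative predictor)."""
    return max(samples)

def arithmetic_mean(samples):
    return sum(samples) / len(samples)

def quadratic_mean(samples):
    """Root-mean-square of B over the averaging points."""
    return math.sqrt(sum(b * b for b in samples) / len(samples))

def exceeds_reference(samples, reference_level, scheme=quadratic_mean):
    """Compare a spatially averaged B value against a reference level."""
    return scheme(samples) > reference_level
```

Because the quadratic mean weights strong-field points more heavily than the arithmetic mean, it dilutes local maxima less, which is consistent with the study's finding that quadratic averaging over the torso avoided false negatives.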
Sociopathic Knowledge Bases: Correct Knowledge Can Be Harmful Even Given Unlimited Computation
1989-08-01
positive, as false positives generated by a medical program can often be caught by a physician upon further testing. False negatives, however, may be... improvement over the knowledge base tested is obtained. Although our work is largely theoretical, one example of experiments is... knowledge base, improves the performance by about 10%. ...of tests. First, we divide the cases into a training set and a validation set with 70% vs. 30% each
COMPARATIVE IN VITRO PULMONARY TOXICITY OF ENGINEERED, MANUFACTURED, AND ENVIRONMENTAL NANOPARTICLES
Engineered nanomaterials display many unique physicochemical properties for a variety of applications and, due to these novel properties, may have unique routes of exposure and toxicity. This study examines the: 1) ability of the MTT assay to generate false positives or...
Star tracker operation in a high density proton field
NASA Technical Reports Server (NTRS)
Miklus, Kenneth J.; Kissh, Frank; Flynn, David J.
1993-01-01
Algorithms that reject transient signals due to proton effects on charge coupled device (CCD) sensors have been implemented in the HDOS ASTRA-1 Star Trackers to be flown on the TOPEX mission scheduled for launch in July 1992. A unique technique for simulating a proton-rich environment to test trackers is described, as well as the test results obtained. Solar flares or an orbit that passes through the South Atlantic Anomaly can subject the vehicle to very high proton flux levels. There are three ways in which spurious proton generated signals can impact tracker performance: the many false signals can prevent or extend the time to acquire a star; a proton-generated signal can compromise the accuracy of the star's reported magnitude and position; and the tracked star can be lost, requiring reacquisition. Tests simulating a proton-rich environment were performed on two ASTRA-1 Star Trackers utilizing these new algorithms. There were no false acquisitions, no lost stars, and a significant reduction in reported position errors due to these improvements.
Larsen, C P; Ezligini, F; Hermansen, N O; Kjeldsen-Kragh, J
2005-02-01
Approximately 1 in every 2000 units of platelets is contaminated with bacteria. The BacT/ALERT automated blood culture system can be used to screen platelet concentrates (PCs) for bacterial contamination. Data were collected from May 1998 until May 2004. The number of PCs tested during this period was 36 896, most of which were produced from pools of four buffy-coats. On the day following blood collection or platelet apheresis, a 5-10 ml sample of the PC was aseptically transferred to a BacT/ALERT culture bottle for detection of aerobic bacteria. The sample was monitored for bacterial growth during the entire storage period of the PC (6.5 days). When a positive signal was generated, the culture bottle, the PC and the erythrocyte concentrates were tested for bacterial growth. In order to determine the frequency of false-negative BacT/ALERT signals, 1061 outdated PCs were tested during the period from May 2002 to May 2004. Eighty-eight positive signals were detected by the BacT/ALERT system, of which 12 were interpreted as truly positive. Fourteen signals were interpreted as truly false positive. Thirty-three signals were interpreted to be probably false positive. Two of 1061 outdated units tested positive, and Bacillus spp. and Staphylococcus epidermidis, respectively, were isolated from these PCs. Between 0.03% and 0.12% of the PCs were contaminated with bacteria. BacT/ALERT is an efficient tool for monitoring PCs for bacterial contamination; however, it is important to realize that false-negative results may occur.
Exoplanet Biosignatures: Understanding Oxygen as a Biosignature in the Context of Its Environment.
Meadows, Victoria S; Reinhard, Christopher T; Arney, Giada N; Parenteau, Mary N; Schwieterman, Edward W; Domagal-Goldman, Shawn D; Lincowski, Andrew P; Stapelfeldt, Karl R; Rauer, Heike; DasSarma, Shiladitya; Hegde, Siddharth; Narita, Norio; Deitrick, Russell; Lustig-Yaeger, Jacob; Lyons, Timothy W; Siegler, Nicholas; Grenfell, J Lee
2018-06-01
We describe how environmental context can help determine whether oxygen (O2) detected in extrasolar planetary observations is more likely to have a biological source. Here we provide an in-depth, interdisciplinary example of O2 biosignature identification and observation, which serves as the prototype for the development of a general framework for biosignature assessment. Photosynthetically generated O2 is a potentially strong biosignature, and at high abundance, it was originally thought to be an unambiguous indicator for life. However, as a biosignature, O2 faces two major challenges: (1) it was only present at high abundance for a relatively short period of Earth's history and (2) we now know of several potential planetary mechanisms that can generate abundant O2 without life being present. Consequently, our ability to interpret both the presence and absence of O2 in an exoplanetary spectrum relies on understanding the environmental context. Here we examine the coevolution of life with the early Earth's environment to identify how the interplay of sources and sinks may have suppressed O2 release into the atmosphere for several billion years, producing a false negative for biologically generated O2. These studies suggest that planetary characteristics that may enhance false negatives should be considered when selecting targets for biosignature searches. We review the most recent knowledge of false positives for O2, planetary processes that may generate abundant atmospheric O2 without a biosphere. We provide examples of how future photometric, spectroscopic, and time-dependent observations of O2 and other aspects of the planetary environment can be used to rule out false positives and thereby increase our confidence that any observed O2 is indeed a biosignature. These insights will guide and inform the development of future exoplanet characterization missions. Key Words: Biosignatures-Oxygenic photosynthesis-Exoplanets-Planetary atmospheres.
Astrobiology 18, 630-662.
False-Positive Rate of AKI Using Consensus Creatinine-Based Criteria.
Lin, Jennie; Fernandez, Hilda; Shashaty, Michael G S; Negoianu, Dan; Testani, Jeffrey M; Berns, Jeffrey S; Parikh, Chirag R; Wilson, F Perry
2015-10-07
Use of small changes in serum creatinine to diagnose AKI allows for earlier detection but may increase diagnostic false-positive rates because of inherent laboratory and biologic variabilities of creatinine. We examined serum creatinine measurement characteristics in a prospective observational clinical reference cohort of 2267 adult patients with AKI by Kidney Disease Improving Global Outcomes creatinine criteria and used these data to create a simulation cohort to model AKI false-positive rates. We simulated up to seven successive blood draws on an equal population of hypothetical patients with unchanging true serum creatinine values. Error terms generated from laboratory and biologic variabilities were added to each simulated patient's true serum creatinine value to obtain the simulated measured serum creatinine for each blood draw. We determined the proportion of patients who would be erroneously diagnosed with AKI by Kidney Disease Improving Global Outcomes creatinine criteria. Within the clinical cohort, 75.0% of patients received four serum creatinine draws within at least one 48-hour period during hospitalization. After four simulated creatinine measurements that accounted for laboratory variability calculated from assay characteristics and 4.4% biologic variability determined from the clinical cohort and publicly available data, the overall false-positive rate for AKI diagnosis was 8.0% (interquartile range = 7.9%-8.1%), whereas patients with true serum creatinine ≥1.5 mg/dl (representing 21% of the clinical cohort) had a false-positive AKI diagnosis rate of 30.5% (interquartile range = 30.1%-30.9%) versus 2.0% (interquartile range = 1.9%-2.1%) in patients with true serum creatinine values <1.5 mg/dl (P<0.001). Use of small serum creatinine changes to diagnose AKI is limited by high false-positive rates caused by inherent variability of serum creatinine at higher baseline values, potentially misclassifying patients with CKD in AKI studies.
Copyright © 2015 by the American Society of Nephrology.
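The simulation design above, repeated draws on patients whose true creatinine never changes, with variability added as error terms, can be sketched as follows. The laboratory CV, the combined-noise model, and the simplified rolling-baseline KDIGO check are illustrative assumptions, not the study's exact parameters; only the 4.4% biologic variability is taken from the abstract.

```python
import random

def simulate_false_positive_rate(true_scr, n_patients=10000, n_draws=4,
                                 cv_lab=0.02, cv_bio=0.044, seed=0):
    """Monte Carlo sketch: patients with an unchanging true serum creatinine
    (true_scr, mg/dl) receive n_draws measurements; laboratory and biologic
    variability are added as Gaussian error terms. A patient counts as a
    false positive if noise alone satisfies a simplified KDIGO creatinine
    check (rise >= 0.3 mg/dl above, or >= 1.5x, the lowest prior value)."""
    rng = random.Random(seed)
    cv = (cv_lab ** 2 + cv_bio ** 2) ** 0.5  # combined coefficient of variation
    false_positives = 0
    for _ in range(n_patients):
        draws = [rng.gauss(true_scr, cv * true_scr) for _ in range(n_draws)]
        baseline = draws[0]
        for value in draws[1:]:
            baseline = min(baseline, value)  # nadir acts as rolling baseline
            if value - baseline >= 0.3 or value >= 1.5 * baseline:
                false_positives += 1
                break
    return false_positives / n_patients
```

Because the noise scales with the true value while the 0.3 mg/dl criterion is absolute, the sketch produces markedly higher false-positive rates at higher baseline creatinine, mirroring the study's contrast between patients above and below 1.5 mg/dl.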
Effects of ecstasy/polydrug use on memory for associative information.
Gallagher, Denis T; Fisk, John E; Montgomery, Catharine; Judge, Jeannie; Robinson, Sarita J; Taylor, Paul J
2012-08-01
Associative learning underpins behaviours that are fundamental to the everyday functioning of the individual. Evidence pointing to learning deficits in recreational drug users merits further examination. A word pair learning task was administered to examine associative learning processes in ecstasy/polydrug users. After assignment to either single or divided attention conditions, 44 ecstasy/polydrug users and 48 non-users were presented with 80 word pairs at encoding. Following this, four types of stimuli were presented at the recognition phase: the words as originally paired (old pairs), previously presented words in different pairings (conjunction pairs), old words paired with new words, and pairs of new words (not presented previously). The task was to identify which of the stimuli were intact old pairs. Ecstasy/polydrug users produced significantly more false-positive responses overall compared to non-users. Increased long-term frequency of ecstasy use was positively associated with the propensity to produce false-positive responses. It was also associated with a more liberal signal detection theory decision criterion value. Measures of long term and recent cannabis use were also associated with these same word pair learning outcome measures. Conjunction word pairs, irrespective of drug use, generated the highest level of false-positive responses and significantly more false-positive responses were made in the divided attention condition compared to the single attention condition. Overall, the results suggest that long-term ecstasy exposure may induce a deficit in associative learning and this may be in part a consequence of users adopting a more liberal decision criterion value.
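The signal detection theory quantities referred to here, sensitivity (d') and the decision criterion (c), are standard transforms of the hit and false-alarm rates; a more negative c indicates a more liberal response bias. A brief sketch:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal detection theory indices from hit and false-alarm rates:
    d' = z(H) - z(F); criterion c = -(z(H) + z(F)) / 2.
    A more negative c reflects a more liberal tendency to respond 'old'."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2
```

In this framework, a group difference in false positives can reflect reduced sensitivity (lower d'), a shifted criterion (more negative c), or both, which is why the study reports the criterion separately.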
BlackOPs: increasing confidence in variant detection through mappability filtering.
Cabanski, Christopher R; Wilkerson, Matthew D; Soloway, Matthew; Parker, Joel S; Liu, Jinze; Prins, Jan F; Marron, J S; Perou, Charles M; Hayes, D Neil
2013-10-01
Identifying variants using high-throughput sequencing data is currently a challenge because true biological variants can be indistinguishable from technical artifacts. One source of technical artifact results from incorrectly aligning experimentally observed sequences to their true genomic origin ('mismapping') and inferring differences in mismapped sequences to be true variants. We developed BlackOPs, an open-source tool that simulates experimental RNA-seq and DNA whole exome sequences derived from the reference genome, aligns these sequences by custom parameters, detects variants and outputs a blacklist of positions and alleles caused by mismapping. Blacklists contain thousands of artifact variants that are indistinguishable from true variants and, for a given sample, are expected to be almost completely false positives. We show that these blacklist positions are specific to the alignment algorithm and read length used, and BlackOPs allows users to generate a blacklist specific to their experimental setup. We queried the dbSNP and COSMIC variant databases and found numerous variants indistinguishable from mapping errors. We demonstrate how filtering against blacklist positions reduces the number of potential false variants using an RNA-seq glioblastoma cell line data set. In summary, accounting for mapping-caused variants tuned to experimental setups reduces false positives and, therefore, improves genome characterization by high-throughput sequencing.
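The filtering step described above reduces, in essence, to a set-membership test against the blacklist. A minimal sketch in Python, assuming hypothetical candidate records and blacklist entries keyed by (chromosome, position, allele); this is illustrative, not the BlackOPs code:

```python
# Illustrative sketch (not the BlackOPs implementation): filter candidate
# variant calls against a blacklist of (chromosome, position, allele)
# entries known to arise from mismapping.  Names and data are hypothetical.

def filter_against_blacklist(candidates, blacklist):
    """Drop candidate variants whose (chrom, pos, alt) appears in the blacklist."""
    black = set(blacklist)
    return [v for v in candidates if (v["chrom"], v["pos"], v["alt"]) not in black]

# Hypothetical candidate calls and a blacklist generated from simulated
# reference-derived reads aligned with the same aligner and read length.
candidates = [
    {"chrom": "chr1", "pos": 101, "ref": "A", "alt": "G"},
    {"chrom": "chr1", "pos": 250, "ref": "C", "alt": "T"},  # mapping artifact
    {"chrom": "chr2", "pos": 333, "ref": "G", "alt": "A"},
]
blacklist = [("chr1", 250, "T")]

kept = filter_against_blacklist(candidates, blacklist)
print([v["pos"] for v in kept])  # -> [101, 333]
```

Because the blacklist is specific to the aligner and read length, it would be regenerated whenever the experimental setup changes.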
Reflections on O2 as a Biosignature in Exoplanetary Atmospheres
2017-01-01
Abstract Oxygenic photosynthesis is Earth's dominant metabolism, having evolved to harvest the largest expected energy source at the surface of most terrestrial habitable zone planets. Using CO2 and H2O—molecules that are expected to be abundant and widespread on habitable terrestrial planets—oxygenic photosynthesis is plausible as a significant planetary process with a global impact. Photosynthetic O2 has long been considered particularly robust as a sign of life on a habitable exoplanet, due to the lack of known “false positives”—geological or photochemical processes that could also produce large quantities of stable O2. O2 has other advantages as a biosignature, including its high abundance and uniform distribution throughout the atmospheric column and its distinct, strong absorption in the visible and near-infrared. However, recent modeling work has shown that false positives for abundant oxygen or ozone could be produced by abiotic mechanisms, including photochemistry and atmospheric escape. Environmental factors for abiotic O2 have been identified and will improve our ability to choose optimal targets and measurements to guard against false positives. Most of these false-positive mechanisms are dependent on properties of the host star and are often strongest for planets orbiting M dwarfs. In particular, selecting planets found within the conservative habitable zone and those orbiting host stars more massive than 0.4 M⊙ (M3V and earlier) may help avoid planets with abundant abiotic O2 generated by water loss. Searching for O4 or CO in the planetary spectrum, or the lack of H2O or CH4, could help discriminate between abiotic and biological sources of O2 or O3. In advance of the next generation of telescopes, thorough evaluation of potential biosignatures—including likely environmental context and factors that could produce false positives—ultimately works to increase our confidence in life detection. 
Key Words: Biosignatures—Exoplanets—Oxygen—Photosynthesis—Planetary spectra. Astrobiology 17, 1022–1052. PMID:28443722
WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research.
Nazir, Sajid; Newey, Scott; Irvine, R Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; Wal, René van der
2017-01-01
The widespread availability of relatively cheap, reliable and easy to use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named 'WiseEye', designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which the Passive Infrared triggering is confirmed through other modalities (i.e. radar, pixel change) to reduce the occurrence of false-positive images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false-positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management.
CNNdel: Calling Structural Variations on Low Coverage Data Based on Convolutional Neural Networks
2017-01-01
Many structural variation (SV) detection methods have been proposed due to the popularization of next-generation sequencing (NGS). These SV calling methods use different SV-property-dependent features; however, they all suffer from poor accuracy when running on low coverage sequences. The union of results from these tools achieves fairly high sensitivity but still produces low accuracy on low coverage sequence data. That is, these methods contain many false positives. In this paper, we present CNNdel, an approach for calling deletions from paired-end reads. CNNdel gathers SV candidates reported by multiple tools and then extracts features from aligned BAM files at the positions of candidates. With labeled feature-expressed candidates as a training set, CNNdel trains convolutional neural networks (CNNs) to distinguish true unlabeled candidates from false ones. Results show that CNNdel works well with NGS reads from 26 low coverage genomes of the 1000 Genomes Project. The paper demonstrates that convolutional neural networks can automatically assign the priority of SV features and reduce false positives effectively. PMID:28630866
Accurate indel prediction using paired-end short reads
2013-01-01
Background: One of the major open challenges in next generation sequencing (NGS) is the accurate identification of structural variants such as insertions and deletions (indels). Current methods for indel calling assign scores to different types of evidence or counter-evidence for the presence of an indel, such as the number of split read alignments spanning the boundaries of a deletion candidate or reads that map within a putative deletion. Candidates with a score above a manually defined threshold are then predicted to be true indels. As a consequence, structural variants detected in this manner contain many false positives. Results: Here, we present a machine learning based method which is able to discover and distinguish true from false indel candidates in order to reduce the false positive rate. Our method identifies indel candidates using a discriminative classifier based on features of split read alignment profiles and trained on true and false indel candidates that were validated by Sanger sequencing. We demonstrate the usefulness of our method with paired-end Illumina reads from 80 genomes of the first phase of the 1001 Genomes Project (http://www.1001genomes.org) in Arabidopsis thaliana. Conclusion: In this work we show that indel classification is a necessary step to reduce the number of false positive candidates. We demonstrate that omitting this classification step may lead to spurious biological interpretations. The software is available at: http://agkb.is.tuebingen.mpg.de/Forschung/SV-M/. PMID:23442375
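The shift this abstract describes, from a hand-tuned score threshold to a trained discriminative classifier over split-read features, can be sketched with a toy linear learner. The perceptron, the two features, and the data below are illustrative assumptions (the paper itself trains an SVM on richer alignment-profile features validated by Sanger sequencing):

```python
# Toy sketch: learn a decision boundary over split-read features instead of
# applying a manual score cutoff.  Perceptron and data are illustrative.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y in {-1, +1}
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Hypothetical features per candidate: (split reads spanning the breakpoint,
# reads mapping inside the putative deletion).  True deletions tend to show
# many spanning split reads and little internal coverage.
X = [(9, 1), (7, 0), (8, 2), (1, 6), (2, 9), (0, 7)]
y = [1, 1, 1, -1, -1, -1]
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # -> [1, 1, 1, -1, -1, -1]
```

The point of the classifier is that the boundary is fitted to validated examples rather than fixed by hand, which is what reduces the false positive rate.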
Detecting false positive sequence homology: a machine learning approach.
Fujimoto, M Stanley; Suvorov, Anton; Jensen, Nicholas O; Clement, Mark J; Bybee, Seth M
2016-02-24
Accurate detection of homologous relationships of biological sequences (DNA or amino acid) amongst organisms is an important and often difficult task that is essential to various evolutionary studies, ranging from building phylogenies to predicting functional gene annotations. There are many existing heuristic tools, most commonly based on bidirectional BLAST searches that are used to identify homologous genes and combine them into two fundamentally distinct classes: orthologs and paralogs. Due to only using heuristic filtering based on significance score cutoffs and having no cluster post-processing tools available, these methods can often produce multiple clusters constituting unrelated (non-homologous) sequences. Therefore sequencing data extracted from incomplete genome/transcriptome assemblies originated from low coverage sequencing or produced by de novo processes without a reference genome are susceptible to high false positive rates of homology detection. In this paper we develop biologically informative features that can be extracted from multiple sequence alignments of putative homologous genes (orthologs and paralogs) and further utilized in the context of guided experimentation to verify false positive outcomes. We demonstrate that our machine learning method trained on both known homology clusters obtained from OrthoDB and randomly generated sequence alignments (non-homologs), successfully determines apparent false positives inferred by heuristic algorithms especially among proteomes recovered from low-coverage RNA-seq data. Approximately 42% and 25% of the putative homologies predicted by InParanoid and HaMStR, respectively, were classified as false positives on the experimental data set. Our process increases the quality of output from other clustering algorithms by providing a novel post-processing method that is both fast and efficient at removing low quality clusters of putative homologous genes recovered by heuristic-based approaches.
An Evaluation of Unit and ½ Mass Correction Approaches as a ...
Rare earth elements (REE) and certain alkaline earths can produce M+2 interferences in ICP-MS because they have sufficiently low second ionization energies. Four REEs (150Sm, 150Nd, 156Gd and 156Dy) produce false positives on 75As and 78Se, and 132Ba can produce a false positive on 66Zn. Currently, US EPA Method 200.8 does not address these as sources of false positives. Additionally, these M+2 false positives are typically enhanced if collision cell technology is utilized to reduce polyatomic interferences associated with ICP-MS detection. Correction equations can be formulated using either a unit or ½ mass approach. The ½ mass correction approach does not suffer from the bias generated from polyatomic or end-user-based contamination at the unit mass but is limited by the abundance sensitivity of the adjacent mass. For instance, a unit-mass correction of the 156Gd interference at m/z 78 can be biased by residual 40Ar38Ar and 78Se, while the ½ mass approach can use m/z 77.5 or 78.5 and is limited by abundance sensitivity issues from masses 77 and 78 or 78 and 79, respectively. This presentation will evaluate the use of both unit and ½ mass correction approaches as a means of addressing M+2 false positives within the context of updating US EPA Method 200.8. This evaluation will include the analysis of As and Se standards near the detection limit in the presence of low (2 ppb) and high (50 ppb) levels of REE with benchmark concentrations estimated using
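The ½ mass correction reduces to simple isotope-ratio arithmetic: the signal at m/z 77.5 comes from doubly charged 155Gd, where no singly charged polyatomic can fall, and is scaled by the 156Gd/155Gd abundance ratio. A hedged worked example using standard natural abundances and hypothetical count rates:

```python
# Illustrative 1/2-mass correction of the 156Gd++ interference on m/z 78
# (78Se).  Abundances are standard isotopic values; counts are hypothetical.

AB_GD155 = 14.80   # % natural abundance of 155Gd
AB_GD156 = 20.47   # % natural abundance of 156Gd

def corrected_se78(counts_mz78, counts_mz77_5):
    """Subtract the estimated 156Gd++ contribution from the m/z 78 signal."""
    gd156_plus2 = counts_mz77_5 * (AB_GD156 / AB_GD155)
    return counts_mz78 - gd156_plus2

# Hypothetical: 5000 counts at m/z 78, with 1480 counts observed at
# m/z 77.5 (155Gd++).  The implied 156Gd++ contribution is 2047 counts.
print(round(corrected_se78(5000.0, 1480.0)))  # -> 2953
```

As the abstract notes, the residual limitation of this approach is abundance sensitivity: tailing from the intense adjacent integer masses can still contribute to the half-mass signal.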
Affected sib pair tests in inbred populations.
Liu, W; Weir, B S
2004-11-01
The affected-sib-pair (ASP) method for detecting linkage between a disease locus and marker loci was first established 50 years ago, and since then numerous modifications have been made. We modify two identity-by-state (IBS) test statistics of Lange (Lange, 1986a, 1986b) to allow for inbreeding in the population. We evaluate the power and false positive rates of the modified tests under three disease models, using simulated data. Before estimating false positive rates, we demonstrate that IBS tests are tests of both linkage and linkage disequilibrium (LD) between marker and disease loci. Therefore, the null hypothesis of IBS tests should be no linkage and no LD. When the population inbreeding coefficient is large, the false positive rates of Lange's tests become much larger than the nominal value, while those of our modified tests remain close to the nominal value. To estimate power with a controlled false positive rate, we choose the cutoff values based on simulated datasets under the null hypothesis, so that both Lange's tests and the modified tests generate the same false positive rate. The powers of Lange's z-test and our modified z-test are very close and do not change much with increasing inbreeding. The power of the modified chi-square test also stays stable when the inbreeding coefficient increases. However, the power of Lange's chi-square test increases with increasing inbreeding, and is larger than that of our modified chi-square test for large inbreeding coefficients. The power is high under a recessive disease model for both Lange's tests and the modified tests, though the power is low for additive and dominant disease models. Allowing for inbreeding is therefore appropriate, at least for diseases known to be recessive.
HangOut: generating clean PSI-BLAST profiles for domains with long insertions.
Kim, Bong-Hyun; Cong, Qian; Grishin, Nick V
2010-06-15
Profile-based similarity search is an essential step in structure-function studies of proteins. However, inclusion of non-homologous sequence segments into a profile causes its corruption and results in false positives. Profile corruption is common in multidomain proteins, and single domains with long insertions are a significant source of errors. We developed a procedure (HangOut) that, for a single domain with specified insertion position, cleans erroneously extended PSI-BLAST alignments to generate better profiles. HangOut is implemented in Python 2.3 and runs on all Unix-compatible platforms. The source code is available under the GNU GPL license at http://prodata.swmed.edu/HangOut/. Supplementary data are available at Bioinformatics online.
Cliquet, F; McElhinney, L M; Servat, A; Boucher, J M; Lowings, J P; Goddard, T; Mansfield, K L; Fooks, A R
2004-04-01
A protocol suitable for the detection of rabies virus-specific antibodies in serum samples from companion animals using an enzyme linked immunosorbent assay (ELISA) is described. This method has been used successfully for the qualitative assessment of rabies virus-specific antibodies in serum samples from a cohort of vaccinated dogs and cats. In two initial field studies, a variable population of field samples from the Veterinary Laboratories Agency (VLA), United Kingdom were tested. In the first study (n = 1000), the number of false-positive and false-negative results was 11 samples (1.1%) and 67 samples (6.7%), respectively. In the second study (n = 920), the number of false-positive and false-negative results was 7 samples (0.8%) and 52 samples (5.7%). In a third study, undertaken at l'Agence Française de Sécurité Sanitaire des Aliments (AFSSA), Nancy, France (n = 440), 1 false-positive sample (0.23%) and 91 (20.7%) false-negative samples were identified. Data generated using this prototype ELISA indicate a strong correlation for specificity when compared to the gold standard fluorescent antibody virus neutralisation (FAVN) test. Although the ELISA has a lower sensitivity than the FAVN test, it is a useful tool for rapidly screening serum samples from vaccinated companion animals. Using a cut-off value of 0.6 EU/ml, the sensitivity (R = % from VLA and 79% from AFSSA) and specificity (R = 97.3%) indices of the ELISA compared favourably with data generated using the FAVN test. The major advantages of the ELISA test are that it is a qualitative tool that can be completed in four hours, does not require the use of live virus and can be performed without the need for specialised laboratory containment. This contrasts with 4 days using conventional rabies antibody virus neutralisation assays.
Using the current format, the ELISA assay described would be a valuable screening tool for the detection of rabies antibodies from vaccinated domestic animals in combination with other Office International des Epizooties (OIE) accepted serological tests.
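For readers converting the reported false-positive/false-negative counts into sensitivity and specificity indices, both follow directly from confusion-matrix counts. The truth split below is hypothetical, since the abstract does not give the number of truly seropositive samples in each study:

```python
# Diagnostic-metric arithmetic behind the figures quoted above.
# The confusion-matrix split is hypothetical, for illustration only.

def sensitivity(tp, fn):
    """Fraction of truly positive samples correctly detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of truly negative samples correctly cleared."""
    return tn / (tn + fp)

# Hypothetical split for a 1000-sample study with 11 FP and 67 FN:
tp, fn = 633, 67   # 700 truly seropositive
tn, fp = 289, 11   # 300 truly seronegative
print(round(sensitivity(tp, fn), 3), round(specificity(tn, fp), 3))
```

This also shows why false-positive and false-negative counts alone do not determine the indices: the same counts over a different seroprevalence give different rates.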
Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations.
Zala, Sarah M; Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J
2017-01-01
House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4-12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a 'gold standard' reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community.
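The error-rate comparison against manually segmented files implies an interval-matching step: a detected segment counts as a correct positive if it sufficiently overlaps a gold-standard segment. A sketch under an assumed 50%-overlap criterion (the abstract does not specify A-MUD's actual matching rule):

```python
# Sketch of matching detected vocalization segments against manually
# segmented 'gold standard' intervals.  Overlap rule and data are assumptions.

def overlaps(a, b, min_frac=0.5):
    """True if detected interval a covers at least min_frac of gold interval b."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    return inter / (b[1] - b[0]) >= min_frac

def evaluate(detected, gold):
    tp = sum(any(overlaps(d, g) for d in detected) for g in gold)
    fn = len(gold) - tp
    fp = sum(not any(overlaps(d, g) for g in gold) for d in detected)
    return tp, fp, fn

gold = [(0.10, 0.20), (0.50, 0.65), (1.00, 1.10)]       # manual segments (s)
detected = [(0.11, 0.19), (0.52, 0.64), (2.00, 2.05)]   # detector output
print(evaluate(detected, gold))  # -> (2, 1, 1)
```

Counting errors this way makes the detector comparison systematic: the same matching rule is applied to A-MUD and to the commercial software.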
NASA Technical Reports Server (NTRS)
Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Votava, Petr; Roy, Anshuman; Mukhopadhyay, Supratik; Nemani, Ramakrishna
2015-01-01
Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets, which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.
Almannai, Mohammed; Marom, Ronit; Sutton, V Reid
2016-12-01
The purpose of this review is to summarize the development and recent advancements of newborn screening. Early initiation of medical care has modified the outcome for many disorders that were previously associated with high morbidity (such as cystic fibrosis, primary immune deficiencies, and inborn errors of metabolism) or with significant neurodevelopmental disabilities (such as phenylketonuria and congenital hypothyroidism). The new era of mass spectrometry and next generation sequencing enables the expansion of the newborn screening panel, and will help to address technical issues such as turnaround time and decreasing false-positive and false-negative rates of testing. The newborn screening program is a successful public health initiative that facilitates early diagnosis of treatable disorders to reduce long-term morbidity and mortality.
Olson, Nathan D; Zook, Justin M; Morrow, Jayne B; Lin, Nancy J
2017-01-01
High sensitivity methods such as next generation sequencing and polymerase chain reaction (PCR) are adversely impacted by organismal and DNA contaminants. Current methods for detecting contaminants in microbial materials (genomic DNA and cultures) are not sensitive enough and require either a known or culturable contaminant. Whole genome sequencing (WGS) is a promising approach for detecting contaminants due to its sensitivity and lack of need for a priori assumptions about the contaminant. Prior to applying WGS, we must first understand its limitations for detecting contaminants and potential for false positives. Herein we demonstrate and characterize a WGS-based approach to detect organismal contaminants using an existing metagenomic taxonomic classification algorithm. Simulated WGS datasets from ten genera as individuals and binary mixtures of eight organisms at varying ratios were analyzed to evaluate the role of contaminant concentration and taxonomy on detection. For the individual genomes the false positive contaminants reported depended on the genus, with Staphylococcus, Escherichia, and Shigella having the highest proportion of false positives. For nearly all binary mixtures the contaminant was detected in the in-silico datasets at the equivalent of 1 in 1,000 cells, though F. tularensis was not detected in any of the simulated contaminant mixtures and Y. pestis was only detected at the equivalent of one in 10 cells. Once a WGS method for detecting contaminants is characterized, it can be applied to evaluate microbial material purity, in efforts to ensure that contaminants are characterized in microbial materials used to validate pathogen detection assays, generate genome assemblies for database submission, and benchmark sequencing methods.
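The 1-in-1,000-cell detection level reported above corresponds to a simple abundance threshold on the taxonomic classifier's output. An illustrative sketch, with hypothetical per-genus read counts standing in for the classifier's report:

```python
# Illustration of the detection criterion implied above: flag any taxon
# other than the expected organism whose read fraction exceeds a threshold
# of roughly 1 contaminant cell in 1,000.  Counts are hypothetical.

def flag_contaminants(read_counts, expected, threshold=1e-3):
    total = sum(read_counts.values())
    return sorted(
        genus for genus, n in read_counts.items()
        if genus != expected and n / total >= threshold
    )

read_counts = {
    "Bacillus": 990_000,       # the expected material
    "Escherichia": 9_000,      # contaminant well above threshold
    "Staphylococcus": 5,       # likely classifier noise / false positive
}
print(flag_contaminants(read_counts, expected="Bacillus"))  # -> ['Escherichia']
```

A real pipeline would additionally account for genus-specific false positive behaviour (e.g. the elevated rates for Staphylococcus, Escherichia, and Shigella noted above) before calling a contaminant.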
A deep learning framework for the automated inspection of complex dual-energy x-ray cargo imagery
NASA Astrophysics Data System (ADS)
Rogers, Thomas W.; Jaccard, Nicolas; Griffin, Lewis D.
2017-05-01
Previously, we investigated the use of Convolutional Neural Networks (CNNs) to detect so-called Small Metallic Threats (SMTs) hidden amongst legitimate goods inside a cargo container. We trained a CNN from scratch on data produced by a Threat Image Projection (TIP) framework that generates images with realistic variation to robustify performance. The system achieved 90% detection of containers that contained a single SMT, while raising 6% false positives on benign containers. The best CNN architecture used the raw high energy image (single-energy) and its logarithm as input channels. Use of the logarithm improved performance, thus echoing studies on human operator performance. However, it is an unexpected result with CNNs. In this work, we (i) investigate methods to exploit material information captured in dual-energy images, and (ii) introduce a new CNN training scheme that generates `spot-the-difference' benign and threat pairs on-the-fly. To the best of our knowledge, this is the first time that CNNs have been applied directly to raw dual-energy X-ray imagery, in any field. To exploit dual-energy, we experiment with adapting several physics-derived approaches to material discrimination from the cargo literature, and introduce three novel variants. We hypothesise that CNNs can implicitly learn about the material characteristics of objects from the raw dual-energy images, and use this to suppress false positives. The best performing method is able to detect 95% of containers containing a single SMT, while raising 0.4% false positives on benign containers. This is a step change improvement in performance over our prior work.
Cai, Jing; Yan, Beizhan; Kinney, Patrick L.; Perzanowski, Matthew S.; Jung, Kyung-Hwa; Li, Tiantian; Xiu, Guangli; Zhang, Danian; Olivo, Cosette; Ross, James; Miller, Rachel L.; Chillrud, Steven N.
2014-01-01
Exposure to ambient black carbon (BC) is associated with adverse health effects. Black carbon levels display large spatial and temporal variability in many settings, such as cities and rural households where fossil fuel and biomass, respectively, are commonly burned for transportation, heat and cooking. This paper addresses the optimization of the miniaturized personal BC monitor, the microAeth® for use in epidemiology studies. To address false positive and negative peaks in real time BC concentrations resulting from changes in temperature and humidity, an inlet with a diffusion drier was developed. In addition, we developed data cleaning algorithms to address occasional false positive and negative fluctuations in BC readings related to physical vibration, due in part to both dirt accumulations in the optical inserts and degraded components. These methods were successfully used to process real-time BC data generated from a cohort of 9-10 year old children (N = 54) in NYC, who wore 1 or 2 microAeth units for six 24-hr time periods. Two hour and daily BC averages after data cleaning were consistent with averaged raw data (slopes near 1 with R = 0.99, p<0.001; R = 0.95, p<0.001, respectively), strongly suggesting that the false positive and negative excursions balance each other out when averaged for at least 2 hrs. Data cleaning of identified suspect events allows more confidence in the interpretation of the real-time personal monitoring data generated in environmental exposure studies, with mean percent difference <10% for 19 duplicate deployments. PMID:25558122
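One plausible form of the data-cleaning step mentioned above is a sliding-window median filter that flags and replaces isolated vibration-induced excursions; the window size, threshold, and data here are illustrative assumptions, not the paper's actual algorithm:

```python
# Minimal sketch of one plausible spike-cleaning rule for real-time BC data:
# replace points that deviate strongly from the median of their neighbours.
# Window, threshold, and readings are illustrative assumptions.
import statistics

def despike(series, window=5, threshold=3.0):
    half = window // 2
    cleaned = list(series)
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        neighbours = series[lo:i] + series[i + 1:hi]
        med = statistics.median(neighbours)
        if abs(series[i] - med) > threshold:
            cleaned[i] = med   # replace the flagged excursion
    return cleaned

raw = [1.2, 1.1, 1.3, 9.0, 1.2, 1.1, -4.0, 1.0, 1.2]  # with two spike events
print([round(v, 2) for v in despike(raw)])
# -> [1.2, 1.1, 1.3, 1.15, 1.2, 1.1, 1.15, 1.0, 1.2]
```

Because isolated positive and negative excursions are replaced by local medians, longer-window averages of the cleaned series stay close to the raw averages, consistent with the 2-hour and daily agreement reported above.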
Automatic detection of ECG cable interchange by analyzing both morphology and interlead relations.
Han, Chengzong; Gregg, Richard E; Feild, Dirk Q; Babaeizadeh, Saeed
2014-01-01
ECG cable interchange can generate erroneous diagnoses. For algorithms detecting ECG cable interchange, high specificity is required to maintain a low total false positive rate because the prevalence of interchange is low. In this study, we propose and evaluate an improved algorithm for automatic detection and classification of ECG cable interchange. The algorithm was developed by using both ECG morphology information and redundancy information. ECG morphology features included QRS-T and P-wave amplitude, frontal axis and clockwise vector loop rotation. The redundancy features were derived based on the EASI™ lead system transformation. The classification was implemented using linear support vector machine. The development database came from multiple sources including both normal subjects and cardiac patients. An independent database was used to test the algorithm performance. Common cable interchanges were simulated by swapping either limb cables or precordial cables. For the whole validation database, the overall sensitivity and specificity for detecting precordial cable interchange were 56.5% and 99.9%, and the sensitivity and specificity for detecting limb cable interchange (excluding left arm-left leg interchange) were 93.8% and 99.9%. Defining precordial cable interchange or limb cable interchange as a single positive event, the total false positive rate was 0.7%. When the algorithm was designed for higher sensitivity, the sensitivity for detecting precordial cable interchange increased to 74.6% and the total false positive rate increased to 2.7%, while the sensitivity for detecting limb cable interchange was maintained at 93.8%. The low total false positive rate was maintained at 0.6% for the more abnormal subset of the validation database including only hypertrophy and infarction patients. 
The proposed algorithm can detect and classify ECG cable interchanges with high specificity and low total false positive rate, at the cost of decreased sensitivity for certain precordial cable interchanges. The algorithm could also be configured for higher sensitivity for different applications where a lower specificity can be tolerated. Copyright © 2014 Elsevier Inc. All rights reserved.
Evaluation of machine learning algorithms for improved risk assessment for Down's syndrome.
Koivu, Aki; Korpimäki, Teemu; Kivelä, Petri; Pahikkala, Tapio; Sairanen, Mikko
2018-05-04
Prenatal screening generates a great amount of data that is used for predicting risk of various disorders. Prenatal risk assessment is based on multiple clinical variables, and overall performance is defined by how well the risk algorithm is optimized for the population in question. This article evaluates machine learning algorithms to improve the performance of first trimester screening for Down syndrome. Machine learning algorithms pose an adaptive alternative for developing better risk assessment models from the existing clinical variables. Two real-world data sets were used to experiment with multiple classification algorithms. Implemented models were tested with a third real-world data set, and performance was compared to a predicate method, a commercial risk assessment software. The best performing deep neural network model gave an area under the curve of 0.96 and a detection rate of 78% at a 1% false positive rate with the test data. The support vector machine model gave an area under the curve of 0.95 and a detection rate of 61% at a 1% false positive rate with the same test data. When compared with the predicate method, the best support vector machine model was slightly inferior, but an optimized deep neural network model was able to give higher detection rates at the same false positive rate, or a similar detection rate with a markedly lower false positive rate. This finding could further improve first trimester screening for Down syndrome, by using existing clinical variables and large training data derived from a specific population. Copyright © 2018 Elsevier Ltd. All rights reserved.
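Reporting a detection rate at a fixed 1% false positive rate, as above, amounts to setting the decision threshold at a quantile of the negative-class scores. A minimal sketch of that calculation (the function name and data are illustrative, not from the study):

```python
import numpy as np

def detection_rate_at_fpr(y_true, scores, max_fpr=0.01):
    """Detection rate (sensitivity) when the decision threshold is set
    so that at most max_fpr of the negatives score above it."""
    y = np.asarray(y_true, dtype=bool)
    s = np.asarray(scores, dtype=float)
    neg = np.sort(s[~y])[::-1]       # negative-class scores, descending
    k = int(max_fpr * neg.size)      # number of false positives allowed
    thr = neg[k] if k < neg.size else -np.inf
    return float(np.mean(s[y] > thr)), float(np.mean(s[~y] > thr))
```

With 200 negatives and max_fpr=0.01, the threshold admits the top two negative scores as false positives and reports the fraction of positives detected above it.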
A New Standard for Assessing the Performance of High Contrast Imaging Systems
NASA Astrophysics Data System (ADS)
Jensen-Clem, Rebecca; Mawet, Dimitri; Gomez Gonzalez, Carlos A.; Absil, Olivier; Belikov, Ruslan; Currie, Thayne; Kenworthy, Matthew A.; Marois, Christian; Mazoyer, Johan; Ruane, Garreth; Tanner, Angelle; Cantalloube, Faustine
2018-01-01
As planning for the next generation of high contrast imaging instruments (e.g., WFIRST, HabEx, and LUVOIR, TMT-PFI, EELT-EPICS) matures and second-generation ground-based extreme adaptive optics facilities (e.g., VLT-SPHERE, Gemini-GPI) finish their principal surveys, it is imperative that the performance of different designs, post-processing algorithms, observing strategies, and survey results be compared in a consistent, statistically robust framework. In this paper, we argue that the current industry standard for such comparisons—the contrast curve—falls short of this mandate. We propose a new figure of merit, the “performance map,” that incorporates three fundamental concepts in signal detection theory: the true positive fraction, the false positive fraction, and the detection threshold. By supplying a theoretical basis and recipe for generating the performance map, we hope to encourage the widespread adoption of this new metric across subfields in exoplanet imaging.
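The three ingredients of the proposed performance map can be tabulated directly: for each detection threshold tau, count the fraction of injected companion signals recovered (true positive fraction) and the fraction of noise realizations crossing tau (false positive fraction). A schematic sketch only; the S/N inputs and names are illustrative, not part of the paper's recipe:

```python
import numpy as np

def performance_points(injected_snr, noise_snr, thresholds):
    """(tau, TPF, FPF) triples: TPF is the fraction of injected signals
    with S/N >= tau, FPF the fraction of noise samples with S/N >= tau."""
    inj = np.asarray(injected_snr, dtype=float)
    noise = np.asarray(noise_snr, dtype=float)
    return [(tau, float(np.mean(inj >= tau)), float(np.mean(noise >= tau)))
            for tau in thresholds]
```

Sweeping tau traces out how the detection threshold trades completeness against false alarms, which is exactly the information a single contrast curve hides.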
Ma, Michelle; Rice, Tyler A; Percopo, Caroline M; Rosenberg, Helene F
2017-01-01
The silkworm larvae plasma (SLP) assay has been developed as a means to detect bacterial peptidoglycan as a surrogate for live bacteria. Here, we present results that indicate that generation of melanin by this assay is not fully reliable as a surrogate marker for bacterial count. Published by Elsevier B.V.
Extensive testing or focused testing of patients with elevated liver enzymes.
Tapper, Elliot B; Saini, Sameer D; Sengupta, Neil
2017-02-01
Many patients have elevated serum aminotransferases reflecting many underlying conditions, both common and rare. Clinicians generally apply one of two evaluative strategies: testing for all diseases at once (extensive) or just common diseases first (focused). We simulated the evaluation of 10,000 adult outpatients with elevated alanine aminotransferase to compare both testing strategies. Model inputs employed population-based data from the US (National Health and Nutrition Examination Survey) and Britain (Birmingham and Lambeth Liver Evaluation Testing Strategies). Patients were followed until a diagnosis was provided or a diagnostic liver biopsy was considered. The primary outcome was US dollars per diagnosis. Secondary outcomes included doctor visits per diagnosis, false-positives per diagnosis and confirmatory liver biopsies ordered. The extensive testing strategy required the lowest monetary cost, yielding diagnoses for 54% of patients at $448/patient compared to 53% for $502 under the focused strategy. The extensive strategy also required fewer doctor visits (1.35 vs. 1.61 visits/patient). However, the focused strategy generated fewer false-positives (0.1 vs. 0.19/patient) and more biopsies (0.04 vs. 0.08/patient). Focused testing becomes the most cost-effective strategy when accounting for pre-test probabilities and prior evaluations performed. This includes when the respective prevalence of alcoholic, non-alcoholic and drug-induced liver disease exceeds 51.1%, 53.0% and 13.0%. Focused testing is also the most cost-effective strategy in the referral setting where assessments for viral hepatitis, alcoholic and non-alcoholic fatty liver disease have already been performed. Testing for elevated liver enzymes should be deliberate and focused to account for pre-test probabilities if possible. Many patients have elevated liver enzymes reflecting one of many possible liver diseases, some of which are very common and some of which are rare.
Tests are widely available for most causes but it is unclear whether clinicians should order them all at once or direct testing based on how likely a given disease may be given the patient's history and physical exam. The tradeoffs of both approaches involve the money spent on testing, number of office visits needed, and false positive results generated. This study shows that if there are no clues available at the time of evaluation, testing all at once saves time and money while causing more false positives. However, if there are strong clues regarding the likelihood of a particular disease, limited testing saves time, money and prevents false positives. Copyright © 2016 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations
Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J.
2017-01-01
House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4–12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a ‘gold standard’ reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community. PMID:28727808
Choi, Hae-Yoon; Kensinger, Elizabeth A; Rajaram, Suparna
2017-09-01
Social transmission of memory and its consequence on collective memory have generated enduring interdisciplinary interest because of their widespread significance in interpersonal, sociocultural, and political arenas. We tested the influence of 3 key factors-emotional salience of information, group structure, and information distribution-on mnemonic transmission, social contagion, and collective memory. Participants individually studied emotionally salient (negative or positive) and nonemotional (neutral) picture-word pairs that were completely shared, partially shared, or unshared within participant triads, and then completed 3 consecutive recalls in 1 of 3 conditions: individual-individual-individual (control), collaborative-collaborative (identical group; insular structure)-individual, and collaborative-collaborative (reconfigured group; diverse structure)-individual. Collaboration enhanced negative memories especially in insular group structure and especially for shared information, and promoted collective forgetting of positive memories. Diverse group structure reduced this negativity effect. Unequally distributed information led to social contagion that creates false memories; diverse structure propagated a greater variety of false memories whereas insular structure promoted confidence in false recognition and false collective memory. A simultaneous assessment of network structure, information distribution, and emotional valence breaks new ground to specify how network structure shapes the spread of negative memories and false memories, and the emergence of collective memory. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Lantieri, Francesca; Malacarne, Michela; Gimelli, Stefania; Santamaria, Giuseppe; Coviello, Domenico; Ceccherini, Isabella
2017-01-01
The presence of false positive and false negative results in the Array Comparative Genomic Hybridization (aCGH) design is poorly addressed in literature reports. We took advantage of a custom aCGH recently carried out to analyze its design performance, the use of several Agilent aberration detection algorithms, and the presence of false results. Our study provides a confirmation that the high density design does not generate more noise than standard designs and might reach a good resolution. We noticed a non-negligible presence of false negative and false positive results in the imbalance calls performed by the Agilent software. The Aberration Detection Method 2 (ADM-2) algorithm with a threshold of 6 performed quite well, and the array design proved to be reliable, provided that some additional filters are applied, such as considering only intervals with average absolute log2ratio above 0.3. We also propose an additional filter that takes into account the proportion of probes with log2ratio exceeding suggestive values for gain or loss. In addition, the quality of samples was confirmed to be a crucial parameter. Finally, this work raises the importance of evaluating the sample profiles by eye and the necessity of validating the imbalances detected. PMID:28287439
Wang, Li-jun; Lu, Xin-xin; Wu, Wei; Sui, Wen-jun; Zhang, Gui
2014-01-01
In order to evaluate a rapid matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) assay in screening for vancomycin-resistant Enterococcus faecium, a total of 150 E. faecium clinical strains were studied, including 60 vancomycin-resistant E. faecium (VREF) isolates and 90 vancomycin-susceptible (VSEF) strains. Vancomycin resistance genes were detected by sequencing. E. faecium isolates were identified by MALDI-TOF MS. A genetic algorithm model with ClinProTools software was generated using spectra of 30 VREF isolates and 30 VSEF isolates. Using this model, 90 test isolates were discriminated between VREF and VSEF. The results showed that all sixty VREF isolates carried the vanA gene. The performance of VREF detection by the genetic algorithm model of MALDI-TOF MS compared to the sequencing method was sensitivity = 80%, specificity = 90%, false positive rate = 10%, false negative rate = 10%, positive predictive value = 80%, negative predictive value = 90%. MALDI-TOF MS can be used as a screening test for discrimination between vanA-positive E. faecium and vanA-negative E. faecium.
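Rates of the kind reported above all follow from a standard 2x2 confusion table. A small helper (the counts below are illustrative, chosen only to reproduce rates of the same form as those reported):

```python
def confusion_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "false_positive_rate": fp / (fp + tn),  # 1 - specificity
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
    }
```

For example, 24 true positives, 6 false negatives, 54 true negatives and 6 false positives give sensitivity 80%, specificity 90%, PPV 80% and NPV 90%.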
Zook, Justin M.; Morrow, Jayne B.; Lin, Nancy J.
2017-01-01
High sensitivity methods such as next generation sequencing and polymerase chain reaction (PCR) are adversely impacted by organismal and DNA contaminants. Current methods for detecting contaminants in microbial materials (genomic DNA and cultures) are not sensitive enough and require either a known or culturable contaminant. Whole genome sequencing (WGS) is a promising approach for detecting contaminants due to its sensitivity and lack of need for a priori assumptions about the contaminant. Prior to applying WGS, we must first understand its limitations for detecting contaminants and potential for false positives. Herein we demonstrate and characterize a WGS-based approach to detect organismal contaminants using an existing metagenomic taxonomic classification algorithm. Simulated WGS datasets from ten genera as individuals and binary mixtures of eight organisms at varying ratios were analyzed to evaluate the role of contaminant concentration and taxonomy on detection. For the individual genomes the false positive contaminants reported depended on the genus, with Staphylococcus, Escherichia, and Shigella having the highest proportion of false positives. For nearly all binary mixtures the contaminant was detected in the in-silico datasets at the equivalent of 1 in 1,000 cells, though F. tularensis was not detected in any of the simulated contaminant mixtures and Y. pestis was only detected at the equivalent of one in 10 cells. Once a WGS method for detecting contaminants is characterized, it can be applied to evaluate microbial material purity, in efforts to ensure that contaminants are characterized in microbial materials used to validate pathogen detection assays, generate genome assemblies for database submission, and benchmark sequencing methods. PMID:28924496
NASA Astrophysics Data System (ADS)
Lee, Haeil; Lee, Hansang; Park, Minseok; Kim, Junmo
2017-03-01
Lung cancer is the most common cause of cancer-related death. To diagnose lung cancers in early stages, numerous studies and approaches have been developed for cancer screening with computed tomography (CT) imaging. In recent years, convolutional neural networks (CNN) have become one of the most common and reliable techniques in computer aided detection (CADe) and diagnosis (CADx), achieving state-of-the-art performance on various tasks. In this study, we propose a CNN classification system for false positive reduction of initially detected lung nodule candidates. First, image patches of lung nodule candidates are extracted from CT scans to train a CNN classifier. To reflect the volumetric contextual information of lung nodules in a 2D image patch, we propose weighted average image patch (WAIP) generation by averaging multiple slice images of lung nodule candidates. Moreover, to emphasize central slices of lung nodules, slice images are locally weighted according to a Gaussian distribution and averaged to generate the 2D WAIP. With these extracted patches, a 2D CNN is trained to classify WAIPs of lung nodule candidates into positive and negative labels. We used the LUNA 2016 public challenge database to validate the performance of our approach for false positive reduction in lung CT nodule classification. Experiments show our approach improves the classification accuracy of lung nodules compared to the baseline 2D CNN with patches from a single slice image.
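The WAIP step described above can be sketched directly: the slices of a candidate volume are averaged with Gaussian weights centered on the middle slice, so central slices dominate the resulting 2D patch. A minimal sketch, assuming a (slices, height, width) array; the function name and sigma value are illustrative, not from the paper:

```python
import numpy as np

def gaussian_waip(volume, sigma=1.0):
    """Collapse a (n_slices, H, W) nodule candidate volume into a single
    2D patch by Gaussian-weighted averaging along the slice axis."""
    n = volume.shape[0]
    z = np.arange(n) - (n - 1) / 2.0        # slice offsets from the center
    w = np.exp(-0.5 * (z / sigma) ** 2)     # Gaussian weight per slice
    w /= w.sum()                            # weights sum to 1
    return np.tensordot(w, volume, axes=1)  # (H, W) weighted average
```

The resulting 2D patches can then be fed to an ordinary 2D CNN while still carrying volumetric context.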
McCoy, Gary R; Touzet, Nicolas; Fleming, Gerard T A; Raine, Robin
2015-07-01
The toxic microalgal species Prymnesium parvum and Prymnesium polylepis are responsible for numerous fish kills, causing economic stress on the aquaculture industry, and, through the consumption of contaminated shellfish, can potentially impact on human health. Monitoring of toxic phytoplankton is traditionally carried out by light microscopy. However, molecular methods of identification and quantification are becoming more commonplace. This study documents the optimisation of the novel Microarrays for the Detection of Toxic Algae (MIDTAL) microarray from its initial stages to the final commercial version now available from Microbia Environnement (France). Existing oligonucleotide probes used in whole-cell fluorescent in situ hybridisation (FISH) for Prymnesium species, from higher-group probes to species-level probes, were adapted and tested on the first-generation microarray. The combination and interaction of numerous other probes specific for a whole range of phytoplankton taxa also spotted on the chip surface caused high cross-reactivity, resulting in false-positive results on the microarray. The probe sequences were extended for the subsequent second-generation microarray, and further adaptations of the hybridisation protocol and incubation temperatures significantly reduced false-positive readings from the first- to the second-generation chip, thereby increasing the specificity of the MIDTAL microarray. Additional refinement of the subsequent third-generation microarray protocols, with the addition of a poly-T amino linker to the 5' end of each probe, further enhanced the microarray performance but also highlighted the importance of optimising RNA labelling efficiency when testing with natural seawater samples from Killary Harbour, Ireland.
Minkler, Paul E; Stoll, Maria S K; Ingalls, Stephen T; Hoppel, Charles L
2017-04-01
While selectively quantifying acylcarnitines in thousands of patient samples using UHPLC-MS/MS, we have occasionally observed unidentified branched-chain C8 acylcarnitines. Such observations are not possible using tandem MS methods, which generate pseudo-quantitative acylcarnitine "profiles". Since these "profiles" select for mass alone, they cannot distinguish authentic signal from isobaric and isomeric interferences. For example, some of the samples containing branched-chain C8 acylcarnitines were, in fact, expanded newborn screening false positive "profiles" for medium-chain acyl-CoA dehydrogenase deficiency (MCADD). Using our fast, highly selective, and quantitatively accurate UHPLC-MS/MS acylcarnitine determination method, we corrected the false positive tandem MS results and reported the sample results as normal for octanoylcarnitine (the marker for MCADD). From instances such as these, we decided to further investigate the presence of branched-chain C8 acylcarnitines in patient samples. To accomplish this, we synthesized and chromatographically characterized several branched-chain C8 acylcarnitines (in addition to valproylcarnitine): 2-methylheptanoylcarnitine, 6-methylheptanoylcarnitine, 2,2-dimethylhexanoylcarnitine, 3,3-dimethylhexanoylcarnitine, 3,5-dimethylhexanoylcarnitine, 2-ethylhexanoylcarnitine, and 2,4,4-trimethylpentanoylcarnitine. We then compared their behavior with branched-chain C8 acylcarnitines observed in patient samples and demonstrated our ability to chromatographically resolve, and thus distinguish, octanoylcarnitine from branched-chain C8 acylcarnitines, correcting false positive MCADD results from expanded newborn screening. Copyright © 2017 Elsevier Inc. All rights reserved.
Sequence and structural analyses of nuclear export signals in the NESdb database
Xu, Darui; Farmer, Alicia; Collett, Garen; Grishin, Nick V.; Chook, Yuh Min
2012-01-01
We compiled >200 nuclear export signal (NES)–containing CRM1 cargoes in a database named NESdb. We analyzed the sequences and three-dimensional structures of natural, experimentally identified NESs and of false-positive NESs that were generated from the database in order to identify properties that might distinguish the two groups of sequences. Analyses of amino acid frequencies, sequence logos, and agreement with existing NES consensus sequences revealed strong preferences for the Φ1-X3-Φ2-X2-Φ3-X-Φ4 pattern and for negatively charged amino acids in the nonhydrophobic positions of experimentally identified NESs but not of false positives. Strong preferences against certain hydrophobic amino acids in the hydrophobic positions were also revealed. These findings led to a new and more precise NES consensus. More important, three-dimensional structures are now available for 68 NESs within 56 different cargo proteins. Analyses of these structures showed that experimentally identified NESs are more likely than the false positives to adopt α-helical conformations that transition to loops at their C-termini and more likely to be surface accessible within their protein domains or be present in disordered or unobserved parts of the structures. Such distinguishing features for real NESs might be useful in future NES prediction efforts. Finally, we also tested CRM1-binding of 40 NESs that were found in the 56 structures. We found that 16 of the NES peptides did not bind CRM1, hence illustrating how NESs are easily misidentified. PMID:22833565
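The Φ1-X3-Φ2-X2-Φ3-X-Φ4 spacing described above can be scanned for with a simple regular expression. A sketch; note that the residue set chosen for the Φ (hydrophobic) positions is a common assumption, not the study's exact definition:

```python
import re

# One common choice for the hydrophobic (Phi) residues; the exact set
# varies between NES consensus definitions, so treat it as an assumption.
PHI = "[LIVFM]"
# Phi1-X3-Phi2-X2-Phi3-X-Phi4 spacing: a 10-residue window
NES_PATTERN = re.compile(PHI + ".{3}" + PHI + ".{2}" + PHI + "." + PHI)

def nes_candidates(sequence):
    """Non-overlapping substrings matching the NES spacing pattern."""
    return [m.group() for m in NES_PATTERN.finditer(sequence)]
```

As the abstract notes, spacing alone overpredicts badly; the structural criteria (helical conformation, surface accessibility) are what separate real NESs from the many false positives such a scan returns.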
Robust Detection of Rare Species Using Environmental DNA: The Importance of Primer Specificity
Wilcox, Taylor M.; McKelvey, Kevin S.; Young, Michael K.; Jane, Stephen F.; Lowe, Winsor H.; Whiteley, Andrew R.; Schwartz, Michael K.
2013-01-01
Environmental DNA (eDNA) is being rapidly adopted as a tool to detect rare animals. Quantitative PCR (qPCR) using probe-based chemistries may represent a particularly powerful tool because of the method’s sensitivity, specificity, and potential to quantify target DNA. However, there has been little work on understanding the performance of these assays in the presence of closely related, sympatric taxa. If related species cause any cross-amplification or interference, false positives and negatives may be generated. These errors can be disastrous if false positives lead to overestimates of the abundance of an endangered species or if false negatives prevent detection of an invasive species. In this study we test factors that influence the specificity and sensitivity of TaqMan MGB assays using co-occurring, closely related brook trout (Salvelinus fontinalis) and bull trout (S. confluentus) as a case study. We found qPCR to be substantially more sensitive than traditional PCR, with a high probability of detection at concentrations as low as 0.5 target copies/µl. We also found that the number and placement of base pair mismatches between the TaqMan MGB assay and non-target templates was important to target specificity, and that specificity was most influenced by base pair mismatches in the primers, rather than in the probe. We found that insufficient specificity can result in both false positive and false negative results, particularly in the presence of abundant related species. Our results highlight the utility of qPCR as a highly sensitive eDNA tool, and underscore the importance of careful assay design. PMID:23555689
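Detection at concentrations as low as 0.5 target copies/µl can be rationalized with a Poisson subsampling model: if template at c copies/µl is added in a volume of v µl, the chance that the reaction receives at least one target copy is 1 - exp(-c*v). A back-of-the-envelope sketch; the 5 µl template volume below is an assumption for illustration, not a value from the study:

```python
from math import exp

def p_template_present(copies_per_ul, template_volume_ul):
    """Probability that a qPCR reaction receives >= 1 target copy,
    assuming copies are Poisson-distributed in the template aliquot."""
    lam = copies_per_ul * template_volume_ul  # expected copies per reaction
    return 1.0 - exp(-lam)
```

At 0.5 copies/µl with an assumed 5 µl template, the reaction contains at least one copy about 92% of the time, so a high detection probability at sub-copy-per-µl concentrations is plausible.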
False Position, Double False Position and Cramer's Rule
ERIC Educational Resources Information Center
Boman, Eugene
2009-01-01
We state and prove the methods of False Position (Regula Falsa) and Double False Position (Regula Duorum Falsorum). The history of both is traced from ancient Egypt and China through the work of Fibonacci, ending with a connection between Double False Position and Cramer's Rule.
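Double False Position solves any linear equation f(x) = 0 exactly from two guesses, via the same ratio of 2x2 determinants that appears in Cramer's Rule. A minimal sketch:

```python
def double_false_position(f, x1, x2):
    """Exact root of a linear f from two trial values ("false positions").
    The formula is a ratio of 2x2 determinants, as in Cramer's Rule."""
    f1, f2 = f(x1), f(x2)
    return (x1 * f2 - x2 * f1) / (f2 - f1)
```

For f(x) = 3x - 12, any two distinct guesses recover the root 4 in one step; for nonlinear f, the same update applied repeatedly to a bracketing pair is the Regula Falsi root-finding iteration.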
A statistical method for the detection of variants from next-generation resequencing of DNA pools.
Bansal, Vikas
2010-06-15
Next-generation sequencing technologies have enabled the sequencing of several human genomes in their entirety. However, the routine resequencing of complete genomes remains infeasible. The massive capacity of next-generation sequencers can be harnessed for sequencing specific genomic regions in hundreds to thousands of individuals. Sequencing-based association studies are currently limited by the low level of multiplexing offered by sequencing platforms. Pooled sequencing represents a cost-effective approach for studying rare variants in large populations. To utilize the power of DNA pooling, it is important to accurately identify sequence variants from pooled sequencing data. Detection of rare variants from pooled sequencing represents a different challenge than detection of variants from individual sequencing. We describe a novel statistical approach, CRISP [Comprehensive Read analysis for Identification of Single Nucleotide Polymorphisms (SNPs) from Pooled sequencing] that is able to identify both rare and common variants by using two approaches: (i) comparing the distribution of allele counts across multiple pools using contingency tables and (ii) evaluating the probability of observing multiple non-reference base calls due to sequencing errors alone. Information about the distribution of reads between the forward and reverse strands and the size of the pools is also incorporated within this framework to filter out false variants. Validation of CRISP on two separate pooled sequencing datasets generated using the Illumina Genome Analyzer demonstrates that it can detect 80-85% of SNPs identified using individual sequencing while achieving a low false discovery rate (3-5%). Comparison with previous methods for pooled SNP detection demonstrates the significantly lower false positive and false negative rates for CRISP. Implementation of this method is available at http://polymorphism.scripps.edu/~vbansal/software/CRISP/.
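CRISP's second criterion, the probability of observing the non-reference base calls through sequencing errors alone, is an upper binomial tail. A simplified sketch of that idea (not the actual CRISP implementation; the error rate is illustrative):

```python
from math import comb

def p_alt_calls_by_error(alt, depth, error_rate=0.01):
    """P(>= alt non-reference base calls out of depth reads under
    sequencing error alone): an upper binomial tail."""
    return sum(comb(depth, k) * error_rate**k * (1 - error_rate)**(depth - k)
               for k in range(alt, depth + 1))
```

A site where this tail probability is tiny, consistently across pools and across both strands, is unlikely to be a sequencing artifact and becomes a variant candidate.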
Bandyopadhyay, Sanghamitra; Mitra, Ramkrishna
2009-10-15
Prediction of microRNA (miRNA) target mRNAs using machine learning approaches is an important area of research. However, most of the methods suffer from either high false positive or false negative rates. One reason for this is the marked deficiency of negative examples or miRNA non-target pairs. Systematic identification of non-target mRNAs is still not addressed properly, and therefore, current machine learning approaches are compelled to rely on artificially generated negative examples for training. In this article, we have identified approximately 300 tissue-specific negative examples using a novel approach that involves expression profiling of both miRNAs and mRNAs, miRNA-mRNA structural interactions and seed-site conservation. The newly generated negative examples are validated with the pSILAC dataset, which elucidates the fact that the identified non-targets are indeed non-targets. These high-throughput tissue-specific negative examples and a set of experimentally verified positive examples are then used to build a system called TargetMiner, a support vector machine (SVM)-based classifier. In addition to assessing the prediction accuracy on cross-validation experiments, TargetMiner has been validated with a completely independent experimental test dataset. Our method outperforms 10 existing target prediction algorithms and provides a good balance between sensitivity and specificity that is not reflected in the existing methods. We achieve a significantly higher sensitivity and specificity of 69% and 67.8% based on a pool of 90 features, and 76.5% and 66.1% using a set of 30 selected features, on the completely independent test dataset. In order to establish the effectiveness of the systematically generated negative examples, the SVM is trained using a different set of negative data generated using the method in Yousef et al.
A significantly higher false positive rate (70.6%) is observed when tested on the independent set, while all other factors are kept the same. Again, when an existing method (NBmiRTar) is executed with our proposed negative data, we observe an improvement in its performance. These results clearly establish the effectiveness of the proposed approach of selecting the negative examples systematically. TargetMiner is now available as an online tool at www.isical.ac.in/~bioinfo_miu
Yan, Liying; Huang, Lei; Xu, Liya; Huang, Jin; Ma, Fei; Zhu, Xiaohui; Tang, Yaqiong; Liu, Mingshan; Lian, Ying; Liu, Ping; Li, Rong; Lu, Sijia; Tang, Fuchou; Qiao, Jie; Xie, X Sunney
2015-12-29
In vitro fertilization (IVF), preimplantation genetic diagnosis (PGD), and preimplantation genetic screening (PGS) help patients to select embryos free of monogenic diseases and aneuploidy (chromosome abnormality). Next-generation sequencing (NGS) methods, while experiencing a rapid cost reduction, have improved the precision of PGD/PGS. However, the precision of PGD has been limited by the false-positive and false-negative single-nucleotide variations (SNVs), which are not acceptable in IVF and can be circumvented by linkage analyses, such as short tandem repeats or karyomapping. It is noteworthy that existing methods of detecting SNV/copy number variation (CNV) and linkage analysis often require separate procedures for the same embryo. Here we report an NGS-based PGD/PGS procedure that can simultaneously detect a single-gene disorder and aneuploidy and is capable of linkage analysis in a cost-effective way. This method, called "mutated allele revealed by sequencing with aneuploidy and linkage analyses" (MARSALA), involves multiple annealing and looping-based amplification cycles (MALBAC) for single-cell whole-genome amplification. Aneuploidy is determined by CNVs, whereas SNVs associated with the monogenic diseases are detected by PCR amplification of the MALBAC product. The false-positive and -negative SNVs are avoided by an NGS-based linkage analysis. Two healthy babies, free of the monogenic diseases of their parents, were born after such embryo selection. The monogenic diseases originated from a single base mutation on the autosome and the X-chromosome of the disease-carrying father and mother, respectively.
Piwowar-Manning, Estelle; Fogel, Jessica M.; Richardson, Paul; Wolf, Shauna; Clarke, William; Marzinke, Mark A.; Fiamma, Agnès; Donnell, Deborah; Kulich, Michal; Mbwambo, Jessie K.K.; Richter, Linda; Gray, Glenda; Sweat, Michael; Coates, Thomas J.; Eshleman, Susan H.
2015-01-01
Background Fourth-generation HIV assays detect both antigen and antibody, facilitating detection of acute/early HIV infection. The Bio-Rad GS HIV Combo Ag/Ab assay (Bio-Rad Combo) is an enzyme immunoassay that simultaneously detects HIV p24 antigen and antibodies to HIV-1 and HIV-2 in serum or plasma. Objective To evaluate the performance of the Bio-Rad Combo assay for detection of HIV infection in adults from Southern Africa. Study design Samples were obtained from adults in Soweto and Vulindlela, South Africa and Dar es Salaam, Tanzania (300 HIV-positive samples; 300 HIV-negative samples; 12 samples from individuals previously classified as having acute/early HIV infection). The samples were tested with the Bio-Rad Combo assay. Additional testing was performed to characterize the 12 acute/early samples. Results All 300 HIV-positive samples were reactive using the Bio-Rad Combo assay; false positive test results were obtained for 10 (3.3%) of the HIV-negative samples (sensitivity: 100%, 95% confidence interval [CI]: 98.8–100%; specificity: 96.7%, 95% CI: 94.0–98.4%). The assay detected 10 of the 12 infections classified as acute/early. The two infections that were not detected had viral loads < 400 copies/mL; one of those samples contained antiretroviral drugs consistent with antiretroviral therapy. Conclusions The Bio-Rad Combo assay correctly classified the majority of study specimens. The specificity reported here may be higher than that seen in other settings, since HIV-negative samples were pre-screened using a different fourth-generation test. The assay also had high sensitivity for detection of acute/early infection. False-negative test results may be obtained in individuals who are virally suppressed. PMID:25542477
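The sensitivity and specificity figures above follow directly from the reported counts (300/300 reactive positives; 10 false positives among 300 negatives). A minimal sketch of how such point estimates and 95% confidence intervals could be reproduced, using the exact Clopper-Pearson lower bound for the all-successes case and a Wilson score interval as a stand-in approximation elsewhere (the counts are taken from the abstract; the authors' exact CI method is not stated):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion (approximation)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def clopper_pearson_lower_at_full(n, alpha=0.05):
    """Exact lower 95% bound when all n trials succeed: (alpha/2)**(1/n)."""
    return (alpha / 2) ** (1 / n)

# Counts reported in the abstract
sens = 300 / 300                 # all HIV-positive samples reactive
spec = (300 - 10) / 300          # 10 false positives among 300 negatives

print(f"sensitivity = {sens:.1%}, exact CI lower = "
      f"{clopper_pearson_lower_at_full(300):.1%}")   # ~98.8%, as reported
lo, hi = wilson_ci(290, 300)
print(f"specificity = {spec:.1%}, Wilson CI = {lo:.1%}-{hi:.1%}")
```

The Wilson interval gives roughly 94.0–98.2%, close to the exact 94.0–98.4% reported in the abstract.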
NASA Astrophysics Data System (ADS)
Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.
2014-12-01
Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to terabytes of data, typical in these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of the Continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Fields (CRF), which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.
The distinctiveness heuristic in false recognition and false recall.
McCabe, David P; Smith, Anderson D
2006-07-01
The effects of generative processing on false recognition and recall were examined in four experiments using the Deese-Roediger-McDermott false memory paradigm (Deese, 1959; Roediger & McDermott, 1995). In each experiment, a Generate condition in which subjects generated studied words from audio anagrams was compared to a Control condition in which subjects simply listened to studied words presented normally. Rates of false recognition and false recall were lower for critical lures associated with generated lists than for critical lures associated with control lists, but only in between-subjects designs. False recall and recognition did not differ when generate and control conditions were manipulated within-subjects. This pattern of results is consistent with the distinctiveness heuristic (Schacter, Israel, & Racine, 1999), a metamemorial decision-based strategy whereby global changes in decision criteria lead to reductions of false memories. This retrieval-based monitoring mechanism appears to operate in a similar fashion in reducing false recognition and false recall.
Pathway analysis with next-generation sequencing data.
Zhao, Jinying; Zhu, Yun; Boerwinkle, Eric; Xiong, Momiao
2015-04-01
Although pathway analysis methods have been developed and successfully applied to association studies of common variants, the statistical methods for pathway-based association analysis of rare variants have not been well developed. Many investigators observed highly inflated false-positive rates and low power in pathway-based tests of association of rare variants. The inflated false-positive rates and low true-positive rates of the current methods are mainly due to their lack of ability to account for gametic phase disequilibrium. To overcome these serious limitations, we develop a novel statistic that is based on the smoothed functional principal component analysis (SFPCA) for pathway association tests with next-generation sequencing data. The developed statistic has the ability to capture position-level variant information and account for gametic phase disequilibrium. By intensive simulations, we demonstrate that the SFPCA-based statistic for testing pathway association with either rare or common or both rare and common variants has the correct type 1 error rates. The power of the SFPCA-based statistic and of 22 additional existing statistics is also evaluated. We found that the SFPCA-based statistic has a much higher power than the other existing statistics in all the scenarios considered. To further evaluate its performance, the SFPCA-based statistic is applied to pathway analysis of exome sequencing data in the early-onset myocardial infarction (EOMI) project. We identify three pathways significantly associated with EOMI after the Bonferroni correction. In addition, our preliminary results show that the SFPCA-based statistic has much smaller P-values to identify pathway association than other existing methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana Kelly; Kurt Vedros; Robert Youngblood
This paper examines false indication probabilities in the context of the Mitigating System Performance Index (MSPI), in order to investigate the pros and cons of different approaches to resolving two coupled issues: (1) sensitivity to the prior distribution used in calculating the Bayesian-corrected unreliability contribution to the MSPI, and (2) whether (in a particular plant configuration) to model the fuel oil transfer pump (FOTP) as a separate component, or integrally to its emergency diesel generator (EDG). False indication probabilities were calculated for the following situations: (1) all component reliability parameters at their baseline values, so that the true indication is green, meaning that an indication of white or above would be false positive; (2) one or more components degraded to the extent that the true indication would be (mid) white, and "false" would be green (negative) or yellow (negative) or red (negative). In key respects, this was the approach taken in NUREG-1753. The prior distributions examined were the constrained noninformative (CNI) prior used currently by the MSPI, a mixture of conjugate priors, the Jeffreys noninformative prior, a nonconjugate log(istic)-normal prior, and the minimally informative prior investigated in (Kelly et al., 2010). The mid-white performance state was set at ΔCDF = 1 × 10^-6/yr. For each simulated time history, a check is made of whether the calculated ΔCDF is above or below 10^-6/yr. If the parameters were at their baseline values, and ΔCDF > 10^-6/yr, this is counted as a false positive. Conversely, if one or all of the parameters are set to values corresponding to ΔCDF > 10^-6/yr but that time history's ΔCDF < 10^-6/yr, this is counted as a false negative indication. The false indication (positive or negative) probability is then estimated as the number of false positive or negative counts divided by the number of time histories (100,000).
Results are presented for a set of base case parameter values, and for three sensitivity cases in which the number of FOTP demands was reduced, along with the Birnbaum importance of the FOTP.
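The counting scheme described above (simulate many time histories, compare each history's ΔCDF to the 10^-6/yr threshold, and divide the false counts by the number of histories) can be illustrated with a deliberately simplified toy model. The lognormal noise, its spread, and the baseline/degraded ΔCDF values below are illustrative assumptions, not the MSPI model:

```python
import math
import random

def false_indication_rate(true_dcdf, sigma, threshold=1e-6,
                          degraded=False, n_histories=20_000, seed=1):
    """Toy Monte Carlo: draw noisy ΔCDF estimates around a true value and
    count how often the indicated color disagrees with the true state."""
    rng = random.Random(seed)
    false_count = 0
    for _ in range(n_histories):
        est = true_dcdf * math.exp(rng.gauss(0.0, sigma))  # lognormal noise
        if degraded:
            false_count += est < threshold   # truly white, indicated green
        else:
            false_count += est > threshold   # truly green, indicated white
    return false_count / n_histories

# Baseline plant: true ΔCDF well below the white threshold
fp = false_indication_rate(true_dcdf=5e-7, sigma=0.5)
# Degraded plant: true ΔCDF above the threshold
fn = false_indication_rate(true_dcdf=2e-6, sigma=0.5, degraded=True)
print(f"false positive ~ {fp:.3f}, false negative ~ {fn:.3f}")
```

With these toy settings both rates land near the analytic tail probability of about 8%; the paper's actual rates depend on the plant model and the prior distribution chosen.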
Follow up of stellar migrants from globular clusters using the Hobby-Eberly Telescope
NASA Astrophysics Data System (ADS)
Shetrone, Matthew D.; Martell, Sarah L.
2017-01-01
Nearly all globular clusters contain at least two populations of stars. The first generation has abundances very similar to those of the average Milky Way halo stars at that metallicity. The second generation, presumably polluted by the massive stars of the first generation, has abundance patterns which include lower abundances of C, O, and Mg and higher abundances of N, Al and Na compared to the first generation. Martell & Grebel (2010) identified a number of potential second-generation stars using the CH and CN bandstrengths from SDSS-II/SEGUE spectra. We have followed up these candidates with moderate-resolution spectra using HRS on the Hobby-Eberly Telescope. We present the success rate of finding globular cluster migrants and discuss the reasons why some stars exhibit a false positive signal in CN and CH.
Yeo, Zhen Xuan; Wong, Joshua Chee Leong; Rozen, Steven G; Lee, Ann Siew Gek
2014-06-24
The Ion Torrent PGM is a popular benchtop sequencer that shows promise in replacing conventional Sanger sequencing as the gold standard for mutation detection. Despite the PGM's reported high accuracy in calling single nucleotide variations, it tends to generate many false positive calls in detecting insertions and deletions (indels), which may hinder its utility for clinical genetic testing. Recently, the proprietary analytical workflow for the Ion Torrent sequencer, Torrent Suite (TS), underwent a series of upgrades. We evaluated three major upgrades of TS by calling indels in the BRCA1 and BRCA2 genes. Our analysis revealed that false negative indels could be generated by TS under both default calling parameters and parameters adjusted for maximum sensitivity. However, indel calling with the same data using the open source variant callers GATK and SAMtools showed that false negatives could be minimised with the use of appropriate bioinformatics analysis. Furthermore, we identified two variant calling measures, Quality-by-Depth (QD) and VARiation of the Width of gaps and inserts (VARW), which substantially reduced false positive indels, including non-homopolymer-associated errors, without compromising sensitivity. In our best case scenario, which involved the TMAP aligner and SAMtools, we achieved 100% sensitivity, 99.99% specificity and 29% False Discovery Rate (FDR) in indel calling from all 23 samples, which is a good performance for mutation screening using the PGM. New versions of TS, BWA and GATK have shown improvements in indel calling sensitivity and specificity over their older counterparts. However, the variant caller of TS exhibits a lower sensitivity than GATK and SAMtools.
Our findings demonstrate that although indel calling from PGM sequences may appear to be noisy at first glance, proper computational indel calling analysis is able to maximize both the sensitivity and specificity at the single base level, paving the way for the usage of this technology for future clinical genetic testing.
Breast MR segmentation and lesion detection with cellular neural networks and 3D template matching.
Ertaş, Gökhan; Gülçür, H Ozcan; Osman, Onur; Uçan, Osman N; Tunaci, Mehtap; Dursun, Memduh
2008-01-01
A novel fully automated system is introduced to facilitate lesion detection in dynamic contrast-enhanced, magnetic resonance mammography (DCE-MRM). The system extracts breast regions from pre-contrast images using a cellular neural network, generates normalized maximum intensity-time ratio (nMITR) maps and performs 3D template matching with three layers of 12 × 12 cells to detect lesions. A breast is considered to be properly segmented when relative overlap >0.85 and misclassification rate <0.10. Sensitivity, false-positive rate per slice and per lesion are used to assess detection performance. The system was tested with a dataset of 2064 breast MR images (344 slices × 6 acquisitions over time) from 19 women containing 39 marked lesions. Ninety-seven percent of the breasts were segmented properly and all the lesions were detected correctly (detection sensitivity = 100%); however, there were some false-positive detections (31%/lesion, 10%/slice).
Predicting protein functions from redundancies in large-scale protein interaction networks
NASA Technical Reports Server (NTRS)
Samanta, Manoj Pratim; Liang, Shoudan
2003-01-01
Interpreting data from large-scale protein interaction experiments has been a challenging task because of the widespread presence of random false positives. Here, we present a network-based statistical algorithm that overcomes this difficulty and allows us to derive functions of unannotated proteins from large-scale interaction data. Our algorithm uses the insight that if two proteins share a significantly larger number of common interaction partners than expected at random, they have close functional associations. Analysis of publicly available data from Saccharomyces cerevisiae reveals >2,800 reliable functional associations, 29% of which involve at least one unannotated protein. By further analyzing these associations, we derive tentative functions for 81 unannotated proteins with high certainty. Our method is not overly sensitive to the false positives present in the data. Even after adding 50% randomly generated interactions to the measured data set, we are able to recover almost all (approximately 89%) of the original associations.
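The core statistical insight above (two proteins sharing more common partners than chance predicts) is commonly quantified with a hypergeometric tail probability. A minimal sketch under that standard formulation (the counts below are made-up toy values, not the yeast data, and the paper's exact statistic may differ in detail):

```python
from math import comb

def shared_partner_pvalue(n_proteins, deg_a, deg_b, shared):
    """P(X >= shared), where X is the number of common partners of two
    proteins with degrees deg_a and deg_b in a network of n_proteins,
    under random partner assignment (hypergeometric tail)."""
    total = comb(n_proteins, deg_b)
    tail = sum(comb(deg_a, i) * comb(n_proteins - deg_a, deg_b - i)
               for i in range(shared, min(deg_a, deg_b) + 1))
    return tail / total

# Toy example: 10 proteins; degrees 4 and 3; 2 partners in common
p = shared_partner_pvalue(10, 4, 3, 2)
print(f"p-value = {p:.4f}")   # 1/3: sharing 2 of 3 partners is unsurprising here
```

A small p-value flags a pair whose overlap is unlikely under randomness, which is the signal used to infer close functional association.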
He, Dengchao; Zhang, Hongjun; Hao, Wenning; Zhang, Rui; Cheng, Kai
2017-07-01
Distant supervision, a widely applied approach in the field of relation extraction, can automatically generate large amounts of labeled training corpus with minimal manual effort. However, the labeled training corpus may have many false-positive data, which would hurt the performance of relation extraction. Moreover, in traditional feature-based distant supervised approaches, extraction models adopt human-designed features with natural language processing. This may also cause poor performance. To address these two shortcomings, we propose a customized attention-based long short-term memory network. Our approach adopts word-level attention to achieve better data representation for relation extraction without manually designed features to perform distant supervision instead of fully supervised relation extraction, and it utilizes instance-level attention to tackle the problem of false-positive data. Experimental results demonstrate that our proposed approach is effective and achieves better performance than traditional methods.
Optimization of OT-MACH Filter Generation for Target Recognition
NASA Technical Reports Server (NTRS)
Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak-to-sidelobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter more quickly and reliably than the current manual method. Initial usage and testing have shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, and gamma values. This corresponded to a substantial improvement in detection performance, where the true positive rate increased for the same average number of false positives per image.
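The tuning loop described above (adjust alpha, beta, and gamma by adaptive gradient steps on a composite performance score) can be sketched with numerical gradients on a stand-in objective; the quadratic surrogate below is a hypothetical placeholder for the real correlation-peak feedback, and the step-growth/shrink factors are illustrative choices:

```python
def composite_score(params):
    """Hypothetical stand-in for peak-height / peak-to-sidelobe feedback:
    a smooth surface with its optimum at alpha=0.3, beta=0.5, gamma=0.8."""
    alpha, beta, gamma = params
    return -((alpha - 0.3) ** 2 + (beta - 0.5) ** 2 + (gamma - 0.8) ** 2)

def adaptive_gradient_ascent(f, start, step=0.5, iters=200, eps=1e-4):
    params = list(start)
    best = f(params)
    for _ in range(iters):
        grad = []
        for i in range(len(params)):     # forward-difference numerical gradient
            bumped = params[:]
            bumped[i] += eps
            grad.append((f(bumped) - best) / eps)
        trial = [p + step * g for p, g in zip(params, grad)]
        if f(trial) > best:              # improvement: accept and grow the step
            params, best, step = trial, f(trial), step * 1.2
        else:                            # no improvement: shrink the step
            step *= 0.5
    return params

opt = adaptive_gradient_ascent(composite_score, [0.0, 0.0, 0.0])
print([round(p, 2) for p in opt])        # approaches [0.3, 0.5, 0.8]
```

The accept/shrink logic is what makes the step size "adaptive": it grows while progress is made and decays near the optimum, avoiding a hand-tuned learning rate.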
[Evaluation of quality of HIV diagnostic procedures in Poland].
Parczewski, Miłosz; Madaliński, Kazimierz; Leszczyszyn-Pynka, Magdalena; Boroń-Kaczmarska, Anna
2010-01-01
The aim of this work was quality assessment of HIV diagnostic procedures in Poland, including human and technical resources as well as laboratory practice. Sixty questionnaires were distributed among diagnostic centers to obtain qualitative data. Based on the survey data, serological quality control using coded panels of HIV-1/2 samples was performed. Thirty-one filled questionnaires were received (50.8%). Surveyed laboratories perform from 350 to 5500 serological screening tests per year. In most laboratories fourth-generation assays are available, while Blood Donation Centers screen blood both with serological assays and by HIV-RNA detection. Sanitary and Epidemiological Stations and academic laboratories hold ISO/IEC 17025 or ISO 9001:2001 accreditation; five of the surveyed centers participate in Labquality assurance programs and two in Quality Control in Molecular Diagnostics programs. Data from control serological testing were received from 21 centers. In the quality control assessment, 194 analyses were performed, with 91 true negative, 2 false negative, 96 true positive and 5 false positive results. A false negative rate of % and a false positive rate of 5.2% were noted for this study. Currently, virtually no guidelines on HIV diagnostics quality assurance and control are delineated in Poland. Development of a national unified quality control system, based on a central institution, is highly desirable. National certification within the frame of a quality control and assurance program should be mandatory for all diagnostic labs, and should aim at improving the reliability of the results distributed to clinicians and patients.
Daluwatte, Chathuri; Vicente, Jose; Galeotti, Loriano; Johannesen, Lars; Strauss, David G; Scully, Christopher G
Performance of ECG beat detectors is traditionally assessed on long intervals (e.g., 30 min), but only incorrect detections within a short interval (e.g., 10 s) may cause incorrect (i.e., missed + false) heart rate limit alarms (tachycardia and bradycardia). We propose a novel performance metric based on the distribution of incorrect beat detections over a short interval and assess its relationship with incorrect heart rate limit alarm rates. Six ECG beat detectors were assessed using performance metrics over a long interval (sensitivity and positive predictive value over 30 min) and a short interval (area under the empirical cumulative distribution function (AUecdf) for short-interval (i.e., 10 s) sensitivity and positive predictive value) on two ECG databases. False heart rate limit and asystole alarm rates calculated using a third ECG database were then correlated (Spearman's rank correlation) with each calculated performance metric. False alarm rates correlated with sensitivity calculated on the long interval (i.e., 30 min) (ρ = -0.8, p < 0.05) and with AUecdf for sensitivity (ρ = 0.9, p < 0.05) in all assessed ECG databases. Sensitivity over 30 min grouped the two detectors with the lowest false alarm rates, while AUecdf for sensitivity provided further information to identify the two beat detectors with the highest false alarm rates as well, which could not be separated using sensitivity over 30 min alone. Short-interval performance metrics can provide insights on the potential of a beat detector to generate incorrect heart rate limit alarms. Published by Elsevier Inc.
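The short-interval metric above summarizes how per-window performance is distributed rather than averaged. A minimal sketch of computing the area under the empirical CDF of 10-s window sensitivities (the window values are invented for illustration, and the original metric's definition may differ in detail):

```python
def auecdf(values):
    """Area under the empirical CDF of per-window scores on [0, 1].
    A smaller area means the scores are concentrated near 1 (few bad windows)."""
    vals = sorted(values)
    n = len(vals)
    area, prev = 0.0, 0.0
    for i, v in enumerate(vals):
        area += (i / n) * (v - prev)   # ECDF equals i/n on [prev, v)
        prev = v
    return area + (1.0 - prev)         # ECDF equals 1 beyond the largest value

# Per-10-s-window sensitivities for two hypothetical detectors
steady = [1.0, 1.0, 0.95, 1.0, 0.9]    # consistently good
bursty = [1.0, 1.0, 1.0, 1.0, 0.25]    # one bad burst of missed beats
print(auecdf(steady), auecdf(bursty))  # the bursty detector has the larger area
```

This is why the metric separates detectors that a 30-min average cannot: a single bad 10-s window (the kind that triggers a false alarm) barely moves the long-interval sensitivity but visibly inflates the ECDF area.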
Precision and recall estimates for two-hybrid screens
Huang, Hailiang; Bader, Joel S.
2009-01-01
Motivation: Yeast two-hybrid screens are an important method to map pairwise protein interactions. This method can generate spurious interactions (false discoveries), and true interactions can be missed (false negatives). Previously, we reported a capture–recapture estimator for bait-specific precision and recall. Here, we present an improved method that better accounts for heterogeneity in bait-specific error rates. Result: For yeast, worm and fly screens, we estimate the overall false discovery rates (FDRs) to be 9.9%, 13.2% and 17.0% and the false negative rates (FNRs) to be 51%, 42% and 28%. Bait-specific FDRs and the estimated protein degrees are then used to identify protein categories that yield more (or fewer) false positive interactions and more (or fewer) interaction partners. While membrane proteins have been suggested to have elevated FDRs, the current analysis suggests that intrinsic membrane proteins may actually have reduced FDRs. Hydrophobicity is positively correlated with decreased error rates and fewer interaction partners. These methods will be useful for future two-hybrid screens, which could use ultra-high-throughput sequencing for deeper sampling of interacting bait–prey pairs. Availability: All software (C source) and datasets are available as supplemental files and at http://www.baderzone.org under the Lesser GPL v. 3 license. Contact: joel.bader@jhu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19091773
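The capture–recapture idea behind these precision and recall estimates can be illustrated with the classic two-sample Lincoln–Petersen estimator, a much simpler stand-in for the bait-specific, heterogeneity-aware model in the paper (the screen counts are toy values):

```python
def lincoln_petersen(n_a, n_b, n_both):
    """Estimate the total number of true interactions from two screens:
    screen A finds n_a, screen B finds n_b, and n_both are found by both.
    The overlap reveals how much of the true set each screen recovers."""
    total = n_a * n_b / n_both   # classic abundance estimate
    recall_a = n_both / n_b      # equivalently n_a / total
    recall_b = n_both / n_a
    return total, recall_a, recall_b

total, ra, rb = lincoln_petersen(n_a=50, n_b=40, n_both=20)
print(total, ra, rb)   # 100 estimated true interactions; recalls 0.5 and 0.4
```

The estimated recall (false negative rate = 1 - recall) falls directly out of the overlap, which is the same leverage the paper's bait-specific estimator exploits at finer granularity.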
Comparison of normalization methods for the analysis of metagenomic gene abundance data.
Pereira, Mariana Buongermino; Wallroth, Mikael; Jonsson, Viktor; Kristiansson, Erik
2018-04-20
In shotgun metagenomics, microbial communities are studied through direct sequencing of DNA without any prior cultivation. By comparing gene abundances estimated from the generated sequencing reads, functional differences between the communities can be identified. However, gene abundance data is affected by high levels of systematic variability, which can greatly reduce the statistical power and introduce false positives. Normalization, which is the process where systematic variability is identified and removed, is therefore a vital part of the data analysis. A wide range of normalization methods for high-dimensional count data has been proposed, but their performance on the analysis of shotgun metagenomic data has not been evaluated. Here, we present a systematic evaluation of nine normalization methods for gene abundance data. The methods were evaluated through resampling of three comprehensive datasets, creating a realistic setting that preserved the unique characteristics of metagenomic data. Performance was measured in terms of the methods' ability to identify differentially abundant genes (DAGs), correctly calculate unbiased p-values and control the false discovery rate (FDR). Our results showed that the choice of normalization method has a large impact on the end results. When the DAGs were asymmetrically present between the experimental conditions, many normalization methods had a reduced true positive rate (TPR) and a high false positive rate (FPR). The methods trimmed mean of M-values (TMM) and relative log expression (RLE) had the overall highest performance and are therefore recommended for the analysis of gene abundance data. For larger sample sizes, CSS also showed satisfactory performance. This study emphasizes the importance of selecting a suitable normalization method in the analysis of data from shotgun metagenomics.
Our results also demonstrate that improper methods may result in unacceptably high levels of false positives, which in turn may lead to incorrect or obfuscated biological interpretation.
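The relative log expression (RLE) approach recommended above can be sketched with the standard median-of-ratios size factor computation (the two-sample count matrix is a made-up toy; the per-gene reference is the geometric mean across samples):

```python
import math

def rle_size_factors(counts):
    """counts[g][s]: reads for gene g in sample s. Returns one scaling
    factor per sample: the median over genes of count / geometric mean."""
    n_samples = len(counts[0])
    refs = [math.exp(sum(math.log(c) for c in gene) / n_samples)
            for gene in counts]          # per-gene geometric mean reference
    factors = []
    for s in range(n_samples):
        ratios = sorted(counts[g][s] / refs[g] for g in range(len(counts)))
        mid = len(ratios) // 2
        median = ratios[mid] if len(ratios) % 2 else (ratios[mid - 1] + ratios[mid]) / 2
        factors.append(median)
    return factors

# Toy data: sample 2 is exactly a 2x-deeper sequencing of sample 1
counts = [[10, 20], [30, 60], [50, 100]]
f1, f2 = rle_size_factors(counts)
print(f2 / f1)   # the size factors recover the 2x depth difference
```

Using the median (rather than the mean) of the ratios is what keeps the estimate stable when a minority of genes are genuinely differentially abundant, which is the asymmetric-DAG setting the evaluation highlights.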
Yu, Jingkai; Finley, Russell L
2009-01-01
High-throughput experimental and computational methods are generating a wealth of protein-protein interaction data for a variety of organisms. However, data produced by current state-of-the-art methods include many false positives, which can hinder the analyses needed to derive biological insights. One way to address this problem is to assign confidence scores that reflect the reliability and biological significance of each interaction. Most previously described scoring methods use a set of likely true positives to train a model to score all interactions in a dataset. A single positive training set, however, may be biased and not representative of true interaction space. We demonstrate a method to score protein interactions by utilizing multiple independent sets of training positives to reduce the potential bias inherent in using a single training set. We used a set of benchmark yeast protein interactions to show that our approach outperforms other scoring methods. Our approach can also score interactions across data types, which makes it more widely applicable than many previously proposed methods. We applied the method to protein interaction data from both Drosophila melanogaster and Homo sapiens. Independent evaluations show that the resulting confidence scores accurately reflect the biological significance of the interactions.
Samanipour, Saer; Baz-Lomba, Jose A; Alygizakis, Nikiforos A; Reid, Malcolm J; Thomaidis, Nikolaos S; Thomas, Kevin V
2017-06-09
LC-HR-QTOF-MS has recently become a commonly used approach for the analysis of complex samples. However, identification of small organic molecules in complex samples with the highest level of confidence is a challenging task. Here we report on the implementation of a two-stage algorithm for LC-HR-QTOF-MS datasets. We compared the performance of the two-stage algorithm, implemented via NIVA_MZ_Analyzer™, with two commonly used approaches (i.e., feature detection and XIC peak picking, implemented via UNIFI by Waters and TASQ by Bruker, respectively) for the suspect analysis of four influent wastewater samples. We first evaluated the cross-platform compatibility of LC-HR-QTOF-MS datasets generated via instruments from two different manufacturers (i.e., Waters and Bruker). Our data showed that with an appropriate spectral weighting function the spectra recorded by the two tested instruments are comparable for our analytes. As a consequence, we were able to perform full spectral comparison between the data generated via the two studied instruments. Four extracts of wastewater influent were analyzed for 89 analytes, giving 356 detection cases. The analytes were divided into 158 detection cases of artificial suspect analytes (i.e., verified by target analysis) and 198 true suspects. The two-stage algorithm resulted in a zero rate of false positive detection, based on the artificial suspect analytes, while producing a rate of false negative detection of 0.12. For the conventional approaches, the rates of false positive detection varied between 0.06 for UNIFI and 0.15 for TASQ. The rates of false negative detection for these methods ranged between 0.07 for TASQ and 0.09 for UNIFI. The effect of background signal complexity on the two-stage algorithm was evaluated through the generation of a synthetic signal. We further discuss the boundaries of applicability of the two-stage algorithm.
The importance of background knowledge and experience in evaluating the reliability of results during the suspect screening was evaluated. Copyright © 2017 Elsevier B.V. All rights reserved.
Serological Diagnosis of Chronic Chagas Disease: Is It Time for a Change?
Abras, Alba; Gállego, Montserrat; Llovet, Teresa; Tebar, Silvia; Herrero, Mercedes; Berenguer, Pere; Ballart, Cristina; Martí, Carmen
2016-01-01
Chagas disease has spread to areas that are nonendemic for the disease with human migration. Since no single reference standard test is available, serological diagnosis of chronic Chagas disease requires at least two tests. New-generation techniques have significantly improved the accuracy of Chagas disease diagnosis by the use of a large mixture of recombinant antigens with different detection systems, such as chemiluminescence. The aim of the present study was to assess the overall accuracy of a new-generation kit, the Architect Chagas (cutoff, ≥1 sample relative light units/cutoff value [S/CO]), as a single technique for the diagnosis of chronic Chagas disease. The Architect Chagas showed a sensitivity of 100% (95% confidence interval [CI], 99.5 to 100%) and a specificity of 97.6% (95% CI, 95.2 to 99.9%). Five out of six false-positive serum samples were a consequence of cross-reactivity with Leishmania spp., and all of them achieved results of <5 S/CO. We propose the Architect Chagas as a single technique for screening in blood banks and for routine diagnosis in clinical laboratories. Only gray-zone and positive sera with a result of ≤6 S/CO would need to be confirmed by a second serological assay, thus avoiding false-positive sera and the problem of cross-reactivity with Leishmania species. The application of this proposal would result in important savings in the cost of Chagas disease diagnosis and therefore in the management and control of the disease. PMID:27053668
Jones, Andrew R.; Siepen, Jennifer A.; Hubbard, Simon J.; Paton, Norman W.
2010-01-01
Tandem mass spectrometry, run in combination with liquid chromatography (LC-MS/MS), can generate large numbers of peptide and protein identifications, for which a variety of database search engines are available. Distinguishing correct identifications from false positives is far from trivial because all data sets are noisy, and tend to be too large for manual inspection; therefore probabilistic methods must be employed to balance the trade-off between sensitivity and specificity. Decoy databases are becoming widely used to place statistical confidence in results sets, allowing the false discovery rate (FDR) to be estimated. It has previously been demonstrated that different MS search engines produce different peptide identification sets, and as such, employing more than one search engine could result in an increased number of peptides being identified. However, such efforts are hindered by the lack of a single scoring framework employed by all search engines. We have developed a search-engine-independent scoring framework based on FDR which allows peptide identifications from different search engines to be combined, called the FDRScore. We observe that peptide identifications made by three search engines are infrequently false positives, and identifications made by only a single search engine, even with a strong score from the source search engine, are significantly more likely to be false positives. We have developed a second score based on the FDR within peptide identifications grouped according to the set of search engines that have made the identification, called the combined FDRScore. We demonstrate by searching large publicly available data sets that the combined FDRScore can differentiate between correct and incorrect peptide identifications with high accuracy, allowing on average 35% more peptide identifications to be made at a fixed FDR than using a single search engine. PMID:19253293
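The decoy-database FDR estimation underlying the FDRScore can be sketched as follows: peptide-spectrum matches are sorted by score, and at each threshold the FDR is estimated as the decoy count over the target count. The PSM list below is a toy example, and the actual FDRScore goes further by combining these estimates across search engines:

```python
def decoy_fdr_curve(psms):
    """psms: list of (score, is_decoy), higher score = better.
    Returns (score, estimated_fdr) at each target hit while walking
    down the ranking; est. FDR = decoys seen / targets seen so far."""
    curve = []
    targets = decoys = 0
    for score, is_decoy in sorted(psms, key=lambda p: -p[0]):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
            curve.append((score, decoys / targets))
    return curve

psms = [(9.1, False), (8.7, False), (8.0, True), (7.5, False),
        (6.2, False), (5.9, True), (5.5, False)]
for score, fdr in decoy_fdr_curve(psms):
    print(f"score >= {score}: estimated FDR = {fdr:.2f}")
```

Choosing the lowest score threshold whose estimated FDR stays under the target (e.g., 1%) is the standard way such a curve is used to fix the reported identification set.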
State-of-the-Art Fusion-Finder Algorithms Sensitivity and Specificity
Carrara, Matteo; Beccuti, Marco; Lazzarato, Fulvio; Cavallo, Federica; Cordero, Francesca; Donatelli, Susanna; Calogero, Raffaele A.
2013-01-01
Background. Gene fusions arising from chromosomal translocations have been implicated in cancer. RNA-seq has the potential to discover such rearrangements generating functional proteins (chimera/fusion). Recently, many methods for chimera detection have been published. However, the specificity and sensitivity of those tools have not been extensively investigated in a comparative way. Results. We tested eight fusion-detection tools (FusionHunter, FusionMap, FusionFinder, MapSplice, deFuse, Bellerophontes, ChimeraScan, and TopHat-fusion) to detect fusion events using synthetic and real datasets encompassing chimeras. A comparative analysis run only on synthetic data could generate misleading results, since we found no counterpart in the real dataset. Furthermore, most tools report a very high number of false-positive chimeras. In particular, the most sensitive tool, ChimeraScan, reports a large number of false positives that we were able to reduce significantly by devising and applying two filters to remove fusions not supported by fusion-junction-spanning reads or encompassing large intronic regions. Conclusions. The discordant results obtained using synthetic and real datasets suggest that synthetic datasets encompassing fusion events may not fully capture the complexity of an RNA-seq experiment. Moreover, fusion-detection tools are still limited in sensitivity or specificity; thus, there is room for further improvement in fusion-finder algorithms. PMID:23555082
Crompot, Emerence; Van Damme, Michael; Duvillier, Hugues; Pieters, Karlien; Vermeesch, Marjorie; Perez-Morga, David; Meuleman, Nathalie; Mineur, Philippe; Bron, Dominique; Lagneaux, Laurence; Stamatopoulos, Basile
2015-01-01
Background Microparticles (MPs), also called microvesicles (MVs), are plasma membrane-derived fragments with sizes ranging from 0.1 to 1μm. Characterization of these MPs is often performed by flow cytometry, but there is no consensus on the appropriate negative control to use, which can lead to false-positive results. Materials and Methods We analyzed MPs from platelets, B-cells, T-cells, NK-cells, monocytes, and chronic lymphocytic leukemia (CLL) B-cells. Cells were purified by positive magnetic separation and cultured for 48h. Cells and MPs were characterized using the following monoclonal antibodies (CD19,20 for B-cells; CD3,8,5,27 for T-cells; CD16,56 for NK-cells; CD14,11c for monocytes; CD41,61 for platelets). Isolated MPs were stained with annexin-V-FITC and gated between 300nm and 900nm. The latex bead technique was then performed for easy detection of MPs. Samples were analyzed by transmission (TEM) and scanning electron microscopy (SEM). Results Annexin-V-positive events within a gate of 300-900nm were detected and defined as MPs. Our results confirmed that the characteristic antigens CD41/CD61 were found on platelet-derived MPs, validating our technique. However, for MPs derived from other cell types, we were unable to detect any antigen, although these antigens were clearly expressed on the MP-producing cells, contrary to several reports published in the literature. Using the latex bead technique, we confirmed detection of CD41,61. However, the apparent expression of other antigens (already deemed positive in several studies) was determined to be false positive, as indicated by negative controls (the same labeling was used on MPs of different origins). Conclusion We observed that mother-cell antigens were not always detected on corresponding MPs by direct flow cytometry or latex bead cytometry.
Our data highlight that false-positive results can be generated by non-specific antibody binding and that phenotypic characterization of MPs is a difficult field requiring the use of several negative controls. PMID:25978814
MGmapper: Reference based mapping and taxonomy annotation of metagenomics sequence reads
Petersen, Thomas Nordahl; Lukjancenko, Oksana; Thomsen, Martin Christen Frølund; Maddalena Sperotto, Maria; Lund, Ole; Møller Aarestrup, Frank; Sicheritz-Pontén, Thomas
2017-01-01
An increasing number of species and gene identification studies rely on next-generation sequence analysis of either single-isolate or metagenomics samples. Several methods are available to perform taxonomic annotations, and a previous metagenomics benchmark study has shown that a vast number of false-positive species annotations are a problem unless thresholds or post-processing are applied to differentiate between correct and false annotations. MGmapper is a package to process raw next-generation sequence data and perform reference-based sequence assignment, followed by a post-processing analysis to produce reliable taxonomy annotation at species- and strain-level resolution. An in-vitro bacterial mock community sample comprising 8 genera, 11 species and 12 strains was previously used to benchmark metagenomics classification methods. After applying a post-processing filter, we obtained 100% correct taxonomy assignments at species and genus level. Sensitivity and precision of 75% were obtained for strain-level annotations. A comparison between MGmapper and Kraken at species level shows that MGmapper assigns taxonomy at species level using 84.8% of the sequence reads, compared to 70.5% for Kraken, and both methods identified all species with no false positives. Extensive read-count statistics are provided in plain text and Excel sheets for both rejected and accepted taxonomy annotations. The use of custom databases is possible with the command-line version of MGmapper, and the complete pipeline is freely available as a Bitbucket package (https://bitbucket.org/genomicepidemiology/mgmapper). A web version (https://cge.cbs.dtu.dk/services/MGmapper) provides the basic functionality for analysis of small fastq datasets. PMID:28467460
NASA Astrophysics Data System (ADS)
Ha, Minsu; Nehm, Ross H.
2016-06-01
Automated computerized scoring systems (ACSSs) are being increasingly used to analyze text in many educational settings. Nevertheless, the impact of misspelled words (MSW) on scoring accuracy remains to be investigated in many domains, particularly jargon-rich disciplines such as the life sciences. Empirical studies confirm that MSW are a pervasive feature of human-generated text and that despite improvements, spell-check and auto-replace programs continue to be characterized by significant errors. Our study explored four research questions relating to MSW and text-based computer assessments: (1) Do English language learners (ELLs) produce equivalent magnitudes and types of spelling errors as non-ELLs? (2) To what degree do MSW impact concept-specific computer scoring rules? (3) What impact do MSW have on computer scoring accuracy? and (4) Are MSW more likely to impact false-positive or false-negative feedback to students? We found that although ELLs produced twice as many MSW as non-ELLs, MSW were relatively uncommon in our corpora. The MSW in the corpora were found to be important features of the computer scoring models. Although MSW did not significantly or meaningfully impact computer scoring efficacy across nine different computer scoring models, MSW had a greater impact on the scoring algorithms for naïve ideas than key concepts. Linguistic and concept redundancy in student responses explains the weak connection between MSW and scoring accuracy. Lastly, we found that MSW tend to have a greater impact on false-positive feedback. We discuss the implications of these findings for the development of next-generation science assessments.
Performance comparison of two androgen receptor splice variant 7 (AR-V7) detection methods.
Bernemann, Christof; Steinestel, Julie; Humberg, Verena; Bögemann, Martin; Schrader, Andres Jan; Lennerz, Jochen K
2018-01-23
To compare the performance of two established androgen receptor splice variant 7 (AR-V7) mRNA detection systems, as paradoxical responses to next-generation androgen-deprivation therapy in AR-V7 mRNA-positive circulating tumour cells (CTCs) of patients with castration-resistant prostate cancer (CRPC) could be related to false-positive classification by detection systems with different sensitivities. We compared the performance of two established mRNA-based AR-V7 detection technologies using either SYBR Green or TaqMan chemistries. We assessed in vitro performance using eight genitourinary cancer cell lines and serial dilutions in three AR-V7-positive prostate cancer cell lines, as well as in 32 blood samples from patients with CRPC. Both assays performed identically in the cell lines, and serial dilutions showed identical diagnostic thresholds. Performance comparison in 32 clinical patient samples showed perfect concordance between the assays. In particular, both assays detected AR-V7 mRNA-positive CTCs in three patients with unexpected responses to next-generation anti-androgen therapy. Thus, technical differences between the assays can be excluded as the underlying reason for the unexpected responses to next-generation anti-androgen therapy in a subset of AR-V7 patients. Irrespective of the method used, patients with AR-V7 mRNA-positive CRPC should not be systematically precluded from an otherwise safe treatment option. © 2018 The Authors BJU International © 2018 BJU International Published by John Wiley & Sons Ltd.
Bansal, Ravi; Peterson, Bradley S
2018-06-01
Identifying regional effects of interest in MRI datasets usually entails testing a priori hypotheses across many thousands of brain voxels, requiring control for false-positive findings across these multiple hypothesis tests. Recent studies have suggested that parametric statistical methods may have incorrectly modeled functional MRI data, thereby leading to higher false-positive rates than their nominal rates. Nonparametric methods for statistical inference when conducting multiple statistical tests, in contrast, are thought to produce false positives at the nominal rate, which has led to the suggestion that previously reported studies should reanalyze their fMRI data using nonparametric tools. To understand better why parametric methods may yield excessive false positives, we assessed their performance when applied both to simulated datasets of 1D, 2D, and 3D Gaussian random fields (GRFs) and to 710 real-world, resting-state fMRI datasets. We showed that both the simulated 2D and 3D GRFs and the real-world data contain a small percentage (<6%) of very large clusters (on average 60 times larger than the average cluster size), which were not present in 1D GRFs. These unexpectedly large clusters were deemed statistically significant using parametric methods, leading to empirical familywise error rates (FWERs) as high as 65%: the high empirical FWERs were not a consequence of parametric methods failing to model spatial smoothness accurately, but rather of these very large clusters that are inherently present in smooth, high-dimensional random fields. In fact, when discounting these very large clusters, the empirical FWER for parametric methods was 3.24%. Furthermore, even an empirical FWER of 65% would yield on average less than one of those very large clusters in each brain-wide analysis.
Nonparametric methods, in contrast, estimated distributions from those large clusters and therefore, by construction, rejected the large clusters as false positives at the nominal FWERs. Those rejected clusters were outlying values in the distribution of cluster size but cannot be distinguished from true-positive findings without further analyses, including assessing whether the fMRI signal in those regions correlates with other clinical, behavioral, or cognitive measures. Rejecting the large clusters, however, significantly reduced the statistical power of nonparametric methods in detecting true findings compared with parametric methods, which would have detected most true findings that are essential for making valid biological inferences in MRI data. Parametric analyses, in contrast, detected most true findings while generating relatively few false positives: on average, less than one of those very large clusters would be deemed a true finding in each brain-wide analysis. We therefore recommend the continued use of parametric methods that model nonstationary smoothness for cluster-level, familywise control of false positives, particularly when using a cluster-defining threshold of 2.5 or higher, and subsequently assessing rigorously the biological plausibility of the findings, even for large clusters. Finally, because nonparametric methods yielded a large reduction in statistical power to detect true-positive findings, we conclude that the modest reduction in false-positive findings that nonparametric analyses afford does not warrant a re-analysis of previously published fMRI studies using nonparametric techniques. Copyright © 2018 Elsevier Inc. All rights reserved.
Automated Plantation Mapping in Indonesia Using Remote Sensing Data
NASA Astrophysics Data System (ADS)
Karpatne, A.; Jia, X.; Khandelwal, A.; Kumar, V.
2017-12-01
Plantation mapping is critical for understanding and addressing deforestation, a key driver of climate change and ecosystem degradation. Unfortunately, most plantation maps are limited to small areas for specific years because they rely on visual inspection of imagery. In this work, we propose a data-driven approach that automatically generates yearly plantation maps for large regions using MODIS multi-spectral data. While traditional machine learning algorithms face manifold challenges in this task, e.g. imperfect training labels, spatio-temporal data heterogeneity, noisy and high-dimensional data, lack of evaluation data, etc., we introduce a novel deep learning-based framework that combines existing imperfect plantation products as training labels and models the spatio-temporal relationships of land covers. We also explore post-processing steps based on a Hidden Markov Model that further improve the detection accuracy. We then conduct extensive evaluation of the generated plantation maps. Specifically, by randomly sampling and comparing with high-resolution Digital Globe imagery, we demonstrate that the generated plantation maps achieve both high precision and high recall. When compared with existing plantation-mapping products, our detection can avoid both false positives and false negatives. Finally, we utilize the generated plantation maps in analyzing the relationship between forest fires and the growth of plantations, which assists in better understanding the causes of deforestation in Indonesia.
Statistical approaches to account for false-positive errors in environmental DNA samples.
Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid
2016-05-01
Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies. © 2015 John Wiley & Sons Ltd.
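The bias described above can be seen with a little deterministic arithmetic (the symbols psi, p and p10 follow standard occupancy-model notation, and the example values are illustrative, not taken from the paper):

```python
# Sketch: expected fraction of sites with >=1 detection over K surveys when
# occupied sites (probability psi) are detected with probability p per survey
# and unoccupied sites yield false positives with probability p10 per survey.
# A naive occupancy estimate based on "any detection" inherits this bias.
def expected_naive_occupancy(psi, p, p10, K):
    det_occ = 1 - (1 - p) ** K      # >=1 detection at an occupied site
    det_unocc = 1 - (1 - p10) ** K  # >=1 false positive at an empty site
    return psi * det_occ + (1 - psi) * det_unocc
```

With psi = 0.4, p = 0.5 and a per-survey false-positive rate of only 5%, five surveys push the naive estimate to roughly 0.52, well above the true 0.4, which is why models that explicitly separate the two error types (or calibrate against error-free ancillary data) are needed.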
Cole, Laurence A; Khanlian, Sarah A
2004-05-01
False-positive hCG results can lead to erroneous diagnoses and needless chemotherapy and surgery. In the last 2 years, eight publications described cases involving false-positive hCG tests; all eight involved the AxSym test. We investigated the source of this abundance of cases and a simple fix that may be used by clinical laboratories. False-positive hCG was primarily identified by absence of hCG in urine and varying or negative hCG results in alternative tests. Seventeen false-positive serum samples in the AxSym test were evaluated undiluted and at twofold dilution with diluent containing excess goat serum or immunoglobulin. We identified 58 patients with false-positive hCG, 47 of 58 due to the Abbott AxSym total hCGbeta test (81%). Sixteen of 17 of these "false-positive" results (mean 100 mIU/ml) became undetectable when tested again after twofold dilution. A simple twofold dilution with this diluent containing excess goat serum or immunoglobulin completely protected 16 of 17 samples from patients having false-positive results. It is recommended that laboratories using this test use twofold dilution as a minimum to prevent false-positive results.
Application of the LDM algorithm to identify small lung nodules on low-dose MSCT scans
NASA Astrophysics Data System (ADS)
Zhao, Binsheng; Ginsberg, Michelle S.; Lefkowitz, Robert A.; Jiang, Li; Cooper, Cathleen; Schwartz, Lawrence H.
2004-05-01
In this work, we present a computer-aided detection (CAD) algorithm for small lung nodules on low-dose MSCT images. With this technique, identification of potential lung nodules is carried out with a local density maximum (LDM) algorithm, followed by reduction of false positives from the nodule candidates using task-specific 2-D/3-D features along with a knowledge-based nodule inclusion/exclusion strategy. Twenty-eight MSCT scans (40/80mAs, 120kVp, 5mm collimation/2.5mm reconstruction) from our lung cancer screening program that included at least one lung nodule were selected for this study. Two radiologists independently interpreted these cases. Subsequently, a consensus reading by both radiologists and CAD was generated to define a "gold standard". In total, 165 nodules were considered as the "gold standard" (average: 5.9 nodules/case; range: 1-22 nodules/case). The two radiologists detected 146 nodules (88.5%) and CAD detected 100 nodules (60.6%) with 8.7 false positives/case. CAD detected an additional 19 nodules (6 nodules > 3mm and 13 nodules < 3mm) that had been missed by both radiologists. Preliminary results show that the CAD is capable of detecting small lung nodules with an acceptable number of false positives on low-dose MSCT scans and that it can detect nodules that are otherwise missed by radiologists, though a majority are small nodules (< 3mm).
Guess LOD approach: sufficient conditions for robustness.
Williamson, J A; Amos, C I
1995-01-01
Analysis of genetic linkage between a disease and a marker locus requires specifying a genetic model describing both the inheritance pattern and the gene frequencies of the marker and trait loci. Misspecification of the genetic model is likely for etiologically complex diseases. In previous work we have shown through analytic studies that misspecifying the genetic model for disease inheritance does not lead to excess false-positive evidence for genetic linkage provided the genetic marker alleles of all pedigree members are known, or can be inferred without bias from the data. Here, under various selection or ascertainment schemes we extend these previous results to situations in which the genetic model for the marker locus may be incorrect. We provide sufficient conditions for the asymptotic unbiased estimation of the recombination fraction under the null hypothesis of no linkage, and also conditions for the limiting distribution of the likelihood ratio test for no linkage to be chi-squared. Through simulation studies we document some situations under which asymptotic bias can result when the genetic model is misspecified. Among those situations under which an excess of false-positive evidence for genetic linkage can be generated, the most common is failure to provide accurate estimates of the marker allele frequencies. We show that in most cases false-positive evidence for genetic linkage is unlikely to result solely from the misspecification of the genetic model for disease or trait inheritance.
Samarakoon, Pubudu Saneth; Sorte, Hanne Sørmo; Stray-Pedersen, Asbjørg; Rødningen, Olaug Kristin; Rognes, Torbjørn; Lyle, Robert
2016-01-14
With advances in next generation sequencing technology and analysis methods, single nucleotide variants (SNVs) and indels can be detected with high sensitivity and specificity in exome sequencing data. Recent studies have demonstrated the ability to detect disease-causing copy number variants (CNVs) in exome sequencing data. However, exonic CNV prediction programs have shown high false positive CNV counts, which is the major limiting factor for the applicability of these programs in clinical studies. We have developed a tool (cnvScan) to improve the clinical utility of computational CNV prediction in exome data. cnvScan can accept input from any CNV prediction program. cnvScan consists of two steps: CNV screening and CNV annotation. CNV screening evaluates CNV prediction using quality scores and refines this using an in-house CNV database, which greatly reduces the false positive rate. The annotation step provides functionally and clinically relevant information using multiple source datasets. We assessed the performance of cnvScan on CNV predictions from five different prediction programs using 64 exomes from Primary Immunodeficiency (PIDD) patients, and identified PIDD-causing CNVs in three individuals from two different families. In summary, cnvScan reduces the time and effort required to detect disease-causing CNVs by reducing the false positive count and providing annotation. This improves the clinical utility of CNV detection in exome data.
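The two-stage screening idea described above can be sketched as a simple filter (the function name, record fields and thresholds here are hypothetical illustrations, not cnvScan's actual parameters):

```python
# Hypothetical sketch of a CNV screening step in the spirit of cnvScan:
# keep predicted calls above a quality cutoff and absent from (or rare in)
# an in-house database of recurrent calls, which tend to be artefacts.
def screen_cnvs(calls, inhouse_freq, min_quality=60, max_freq=0.01):
    """calls: list of dicts with 'id' and 'quality'.
    inhouse_freq: dict mapping CNV id -> frequency in an in-house database."""
    kept = []
    for c in calls:
        if c["quality"] < min_quality:
            continue  # low-confidence prediction, likely false positive
        if inhouse_freq.get(c["id"], 0.0) > max_freq:
            continue  # recurrent in-house call, likely platform artefact
        kept.append(c)
    return kept
```

An annotation step would then attach functional and clinical information to the surviving calls, as the paper describes, so that reviewers only inspect a short, enriched candidate list.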
Objective Model Selection for Identifying the Human Feedforward Response in Manual Control.
Drop, Frank M; Pool, Daan M; van Paassen, Marinus Rene M; Mulder, Max; Bulthoff, Heinrich H
2018-01-01
Realistic manual control tasks typically involve predictable target signals and random disturbances. The human controller (HC) is hypothesized to use a feedforward control strategy for target-following, in addition to feedback control for disturbance-rejection. Little is known about human feedforward control, partly because common system identification methods have difficulty in identifying whether, and (if so) how, the HC applies a feedforward strategy. In this paper, an identification procedure is presented that aims at an objective model selection for identifying the human feedforward response, using linear time-invariant autoregressive with exogenous input models. A new model selection criterion is proposed to decide on the model order (number of parameters) and the presence of feedforward in addition to feedback. For a range of typical control tasks, it is shown by means of Monte Carlo computer simulations that the classical Bayesian information criterion (BIC) leads to selecting models that contain a feedforward path from data generated by a pure feedback model: "false-positive" feedforward detection. To eliminate these false-positives, the modified BIC includes an additional penalty on model complexity. The appropriate weighting is found through computer simulations with a hypothesized HC model prior to performing a tracking experiment. Experimental human-in-the-loop data will be considered in future work. With appropriate weighting, the method correctly identifies the HC dynamics in a wide range of control tasks, without false-positive results.
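The modified criterion can be sketched as a standard BIC with an extra weighting on the model-order penalty (the formula shape and the alpha parameterization are an illustrative assumption, not the paper's exact expression):

```python
import math

# Sketch of a BIC variant with an adjustable complexity penalty.
def modified_bic(n, rss, k, alpha=1.0):
    """n: number of samples; rss: residual sum of squares of the fitted
    ARX model; k: number of model parameters. alpha = 1 recovers the
    classical BIC; alpha > 1 penalizes complexity more heavily, which is
    the idea used to suppress false-positive feedforward detection."""
    return n * math.log(rss / n) + alpha * k * math.log(n)
```

Candidate feedback-only and feedback-plus-feedforward model structures are then compared under this criterion, and the feedforward path is accepted only when its fit improvement outweighs the inflated penalty.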
Dachman, Abraham H.; Wroblewski, Kristen; Vannier, Michael W.; Horne, John M.
2014-01-01
Computed tomography (CT) colonography is a screening modality used to detect colonic polyps before they progress to colorectal cancer. Computer-aided detection (CAD) is designed to decrease errors of detection by finding and displaying polyp candidates for evaluation by the reader. CT colonography CAD false-positive results are common and have numerous causes. The relative frequency of CAD false-positive results and their effect on reader performance, assessed on the basis of a 19-reader, 100-case trial, show that the vast majority of CAD false-positive results were dismissed by readers. Many CAD false-positive results are easily disregarded, including those that result from coarse mucosa, reconstruction, peristalsis, motion, streak artifacts, diverticulum, rectal tubes, and lipomas. CAD false-positive results caused by haustral folds, extracolonic candidates, diminutive lesions (<6 mm), anal papillae, internal hemorrhoids, varices, extrinsic compression, and flexural pseudotumors are almost always recognized and disregarded. The ileocecal valve and tagged stool are common sources of CAD false-positive results associated with reader false-positive results. Nondismissable CAD soft-tissue polyp candidates larger than 6 mm are another common cause of reader false-positive results that may lead to further evaluation with follow-up CT colonography or optical colonoscopy. Strategies for correctly evaluating CAD polyp candidates are important to avoid pitfalls from common sources of CAD false-positive results. ©RSNA, 2014 PMID:25384290
Berlin, Sofia; Smith, Nick G C
2005-11-10
Adaptive evolution appears to be a common feature of reproductive proteins across a very wide range of organisms. A promising way of addressing the evolutionary forces responsible for this general phenomenon is to test for adaptive evolution in the same gene but among groups of species, which differ in their reproductive biology. One can then test evolutionary hypotheses by asking whether the variation in adaptive evolution is consistent with the variation in reproductive biology. We have attempted to apply this approach to the study of a female reproductive protein, zona pellucida C (ZPC), which has been previously shown by the use of likelihood ratio tests (LRTs) to be under positive selection in mammals. We tested for evidence of adaptive evolution of ZPC in 15 mammalian species, in 11 avian species and in six fish species using three different LRTs (M1a-M2a, M7-M8, and M8a-M8). The only significant findings of adaptive evolution came from the M7-M8 test in mammals and fishes. Since LRTs of adaptive evolution may yield false positives in some situations, we examined the properties of the LRTs by several different simulation methods. When we simulated data to test the robustness of the LRTs, we found that the pattern of evolution in ZPC generates an excess of false positives for the M7-M8 LRT but not for the M1a-M2a or M8a-M8 LRTs. This bias is strong enough to have generated the significant M7-M8 results for mammals and fishes. We conclude that there is no strong evidence for adaptive evolution of ZPC in any of the vertebrate groups we studied, and that the M7-M8 LRT can be biased towards false inference of adaptive evolution by certain patterns of non-adaptive evolution.
A Closer Look at Self-Reported Suicide Attempts: False Positives and False Negatives
ERIC Educational Resources Information Center
Ploderl, Martin; Kralovec, Karl; Yazdi, Kurosch; Fartacek, Reinhold
2011-01-01
The validity of self-reported suicide attempt information is undermined by false positives (e.g., incidences without intent to die), or by unreported suicide attempts, referred to as false negatives. In a sample of 1,385 Austrian adults, we explored the occurrence of false positives and false negatives with detailed, probing questions. Removing…
Brassel, J; Rohrssen, F; Failing, K; Wehrend, A
2018-06-11
To evaluate the performance of a novel accelerometer-based oestrus detection system (ODS) for dairy cows on pasture, in comparison with measurement of concentrations of progesterone in milk, ultrasonographic examination of ovaries and farmer observations. Mixed-breed lactating dairy cows (n=109) in a commercial, seasonal-calving herd managed at pasture under typical farming conditions in Ireland were fitted with oestrus detection collars 3 weeks prior to mating start date. The ODS performed multimetric analysis of eight different motion patterns to generate oestrus alerts. Data were collected during the artificial insemination period of 66 days, commencing on 16 April 2015. Transrectal ultrasonographic examinations of the reproductive tract and measurements of concentrations of progesterone in milk were used to confirm oestrus events. Visual observations by the farmer and the number of theoretically expected oestrus events were used to evaluate the number of false negative ODS alerts. The percentage of eligible cows that were detected in oestrus at least once (and were confirmed true positives) was calculated for the first 21, 42 and 63 days of the insemination period. During the insemination period, the ODS generated 194 oestrus alerts and 140 (72.2%) were confirmed as true positives. Six confirmed oestrus events recognised by the farmer did not generate ODS alerts. The positive predictive value of the ODS was 72.2 (95% CI=65.3-78.4)%. To account for oestrus events not identified by the ODS or the farmer, four theoretical missed oestrus events were added to the false negatives. Estimated sensitivity of the automated ODS was 93.3 (95% CI=88.1-96.8)%. The proportion of eligible cows that were detected in oestrus during the first 21 days of the insemination period was 92/106 (86.8%), and during the first 42 and 63 days of the insemination period was 103/106 (97.2%) and 105/106 (99.1%), respectively.
The ODS under investigation was suitable for oestrus detection in dairy cows on pasture and showed a high sensitivity of oestrus detection. Multimetric analysis of behavioural data seems to be the superior approach to developing and improving ODS for dairy cows on pasture. Due to a high proportion of false positive alerts, its use as a stand-alone system for oestrus detection cannot be recommended. As this was the first investigation of the system, testing on other farms would be necessary for further validation.
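The positive predictive value and sensitivity reported in this abstract follow directly from the counts it gives; a minimal sketch reproducing that arithmetic (confidence intervals aside):

```python
# Reproducing the abstract's headline figures from its reported counts.
alerts = 194             # oestrus alerts generated by the ODS
true_positives = 140     # alerts confirmed as true oestrus events
false_negatives = 6 + 4  # farmer-recognised misses plus theoretical missed events

ppv = true_positives / alerts
sensitivity = true_positives / (true_positives + false_negatives)

print(round(100 * ppv, 1))          # → 72.2
print(round(100 * sensitivity, 1))  # → 93.3
```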
ERIC Educational Resources Information Center
Greyson, Bruce
2005-01-01
Some persons who claim to have had near-death experiences (NDEs) fail research criteria for having had NDEs ("false positives"); others who deny having had NDEs do meet research criteria for having had NDEs ("false negatives"). The author evaluated false positive claims and false negative denials in an organization that promotes near-death…
Risk of breast cancer after false-positive results in mammographic screening.
Román, Marta; Castells, Xavier; Hofvind, Solveig; von Euler-Chelpin, My
2016-06-01
Women with false-positive results are commonly referred back to routine screening. Questions remain regarding their long-term risk of breast cancer. We assessed the risk of screen-detected breast cancer in women with false-positive results. We conducted a joint analysis using individual level data from the population-based screening programs in Copenhagen and Funen in Denmark, Norway, and Spain. Overall, 150,383 screened women from Denmark (1991-2008), 612,138 from Norway (1996-2010), and 1,172,572 from Spain (1990-2006) were included. Poisson regression was used to estimate the relative risk (RR) of screen-detected cancer for women with false-positive versus negative results. We analyzed information from 1,935,093 women aged 50-69 years who underwent 6,094,515 screening exams. During an average 5.8 years of follow-up, 230,609 (11.9%) women received a false-positive result and 27,849 (1.4%) were diagnosed with screen-detected cancer. The adjusted RR of screen-detected cancer after a false-positive result was 2.01 (95% CI: 1.93-2.09). Women who tested false-positive at first screen had a RR of 1.86 (95% CI: 1.77-1.96), whereas those who tested false-positive at third screening had a RR of 2.42 (95% CI: 2.21-2.64). The RR of breast cancer at the screening test after the false-positive result was 3.95 (95% CI: 3.71-4.21), whereas it decreased to 1.25 (95% CI: 1.17-1.34) three or more screens after the false-positive result. Women with false-positive results had a twofold risk of screen-detected breast cancer compared to women with negative tests. The risk remained significantly higher three or more screens after the false-positive result. The increased risk should be considered when discussing stratified screening strategies. © 2016 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.
Schwartz, Lisa M; Woloshin, Steven; Sox, Harold C; Fischhoff, Baruch; Welch, H Gilbert
2000-01-01
Objective To determine women's attitudes to and knowledge of both false positive mammography results and the detection of ductal carcinoma in situ after screening mammography. Design Cross sectional survey. Setting United States. Participants 479 women aged 18-97 years who did not report a history of breast cancer. Main outcome measures Attitudes to and knowledge of false positive results and the detection of ductal carcinoma in situ after screening mammography. Results Women were aware that false positive results do occur. Their median estimate of the false positive rate for 10 years of annual screening was 20% (25th percentile estimate, 10%; 75th percentile estimate, 45%). The women were highly tolerant of false positives: 63% thought that 500 or more false positives per life saved was reasonable and 37% would tolerate 10 000 or more. Women who had had a false positive result (n=76) expressed the same high tolerance: 39% would tolerate 10 000 or more false positives. 62% of women did not want to take false positive results into account when deciding about screening. Only 8% of women thought that mammography could harm a woman without breast cancer, and 94% doubted the possibility of non-progressive breast cancers. Few had heard about ductal carcinoma in situ, a cancer that may not progress, but when informed, 60% of women wanted to take into account the possibility of it being detected when deciding about screening. Conclusions Women are aware of false positives and seem to view them as an acceptable consequence of screening mammography. In contrast, most women are unaware that screening can detect cancers that may never progress but feel that such information would be relevant. Education should perhaps focus less on false positives and more on the less familiar outcome of detection of ductal carcinoma in situ. PMID:10856064
Automated detection of lung nodules with three-dimensional convolutional neural networks
NASA Astrophysics Data System (ADS)
Pérez, Gustavo; Arbeláez, Pablo
2017-11-01
Lung cancer is the cancer type with the highest mortality rate worldwide. It has been shown that early detection with computed tomography (CT) scans can reduce deaths caused by this disease. Manual detection of cancer nodules is costly and time-consuming. We present a general framework for the detection of nodules in lung CT images. Our method consists of the pre-processing of a patient's CT with filtering and lung extraction from the entire volume using a previously calculated mask for each patient. From the extracted lungs, we perform a candidate generation stage using morphological operations, followed by the training of a three-dimensional convolutional neural network for feature representation and classification of extracted candidates for false positive reduction. We perform experiments on the publicly available LIDC-IDRI dataset. Our candidate extraction approach is effective at producing precise candidates, with a recall of 99.6%. In addition, the false-positive reduction stage successfully classifies candidates and increases precision by a factor of 7.000.
Species classifier choice is a key consideration when analysing low-complexity food microbiome data.
Walsh, Aaron M; Crispie, Fiona; O'Sullivan, Orla; Finnegan, Laura; Claesson, Marcus J; Cotter, Paul D
2018-03-20
The use of shotgun metagenomics to analyse low-complexity microbial communities in foods has the potential to be of considerable fundamental and applied value. However, there is currently no consensus with respect to choice of species classification tool, platform, or sequencing depth. Here, we benchmarked the performances of three high-throughput short-read sequencing platforms, the Illumina MiSeq, NextSeq 500, and Ion Proton, for shotgun metagenomics of food microbiota. Briefly, we sequenced six kefir DNA samples and a mock community DNA sample, the latter constructed by evenly mixing genomic DNA from 13 food-related bacterial species. A variety of bioinformatic tools were used to analyse the data generated, and the effects of sequencing depth on these analyses were tested by randomly subsampling reads. Compositional analysis results were consistent between the platforms at divergent sequencing depths. However, we observed pronounced differences in the predictions from species classification tools. Indeed, PERMANOVA indicated that there were no significant differences between the compositional results generated by the different sequencers (p = 0.693, R2 = 0.011), but there was a significant difference between the results predicted by the species classifiers (p = 0.01, R2 = 0.127). The relative abundances predicted by the classifiers, apart from MetaPhlAn2, were apparently biased by reference genome sizes. Additionally, we observed varying false-positive rates among the classifiers. MetaPhlAn2 had the lowest false-positive rate, whereas SLIMM had the greatest false-positive rate. Strain-level analysis results were also similar across platforms. Each platform correctly identified the strains present in the mock community, but accuracy was improved slightly with greater sequencing depth. Notably, PanPhlAn detected the dominant strains in each kefir sample above 500,000 reads per sample.
Again, the outputs from functional profiling analysis using SUPER-FOCUS were generally accordant between the platforms at different sequencing depths. Finally, and expectedly, metagenome assembly completeness was significantly lower on the MiSeq than either on the NextSeq (p = 0.03) or the Proton (p = 0.011), and it improved with increased sequencing depth. Our results demonstrate a remarkable similarity in the results generated by the three sequencing platforms at different sequencing depths, and, in fact, the choice of bioinformatics methodology had a more evident impact on results than the choice of sequencer did.
Experimental investigation of false positive errors in auditory species occurrence surveys
Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.
2012-01-01
False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. 
Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.
Seo, Ja Young; Park, Hyung-Doo; Kim, Jong Won; Oh, Hyeon Ju; Yang, Jeong Soo; Chang, Yun Sil; Park, Won Soon; Lee, Soo-Youn
2014-01-01
Newborn screening for congenital adrenal hyperplasia (CAH) based on measuring 17-hydroxyprogesterone (17-OHP) by immunoassay generates a number of false-positive results, especially in preterm neonates. We applied steroid profiling by using liquid chromatography-tandem mass spectrometry (LC-MS/MS) as a second-tier test in newborns with positive CAH screening and evaluated its clinical utility in a tertiary care hospital setting. By performing a 4-year retrospective data review, we were able to test 121 dried blood spots from newborns with positive CAH screening for 17-OHP, androstenedione and cortisol levels by LC-MS/MS. We prospectively evaluated the clinical utility of steroid profiling after the implementation of steroid profiling as a second-tier test in our routine clinical practice. During the 2-year prospective study period, 104 cases with positive initial screening by FIA were tested by LC-MS/MS. Clinical and laboratory follow-up were performed for at least 6 months. The preterm neonates accounted for 50.7% (76/150) and 70.4% (88/125) of screening-positive cases in retrospective and prospective cohorts, respectively. By applying steroid profiling as a second-tier test for positive CAH screening, we eliminated all false-positive results and decreased the median follow-up time from 75 to 8 days. Our data showed that steroid profiling reduced the burden of follow-up exams by improving the positive predictive value of the CAH screening program. The use of steroid profiling as a second-tier test for positive CAH screening will improve clinical practice particularly in a tertiary care hospital setting where positive CAH screening from preterm neonates is frequently encountered.
Kepler Certified False Positive Table
NASA Technical Reports Server (NTRS)
Bryson, Stephen T.; Batalha, Natalie Marie; Colon, Knicole Dawn; Coughlin, Jeffrey Langer; Haas, Michael R.; Henze, Chris; Huber, Daniel; Morton, Tim; Rowe, Jason Frank; Mullally, Susan Elizabeth;
2017-01-01
This document describes the Kepler Certified False Positive table hosted at the Exoplanet Archive, herein referred to as the CFP table. This table is the result of detailed examination by the Kepler False Positive Working Group (FPWG) of declared false positives in the Kepler Object of Interest (KOI) tables (see, for example, Batalha et al. (2012); Burke et al. (2014); Rowe et al. (2015); Mullally et al. (2015); Coughlin et al. (2015b)) at the Exoplanet Archive. A KOI is considered a false positive if it is not due to a planet orbiting the KOI's target star. The CFP table contains all KOIs in the Exoplanet Archive cumulative KOI table. The purpose of the CFP table is to provide a list of certified false positive KOIs. A KOI is certified as a false positive when, in the judgement of the FPWG, there is no plausible planetary interpretation of the observational evidence, which we summarize by saying that the evidence for a false positive is compelling. This certification process involves detailed examination using all available data for each KOI, establishing a high-reliability ground truth set. The CFP table can be used to estimate the reliability of, for example, the KOI tables which are created using only Kepler photometric data, so the disposition of individual KOIs may differ in the KOI and CFP tables. Follow-up observers may find the CFP table useful to avoid observing false positives.
Ye, L; Jia, Z; Jung, T; Maloney, P C
2001-04-01
The topology of OxlT, the oxalate:formate exchange protein of Oxalobacter formigenes, was established by site-directed fluorescence labeling, a simple strategy that generates topological information in the context of the intact protein. Accessibility of cysteine to the fluorescent thiol-directed probe Oregon green maleimide (OGM) was examined for a panel of 34 single-cysteine variants, each generated in a His(9)-tagged cysteine-less host. The reaction with OGM was readily scored by examining the fluorescence profile after sodium dodecyl sulfate-polyacrylamide gel electrophoresis of material purified by Ni2+ linked affinity chromatography. A position was assigned an external location if its single-cysteine derivative reacted with OGM added to intact cells; a position was designated internal if OGM labeling required cell lysis. We also showed that labeling of external, but not internal, positions was blocked by prior exposure of cells to the impermeable and nonfluorescent thiol-specific agent ethyltrimethylammonium methanethiosulfonate. Of the 34 positions examined in this way, 29 were assigned unambiguously to either an internal or external location; 5 positions could not be assigned, since the target cysteine failed to react with OGM. There was no evidence of false-positive assignment. Our findings document a simple and rapid method for establishing the topology of a membrane protein and show that OxlT has 12 transmembrane segments, confirming inferences from hydropathy analysis.
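The scoring logic described above reduces to a small decision rule; a minimal sketch, with function and argument names that are illustrative rather than from the paper:

```python
# A minimal encoding of the assignment rule described above: a cysteine
# position is external if OGM labels it in intact cells, internal if
# labeling requires cell lysis, and unassigned if it never reacts.
def assign_location(labels_in_intact_cells: bool, labels_after_lysis: bool) -> str:
    if labels_in_intact_cells:
        return "external"    # probe reached the cysteine from outside the cell
    if labels_after_lysis:
        return "internal"    # labeling required access to the cytoplasmic face
    return "unassigned"      # cysteine failed to react with OGM at all

print(assign_location(False, True))   # → internal
print(assign_location(False, False))  # → unassigned
```

Of the 34 positions in the study, 29 fell into the first two branches and 5 into the last.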
Singh, Deependra; Pitkäniemi, Janne; Malila, Nea; Anttila, Ahti
2016-09-01
Mammography has been found effective as the primary screening test for breast cancer. We estimated the cumulative probability of false positive screening test results with respect to symptom history reported at screen. A historical prospective cohort study was done using individual screening data from 413,611 women aged 50-69 years with 2,627,256 invitations for mammography screening between 1992 and 2012 in Finland. Symptoms (lump, retraction, and secretion) were reported at 56,805 visits, and 48,873 visits resulted in a false positive mammography result. Generalized linear models were used to estimate the probability of at least one false positive test and true positive at screening visits. The estimates were compared among women with and without symptoms history. The estimated cumulative probabilities were 18 and 6 % for false positive and true positive results, respectively. In women with a history of a lump, the cumulative probabilities of false positive test and true positive were 45 and 16 %, respectively, compared to 17 and 5 % with no reported lump. In women with a history of any given symptom, the cumulative probabilities of false positive test and true positive were 38 and 13 %, respectively. Likewise, women with a history of a 'lump and retraction' had the cumulative false positive probability of 56 %. The study showed higher cumulative risk of false positive tests and more cancers detected in women who reported symptoms compared to women who did not report symptoms at screen. The risk varies substantially, depending on symptom types and characteristics. Information on breast symptoms influences the balance of absolute benefits and harms of screening.
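As a back-of-the-envelope complement to the abstract above (not the authors' generalized linear models), a constant, independent per-screen false-positive rate p gives a cumulative probability of 1 - (1 - p)^k over k screens; the per-screen rate below is an assumed value for illustration:

```python
def cumulative_false_positive(p: float, k: int) -> float:
    """P(at least one false positive) over k independent screens,
    assuming a constant per-screen false-positive rate p."""
    return 1.0 - (1.0 - p) ** k

# e.g. an assumed 2% per-screen rate over 10 biennial screens (ages 50-69):
print(round(cumulative_false_positive(0.02, 10), 3))  # → 0.183
```

A modest per-screen rate therefore compounds into a cumulative risk close to the 18% the study reports, which is why cumulative rather than per-screen probabilities are the relevant quantity.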
Artes, Paul H; McLeod, David; Henson, David B
2002-01-01
To report on differences between the latency distributions of responses to stimuli and to false-positive catch trials in suprathreshold perimetry. To describe an algorithm for defining response time windows and to report on its performance in discriminating between true- and false-positive responses on the basis of response time (RT). A sample of 435 largely inexperienced patients underwent suprathreshold visual field examination on a perimeter that was modified to record RTs. Data were analyzed from 60,500 responses to suprathreshold stimuli and from 523 false-positive responses to catch trials. False-positive responses had much more variable latencies than responses to suprathreshold stimuli. An algorithm defining RT windows on the basis of z-transformed individual latency samples correctly identified more than 70% of false-positive responses to catch trials, whereas fewer than 3% of responses to suprathreshold stimuli were classified as false-positive responses. Latency analysis can be used to detect a substantial proportion of false-positive responses in suprathreshold perimetry. Rejection of such responses may increase the reliability of visual field screening by reducing variability and bias in a small but clinically important proportion of patients.
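The response-time-window idea above can be sketched as an outlier test on latencies: flag a response as a likely false positive when its latency falls outside a z-score window computed from the observer's own latency sample. The |z| > 2 cutoff and the latencies below are illustrative assumptions, not the study's values.

```python
import statistics

# Flag responses whose latency is an outlier relative to the observer's
# own latency distribution; such responses are candidate false positives.
def flag_false_positives(latencies_ms, z_limit=2.0):
    mean = statistics.mean(latencies_ms)
    sd = statistics.stdev(latencies_ms)
    return [abs((rt - mean) / sd) > z_limit for rt in latencies_ms]

# Tight stimulus-driven latencies plus one implausibly delayed response:
rts = [420, 450, 430, 445, 460, 435, 440, 455, 1900]
flags = flag_false_positives(rts)
print(flags.count(True))  # → 1 (only the 1900 ms outlier is flagged)
```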
Muver, a computational framework for accurately calling accumulated mutations.
Burkholder, Adam B; Lujan, Scott A; Lavender, Christopher A; Grimm, Sara A; Kunkel, Thomas A; Fargo, David C
2018-05-09
Identification of mutations from next-generation sequencing data typically requires a balance between sensitivity and accuracy. This is particularly true of DNA insertions and deletions (indels), which can impart significant phenotypic consequences on cells but are harder to call than substitution mutations from whole genome mutation accumulation experiments. To overcome these difficulties, we present muver, a computational framework that integrates established bioinformatics tools with novel analytical methods to generate mutation calls with the extremely low false positive rates and high sensitivity required for accurate mutation rate determination and comparison. Muver uses statistical comparison of ancestral and descendant allelic frequencies to identify variant loci and assigns genotypes with models that include per-sample assessments of sequencing errors by mutation type and repeat context. Muver identifies maximally parsimonious mutation pathways that connect these genotypes, differentiating potential allelic conversion events and delineating ambiguities in mutation location, type, and size. Benchmarking with a human gold standard father-son pair demonstrates muver's sensitivity and low false positive rates. In DNA mismatch repair (MMR) deficient Saccharomyces cerevisiae, muver detects multi-base deletions in homopolymers longer than the replicative polymerase footprint at rates greater than predicted for sequential single-base deletions, implying a novel multi-repeat-unit slippage mechanism. Benchmarking results demonstrate the high accuracy and sensitivity achieved with muver, particularly for indels, relative to available tools. Applied to an MMR-deficient Saccharomyces cerevisiae system, muver mutation calls facilitate mechanistic insights into DNA replication fidelity.
Ibáñez-Sanz, Gemma; Garcia, Montse; Rodríguez-Moranta, Francisco; Binefa, Gemma; Gómez-Matas, Javier; Domènech, Xènia; Vidal, Carmen; Soriano, Antonio; Moreno, Víctor
2016-10-01
The most common side effect in population screening programmes is a false-positive result which leads to unnecessary risks and costs. To identify factors associated with false-positive results in a colorectal cancer screening programme with the faecal immunochemical test (FIT). Cross-sectional study of 472 participants with a positive FIT who underwent colonoscopy for confirmation of diagnosis between 2013 and 2014. A false-positive result was defined as having a positive FIT (≥20μg haemoglobin per gram of faeces) and follow-up colonoscopy without intermediate/high-risk lesions or cancer. Women showed a two-fold increased likelihood of a false-positive result compared with men (adjusted OR, 2.3; 95%CI, 1.5-3.4), but no female-specific factor was identified. The other variables associated with a false-positive result were successive screening (adjusted OR, 1.5; 95%CI, 1.0-2.2), anal disorders (adjusted OR, 3.1; 95%CI, 2.1-4.5) and the use of proton pump inhibitors (adjusted OR, 1.8; 95%CI, 1.1-2.9). Successive screening and proton pump inhibitor use were associated with FP in men. None of the other drugs were related to a false-positive FIT. Concurrent use of proton pump inhibitors at the time of FIT might increase the likelihood of a false-positive result. Further investigation is needed to determine whether discontinuing them could decrease the false-positive rate. Copyright © 2016 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
False negative rates in Drosophila cell-based RNAi screens: a case study
2011-01-01
Background High-throughput screening using RNAi is a powerful gene discovery method but is often complicated by false positive and false negative results. Whereas false positive results associated with RNAi reagents have been a matter of extensive study, the issue of false negatives has received less attention. Results We performed a meta-analysis of several genome-wide, cell-based Drosophila RNAi screens, together with a more focused RNAi screen, and conclude that the rate of false negative results is at least 8%. Further, we demonstrate how knowledge of the cell transcriptome can be used to resolve ambiguous results and how the number of false negative results can be reduced by using multiple, independently-tested RNAi reagents per gene. Conclusions RNAi reagents that target the same gene do not always yield consistent results due to false positives and weak or ineffective reagents. False positive results can be partially minimized by filtering with transcriptome data. RNAi libraries with multiple reagents per gene also reduce false positive and false negative outcomes when inconsistent results are disambiguated carefully. PMID:21251254
[Roaming through methodology. XXXII. False test results].
van der Weijden, T; van den Akker, M
2001-05-12
The number of requests for diagnostic tests is rising. This leads to a higher chance of false test results. The false-negative proportion of a test is the proportion of negative test results among the diseased subjects. The false-positive proportion is the proportion of positive test results among the healthy subjects. The calculation of the false-positive proportion is often incorrect. For example, instead of 1 minus the specificity it is calculated as 1 minus the positive predictive value. This can lead to incorrect decision-making with respect to the application of the test. Physicians must apply diagnostic tests in such a way that the risk of false test results is minimal. The patient should be aware that a perfectly conclusive diagnostic test is rare in medical practice, and should more often be informed of the implications of false-positive and false-negative test results.
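The distinction above is easy to get wrong, so a worked example with hypothetical counts may help: 1 minus the specificity is computed over the healthy subjects, whereas 1 minus the positive predictive value is computed over the positive tests.

```python
# Hypothetical confusion-matrix counts (not from the article).
tp, fn = 90, 10    # 100 diseased subjects
fp, tn = 45, 855   # 900 healthy subjects

sensitivity = tp / (tp + fn)          # 0.90
specificity = tn / (tn + fp)          # 0.95
ppv = tp / (tp + fp)                  # positive predictive value, 90/135

false_negative_prop = fn / (tp + fn)  # 1 - sensitivity = 0.10
false_positive_prop = fp / (fp + tn)  # 1 - specificity = 0.05

# The incorrect calculation described above: 1 - PPV is the share of
# positive tests that are wrong, not the share of healthy subjects
# who test positive.
print(round(false_positive_prop, 2), round(1 - ppv, 2))  # → 0.05 0.33
```

With these counts, the mistaken formula inflates the apparent false-positive proportion more than sixfold.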
INFRARED-BASED BLINK DETECTING GLASSES FOR FACIAL PACING: TOWARDS A BIONIC BLINK
Frigerio, Alice; Hadlock, Tessa A; Murray, Elizabeth H; Heaton, James T
2015-01-01
IMPORTANCE Facial paralysis remains one of the most challenging conditions to effectively manage, often causing life-altering deficits in both function and appearance. Facial rehabilitation via pacing and robotic technology has great yet unmet potential. A critical first step towards reanimating symmetrical facial movement in cases of unilateral paralysis is the detection of healthy movement to use as a trigger for stimulated movement. OBJECTIVE To test a blink detection system that can be attached to standard eyeglasses and used as part of a closed-loop facial pacing system. DESIGN Standard safety glasses were equipped with an infrared (IR) emitter/detector pair oriented horizontally across the palpebral fissure, creating a monitored IR beam that became interrupted when the eyelids closed. SETTING Tertiary care Facial Nerve Center. PARTICIPANTS 24 healthy volunteers. MAIN OUTCOME MEASURE Video-quantified blinking was compared with both IR sensor signal magnitude and rate of change in healthy participants with their gaze in repose, while they shifted gaze from central to far peripheral positions, and during the production of particular facial expressions. RESULTS Blink detection based on signal magnitude achieved 100% sensitivity in forward gaze, but generated false-detections on downward gaze. Calculations of peak rate of signal change (first derivative) typically distinguished blinks from gaze-related lid movements. During forward gaze, 87% of detected blink events were true positives, 11% were false positives, and 2% false negatives. Of the 11% false positives, 6% were associated with partial eyelid closures. During gaze changes, false blink detection occurred 6.3% of the time during lateral eye movements, 10.4% during upward movements, 46.5% during downward movements, and 5.6% for movements from an upward or downward gaze back to the primary gaze. Facial expressions disrupted sensor output if they caused substantial squinting or shifted the glasses. 
CONCLUSION AND RELEVANCE Our blink detection system provides a reliable, non-invasive indication of eyelid closure using an invisible light beam passing in front of the eye. Future versions will aim to mitigate detection errors by using multiple IR emitter/detector pairs mounted on the glasses, and alternative frame designs may reduce shifting of the sensors relative to the eye during facial movements. PMID:24699708
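The first-derivative criterion described above can be sketched simply: a blink produces a fast change in the IR signal, while slow gaze shifts change it gradually, so thresholding the per-sample rate of change separates the two. The threshold and signal values below are illustrative assumptions, not the study's parameters.

```python
# Detect fast drops in an IR eyelid signal; slow drifts are ignored.
def blink_events(signal, threshold=5.0):
    """Indices where the signal drops faster than the threshold."""
    return [i for i in range(1, len(signal))
            if signal[i - 1] - signal[i] > threshold]

slow_gaze = [50, 49, 48, 47, 46, 45]  # gradual drop (gaze shift): no event
blink = [50, 50, 30, 10, 30, 50]      # fast drop: eyelid closure
print(blink_events(slow_gaze), blink_events(blink))  # → [] [2, 3]
```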
Weirather, Jason L.; Afshar, Pegah Tootoonchi; Clark, Tyson A.; Tseng, Elizabeth; Powers, Linda S.; Underwood, Jason G.; Zabner, Joseph; Korlach, Jonas; Wong, Wing Hung; Au, Kin Fai
2015-01-01
We developed an innovative hybrid sequencing approach, IDP-fusion, to detect fusion genes, determine fusion sites and identify and quantify fusion isoforms. IDP-fusion is the first method to study gene fusion events by integrating Third Generation Sequencing long reads and Second Generation Sequencing short reads. We applied IDP-fusion to PacBio data and Illumina data from the MCF-7 breast cancer cells. Compared with the existing tools, IDP-fusion detects fusion genes at higher precision and a very low false positive rate. The results show that IDP-fusion will be useful for unraveling the complexity of multiple fusion splices and fusion isoforms within tumorigenesis-relevant fusion genes. PMID:26040699
Wolf, Max; Kurvers, Ralf H J M; Ward, Ashley J W; Krause, Stefan; Krause, Jens
2013-04-07
In a wide range of contexts, including predator avoidance, medical decision-making and security screening, decision accuracy is fundamentally constrained by the trade-off between true and false positives. Increased true positives are possible only at the cost of increased false positives; conversely, decreased false positives are associated with decreased true positives. We use an integrated theoretical and experimental approach to show that a group of decision-makers can overcome this basic limitation. Using a mathematical model, we show that a simple quorum decision rule enables individuals in groups to simultaneously increase true positives and decrease false positives. The results from a predator-detection experiment that we performed with humans are in line with these predictions: (i) after observing the choices of the other group members, individuals both increase true positives and decrease false positives, (ii) this effect gets stronger as group size increases, (iii) individuals use a quorum threshold set between the average true- and false-positive rates of the other group members, and (iv) individuals adjust their quorum adaptively to the performance of the group. Our results have broad implications for our understanding of the ecology and evolution of group-living animals and lend themselves for applications in the human domain such as the design of improved screening methods in medical, forensic, security and business applications.
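The quorum effect described above can be sketched numerically, assuming independent group members with identical individual true- and false-positive rates (a simplification of the paper's model; names and parameter values are ours):

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def quorum_rates(n, quorum, tpr, fpr):
    """Group-level (true positive, false positive) rates when the group
    responds iff at least `quorum` of its n independent members respond.
    With the quorum set between the individual rates, both group rates
    improve on the individual ones simultaneously."""
    return binom_tail(n, quorum, tpr), binom_tail(n, quorum, fpr)
```

For example, members with individual rates of 0.7 (true positive) and 0.3 (false positive) in a group of 11 with a quorum of 6 yield a group true-positive rate above 0.7 and a group false-positive rate below 0.3, escaping the individual-level trade-off.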
Palomaki, Glenn E.; Deciu, Cosmin; Kloza, Edward M.; Lambert-Messerlian, Geralyn M.; Haddow, James E.; Neveux, Louis M.; Ehrich, Mathias; van den Boom, Dirk; Bombard, Allan T.; Grody, Wayne W.; Nelson, Stanley F.; Canick, Jacob A.
2012-01-01
Purpose: To determine whether maternal plasma cell-free DNA sequencing can effectively identify trisomy 18 and 13. Methods: Sixty-two pregnancies with trisomy 18 and 12 with trisomy 13 were selected from a cohort of 4,664 pregnancies along with matched euploid controls (including 212 additional Down syndrome and matched controls already reported), and their samples tested using a laboratory-developed, next-generation sequencing test. Interpretation of the results for chromosomes 18 and 13 included adjustment for GC content bias. Results: Among the 99.1% of samples interpreted (1,971/1,988), observed trisomy 18 and 13 detection rates were 100% (59/59) and 91.7% (11/12) at false-positive rates of 0.28% and 0.97%, respectively. Among the 17 samples without an interpretation, three were trisomy 18. If z-score cutoffs for trisomy 18 and 13 were raised slightly, the overall false-positive rates for the three aneuploidies could be as low as 0.1% (2/1,688) at an overall detection rate of 98.9% (280/283) for common aneuploidies. An independent academic laboratory confirmed performance in a subset. Conclusion: Among high-risk pregnancies, sequencing circulating cell-free DNA detects nearly all cases of Down syndrome, trisomy 18, and trisomy 13, at a low false-positive rate. This can potentially reduce invasive diagnostic procedures and related fetal losses by 95%. Evidence supports clinical testing for these aneuploidies. PMID:22281937
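The z-score-cutoff trade-off mentioned in the results (raising the cutoff lowers the false-positive rate at some cost to detection) can be illustrated with a minimal sketch; the function, data, and cutoff values are ours, not the study's pipeline:

```python
def screen(samples, cutoff):
    """Call a sample screen-positive when its chromosome z-score meets
    the cutoff. `samples` is a list of (z_score, is_trisomy) pairs.
    Returns (detection_rate, false_positive_rate)."""
    tp = sum(1 for z, t in samples if t and z >= cutoff)
    fp = sum(1 for z, t in samples if not t and z >= cutoff)
    pos = sum(1 for _, t in samples if t)
    neg = len(samples) - pos
    return tp / pos, fp / neg
```

On toy data, moving the cutoff from 3.0 to 4.0 drops a borderline euploid sample (lower false-positive rate) but also a borderline trisomy (lower detection), which is exactly the trade-off the abstract quantifies.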
Categorizing mistaken false positives in regulation of human and environmental health.
Hansen, Steffen Foss; Krayer von Krauss, Martin P; Tickner, Joel A
2007-02-01
One of the concerns often voiced by critics of the precautionary principle is that a widespread regulatory application of the principle will lead to a large number of false positives (i.e., over-regulation of minor risks and regulation of nonexisting risks). The present article proposes a general definition of a regulatory false positive, and seeks to identify case studies that can be considered authentic regulatory false positives. Through a comprehensive review of the science policy literature for proclaimed false positives and interviews with authorities on regulation and the precautionary principle we identified 88 cases. Following a detailed analysis of these cases, we found that few of the cases mentioned in the literature can be considered to be authentic false positives. As a result, we have developed a number of different categories for these cases of "mistaken false positives," including: real risks, "The jury is still out," nonregulated proclaimed risks, "Too narrow a definition of risk," and risk-risk tradeoffs. These categories are defined and examples are presented in order to illustrate their key characteristics. On the basis of our analysis, we were able to identify only four cases that could be defined as regulatory false positives in the light of today's knowledge and recognized uncertainty: the Southern Corn Leaf Blight, the Swine Flu, Saccharin, and Food Irradiation in relation to consumer health. We conclude that concerns about false positives do not represent a reasonable argument against future application of the precautionary principle.
Feasibility of Tidal and Ocean Current Energy in False Pass, Aleutian Islands, Alaska final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, Bruce Albert
The Aleutian Pribilof Islands Association was awarded a U.S. Department of Energy Tribal Energy Program grant (DE-EE0005624) for the Feasibility of Tidal and Ocean Current Energy in False Pass, Aleutian Islands, Alaska (Project). The goal of the Project was to perform a feasibility study to determine if a tidal energy project would be a viable means to generate electricity and heat to meet long-term fossil fuel use reduction goals, specifically to produce at least 30% of the electrical and heating needs of the tribally-owned buildings in False Pass. The Project Team included the Aleut Region organizations comprising the Aleutian Pribilof Island Association (APIA) and the Aleutian Pribilof Island Community Development Association (APICDA); the University of Alaska Anchorage; ORPC Alaska, a wholly-owned subsidiary of Ocean Renewable Power Company (ORPC); the City of False Pass; Benthic GeoScience; and the National Renewable Energy Laboratory (NREL). The following Project objectives were completed: collected existing bathymetric, tidal, and ocean current data to develop a basic model of current circulation at False Pass; measured current velocities at two sites for a full lunar cycle to establish the viability of the current resource; collected data on transmission infrastructure, electrical loads, and electrical generation at False Pass; performed economic analysis based on current costs of energy and the amount of energy anticipated from, and costs associated with, the tidal energy project conceptual design; and scoped environmental issues. Utilizing circulation modeling, the Project Team identified two target sites with strong potential for robust tidal energy resources, one in Isanotski Strait and another nearer the City of False Pass. In addition, the Project Team completed a survey of the electrical infrastructure, which identified likely sites of interconnection and clarified required transmission distances from the tidal energy resources.
Based on resource and electrical data, the Project Team developed a conceptual tidal energy project design utilizing ORPC’s TidGen® Power System. While the Project Team has not committed to ORPC technology for future development of a False Pass project, this conceptual design was critical to informing the Project’s economic analysis. The results showed that power from a tidal energy project could be provided to the City of False Pass at a rate at or below the cost of diesel-generated electricity and sold to commercial customers at rates competitive with current market rates, providing a stable, flat-priced, environmentally sound alternative to the diesel generation currently utilized for energy in the community. The Project Team concluded that with additional grants and private investment a tidal energy project at False Pass is well-positioned to be the first tidal energy project to be developed in Alaska, and the first tidal energy project to be interconnected to an isolated microgrid in the world. A viable project will be a model for similar projects in coastal Alaska.
An analysis of false positive reactions occurring with the Captia Syph G EIA.
Ross, J; Moyes, A; Young, H; McMillan, A
1991-01-01
AIM--The Captia Syph G enzyme immunoassay (EIA) offers the potential for the rapid automated detection of syphilis antibodies. This study was designed to assess the role of other sexually transmitted diseases (STDs) in producing false positive reactions in the Captia Syph G EIA. The role of rheumatoid factor (RF) as a potential source of false positives was also analysed. METHODS--Patients who attended a genitourinary medicine (GUM) department and gave a false positive reaction with the EIA between 1988 and 1990 were compared with women undergoing antenatal testing and with the control clinic population (EIA negative) over the same time period. The incidence of sexually transmitted disease (STD) in the clinic population and the false positive reactors was measured in relation to gonorrhoea, chlamydia, genital warts, candidiasis, "other conditions not requiring treatment" and "other conditions requiring treatment." Male:female sex ratios were also compared. Ninety-two RF positive sera were analysed with the EIA. RESULTS--The rate of false positive reactions did not differ with respect to the diagnosis within the GUM clinic population. The antenatal group of women, however, had a lower incidence of false positive reactions than the GUM clinic group. No RF positive sera were positive on Captia Syph G EIA testing. CONCLUSIONS--There is no cross reaction between the Captia Syph G EIA and any specific STD or with RF positive sera. The lower incidence of false positive reactions in antenatal women is unexplained but may be related to physiological changes associated with pregnancy. PMID:1743715
Use of General-purpose Negation Detection to Augment Concept Indexing of Medical Documents
Mutalik, Pradeep G.; Deshpande, Aniruddha; Nadkarni, Prakash M.
2001-01-01
Objectives: To test the hypothesis that most instances of negated concepts in dictated medical documents can be detected by a strategy that relies on tools developed for the parsing of formal (computer) languages—specifically, a lexical scanner (“lexer”) that uses regular expressions to generate a finite state machine, and a parser that relies on a restricted subset of context-free grammars, known as LALR(1) grammars. Methods: A diverse training set of 40 medical documents from a variety of specialties was manually inspected and used to develop a program (Negfinder) that contained rules to recognize a large set of negated patterns occurring in the text. Negfinder's lexer and parser were developed using tools normally used to generate programming language compilers. The input to Negfinder consisted of medical narrative that was preprocessed to recognize UMLS concepts: the text of a recognized concept had been replaced with a coded representation that included its UMLS concept ID. The program generated an index with one entry per instance of a concept in the document, where the presence or absence of negation of that concept was recorded. This information was used to mark up the text of each document by color-coding it to make it easier to inspect. The parser was then evaluated in two ways: 1) a test set of 60 documents (30 discharge summaries, 30 surgical notes) marked-up by Negfinder was inspected visually to quantify false-positive and false-negative results; and 2) a different test set of 10 documents was independently examined for negatives by a human observer and by Negfinder, and the results were compared. Results: In the first evaluation using marked-up documents, 8,358 instances of UMLS concepts were detected in the 60 documents, of which 544 were negations detected by the program and verified by human observation (true-positive results, or TPs). 
Thirteen instances were wrongly flagged as negated (false-positive results, or FPs), and the program missed 27 instances of negation (false-negative results, or FNs), yielding a sensitivity of 95.3 percent and a specificity of 97.7 percent. In the second evaluation using independent negation detection, 1,869 concepts were detected in 10 documents, with 135 TPs, 12 FPs, and 6 FNs, yielding a sensitivity of 95.7 percent and a specificity of 91.8 percent. One of the words “no,” “denies/denied,” “not,” or “without” was present in 92.5 percent of all negations. Conclusions: Negation of most concepts in medical narrative can be reliably detected by a simple strategy. The reliability of detection depends on several factors, the most important being the accuracy of concept matching. PMID:11687566
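Since the four cue words above accounted for 92.5% of all negations, even a crude cue-before-concept rule captures much of the problem. The following is a toy regex approximation of that idea; Negfinder itself used a lexer and an LALR(1) parser, so this sketch is illustrative only, and the function name and window size are our assumptions:

```python
import re

# The negation cues reported to cover ~92.5% of negations in the study.
NEG_CUES = r"\b(no|not|denies|denied|without)\b"

def negated_concepts(text, concepts):
    """Flag a concept as negated when a negation cue precedes it within
    a few words of the same sentence. Returns {concept: is_negated}."""
    flags = {}
    for sentence in re.split(r"[.;]", text):
        for concept in concepts:
            if concept.lower() not in sentence.lower():
                continue
            # cue, then up to 4 intervening words, then the concept
            pattern = NEG_CUES + r"\W+(?:\w+\W+){0,4}" + re.escape(concept)
            flags[concept] = bool(re.search(pattern, sentence, re.IGNORECASE))
    return flags
```

This captures "denies chest pain" and "without chills" while leaving "fever" (which precedes its cue) unflagged; the real system's grammar handled scope far more carefully, which is why its error rates were so low.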
2012-01-01
Background Gene Set Analysis (GSA) has proven to be a useful approach to microarray analysis. However, most of the method development for GSA has focused on the statistical tests to be used rather than on the generation of sets that will be tested. Existing methods of set generation are often overly simplistic. The creation of sets from individual pathways (in isolation) is a poor reflection of the complexity of the underlying metabolic network. We have developed a novel approach to set generation via the use of Principal Component Analysis of the Laplacian matrix of a metabolic network. We have analysed a relatively simple data set to show the difference in results between our method and the current state-of-the-art pathway-based sets. Results The sets generated with this method are semi-exhaustive and capture much of the topological complexity of the metabolic network. The semi-exhaustive nature of this method has also allowed us to design a hypergeometric enrichment test to determine which genes are likely responsible for set significance. We show that our method finds significant aspects of biology that would be missed (i.e. false negatives) and addresses the false positive rates found with the use of simple pathway-based sets. Conclusions The set generation step for GSA is often neglected but is a crucial part of the analysis as it defines the full context for the analysis. As such, set generation methods should be robust and yield as complete a representation of the extant biological knowledge as possible. The method reported here achieves this goal and is demonstrably superior to previous set analysis methods. PMID:22876834
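The hypergeometric enrichment test mentioned above (deciding which genes likely drive a set's significance) has a closed form: the probability of drawing at least k significant genes into a set of size n from a universe of N genes of which K are significant. A minimal sketch, with our own function name:

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """Upper-tail hypergeometric p-value P(X >= k): the chance that a
    random gene set of size n, drawn from N genes of which K are
    significant, contains at least k significant genes. math.comb
    returns 0 when the second argument exceeds the first, so impossible
    terms vanish automatically."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)
```

A small p-value means the overlap between the set and the significant genes is unlikely by chance, flagging those genes as probable drivers of the set's score.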
Experimental investigation of observation error in anuran call surveys
McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.
2010-01-01
Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than are sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate for false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys. © 2010 The Wildlife Society.
NASA Astrophysics Data System (ADS)
Manger, Daniel; Metzler, Jürgen
2014-03-01
Military Operations in Urban Terrain (MOUT) require the capability to perceive and to analyze the situation around a patrol in order to recognize potential threats. A permanent monitoring of the surrounding area is essential in order to appropriately react to the given situation, where one relevant task is the detection of objects that can pose a threat. Especially the robust detection of persons is important, as in MOUT scenarios threats usually arise from persons. This task can be supported by image processing systems. However, depending on the scenario, person detection in MOUT can be challenging, e.g. persons are often occluded in complex outdoor scenes and the person detection also suffers from low image resolution. Furthermore, there are several requirements on person detection systems for MOUT such as the detection of non-moving persons, as they can be a part of an ambush. Existing detectors therefore have to operate on single images with low thresholds for detection in order to not miss any person. This, in turn, leads to a comparatively high number of false positive detections which renders an automatic vision-based threat detection system ineffective. In this paper, a hybrid detection approach is presented. A combination of a discriminative and a generative model is examined. The objective is to increase the accuracy of existing detectors by integrating a separate hypotheses confirmation and rejection step which is built by a discriminative and generative model. This enables the overall detection system to make use of both the discriminative power and the capability to detect partly hidden objects with the models. The approach is evaluated on benchmark data sets generated from real-world image sequences captured during MOUT exercises. The extension shows a significant improvement of the false positive detection rate.
Cosmological constant in scale-invariant theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foot, Robert; Kobakhidze, Archil; Volkas, Raymond R.
2011-10-01
The incorporation of a small cosmological constant within radiatively broken scale-invariant models is discussed. We show that phenomenologically consistent scale-invariant models can be constructed which allow a small positive cosmological constant, provided a certain relation between the particle masses is satisfied. As a result, the mass of the dilaton is generated at two-loop level. Another interesting consequence is that the electroweak symmetry-breaking vacuum in such models is necessarily a metastable "false" vacuum which, fortunately, is not expected to decay on cosmological time scales.
Porter, Stephen; Taylor, Kristian; Ten Brinke, Leanne
2008-01-01
Despite a large body of false memory research, little has addressed the potential influence of an event's emotional content on susceptibility to false recollections. The Paradoxical Negative Emotion (PNE) hypothesis predicts that negative emotion generally facilitates memory but also heightens susceptibility to false memories. Participants were asked whether they could recall 20 "widely publicised" public events (half fictitious) ranging in emotional valence, with or without visual cues. Participants recalled a greater number of true negative events (M=3.31/5) than true positive (M=2.61/5) events. Nearly everyone (95%) came to recall at least one false event (M=2.15 false events recalled). Further, more than twice as many participants recalled any false negative (90%) compared to false positive (41.7%) events. Negative events, in general, were associated with more detailed memories and false negative event memories were more detailed than false positive event memories. Higher dissociation scores were associated with false recollections of negative events, specifically.
Generalized site occupancy models allowing for false positive and false negative errors
Royle, J. Andrew; Link, W.A.
2006-01-01
Site occupancy models have been developed that allow for imperfect species detection or "false negative" observations. Such models have become widely adopted in surveys of many taxa. The most fundamental assumption underlying these models is that "false positive" errors are not possible. That is, one cannot detect a species where it does not occur. However, such errors are possible in many sampling situations for a number of reasons, and even low false positive error rates can induce extreme bias in estimates of site occupancy when they are not accounted for. In this paper, we develop a model for site occupancy that allows for both false negative and false positive error rates. This model can be represented as a two-component finite mixture model and can be easily fitted using freely available software. We provide an analysis of avian survey data using the proposed model and present results of a brief simulation study evaluating the performance of the maximum-likelihood estimator and the naive estimator in the presence of false positive errors.
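The two-component mixture has a simple per-site likelihood: a site is occupied with probability psi; detections then follow a binomial with true-detection probability p11 if occupied, or with false-positive probability p10 if not. A sketch under that standard parameterization (our notation, not the paper's code):

```python
from math import comb

def site_likelihood(y, J, psi, p11, p10):
    """Marginal likelihood of y detections in J surveys under the
    two-component mixture: occupied (prob psi, per-survey detection
    prob p11) vs unoccupied (prob 1 - psi, per-survey false-positive
    prob p10)."""
    binom = lambda p: comb(J, y) * p**y * (1 - p)**(J - y)
    return psi * binom(p11) + (1 - psi) * binom(p10)
```

Maximizing the product of these terms over sites gives the maximum-likelihood estimates; note that with p10 = 0 the expression collapses to the standard false-negative-only occupancy likelihood, which is why ignoring false positives (p10 > 0 in truth) biases psi upward.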
Wallis, Ilka; Pichler, Thomas
2018-08-01
Groundwater monitoring relies on the acquisition of 'representative' groundwater samples, which should reflect the ambient water quality at a given location. However, drilling of a monitoring well for sample acquisition has the potential to perturb groundwater conditions to a point that may prove to be detrimental to the monitoring objective. Following installation of 20 monitoring wells in close geographic proximity in central Florida, opposing concentration trends for As and Mo were observed. In the first year after well installation As and Mo concentrations increased in some wells by a factor of 2, while in others As and Mo concentrations decreased by a factor of up to 100. Given this relatively short period of time, a natural change in groundwater composition of such magnitude is not expected, leaving well installation itself as the likely cause for the observed concentration changes. Hence, initial concentrations were identified as 'false negatives' if concentrations increased with time or as 'false positives' if concentrations decreased. False negatives were observed if concentrations were already high, i.e., the As or Mo were present at the time of drilling. False positives were observed if concentrations were relatively lower, i.e., As or Mo were present at low concentrations of approximately 1 to 2μg/L before drilling, but then released from the aquifer matrix as a result of drilling. Generally, As and Mo were present in the aquifer matrix in either pyrite or organic matter, both of which are susceptible to dissolution if redox conditions change due to the addition of oxygen. Thus, introduction of an oxidant into an anoxic aquifer through use of an oxygen saturated drilling fluid served as the conceptual model for the trends where concentrations decreased with time. Mixing between drilling fluid and groundwater (i.e., dilution) was used as the conceptual model for scenarios where increasing trends were observed. 
Conceptual models were successfully tested through formulation and application of data-driven reactive transport models, using the USGS code MODFLOW in conjunction with the reactive multicomponent transport code PHT3D. Copyright © 2018 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webb-Robertson, Bobbie-Jo M.
Accurate identification of peptides is a current challenge in mass spectrometry (MS) based proteomics. The standard approach uses a search routine to compare tandem mass spectra to a database of peptides associated with the target organism. These database search routines yield multiple metrics associated with the quality of the mapping of the experimental spectrum to the theoretical spectrum of a peptide. The structure of these results makes separating correct from false identifications difficult and has created a false identification problem. Statistical confidence scores are an approach to battle this false positive problem that has led to significant improvements in peptide identification. We have shown that machine learning, specifically the support vector machine (SVM), is an effective approach to separating true peptide identifications from false ones. The SVM-based peptide statistical scoring method transforms a peptide into a vector representation based on database search metrics to train and validate the SVM. In practice, following the database search routine, a peptide is encoded in its vector representation and the SVM generates a single statistical score that is then used to classify presence or absence in the sample.
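The classification step can be sketched with a minimal linear SVM trained by subgradient descent on the hinge loss. This is a from-scratch illustration of the kind of classifier described, not the authors' implementation; feature vectors stand in for database-search metrics, and labels are +1 (true identification) / -1 (false identification):

```python
def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimal linear SVM: subgradient descent on the regularized hinge
    loss. X rows are search-metric feature vectors; y in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # inside margin: hinge-loss subgradient step
                w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # outside margin: regularization shrinkage only
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def score(w, b, x):
    """Decision value; its sign classifies true vs false identification,
    and its magnitude can serve as a single confidence score."""
    return sum(wj * xj for wj, xj in zip(w, x)) + b
```

The single decision value plays the role of the "statistical score" in the abstract: thresholding it trades false positives against false negatives in the reported identifications.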
Shem-Tov, Doron; Halperin, Eran
2014-06-01
Recent technological improvements in the field of genetic data extraction give rise to the possibility of reconstructing the historical pedigrees of entire populations from the genotypes of individuals living today. Current methods are still not practical for real data scenarios as they have limited accuracy and assume unrealistic assumptions of monogamy and synchronized generations. In order to address these issues, we develop a new method for pedigree reconstruction, [Formula: see text], which is based on formulations of the pedigree reconstruction problem as variants of graph coloring. The new formulation allows us to consider features that were overlooked by previous methods, resulting in a reconstruction of up to 5 generations back in time, with an order of magnitude improvement of false-negatives rates over the state of the art, while keeping a lower level of false positive rates. We demonstrate the accuracy of [Formula: see text] compared to previous approaches using simulation studies over a range of population sizes, including inbred and outbred populations, monogamous and polygamous mating patterns, as well as synchronous and asynchronous mating.
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
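The key insight (generate each address relative to the previous one so a bit-flip propagates to every later address, then check one invariant at loop exit) can be illustrated with a small simulation. This is a conceptual sketch in Python, not the compiler transformation itself; the function name and the injected-fault mechanism are ours:

```python
def relative_indexing_walk(base, stride, n, flip_at=None, flip_bit=0):
    """Sketch of PRESAGE's idea: compute each address by a relative
    update from the previous address, so a corrupted address propagates
    forward, then verify a single closed-form invariant at loop exit.
    `flip_at` optionally injects a simulated soft error (bit-flip)."""
    addr = base
    visited = []
    for i in range(n):
        if i == flip_at:
            addr ^= 1 << flip_bit      # simulated soft error in the address
        visited.append(addr)
        addr += stride                 # relative, error-propagating update
    expected_final = base + n * stride # loop-exit detector invariant
    return visited, addr == expected_final
```

Because the corrupted value feeds every subsequent update, a single check against the closed-form final address at loop exit catches the fault; recomputing each address independently from `i` would mask it after one bad access.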
Texture analysis applied to second harmonic generation image data for ovarian cancer classification
NASA Astrophysics Data System (ADS)
Wen, Bruce L.; Brewer, Molly A.; Nadiarnykh, Oleg; Hocker, James; Singh, Vikas; Mackie, Thomas R.; Campagnola, Paul J.
2014-09-01
Remodeling of the extracellular matrix has been implicated in ovarian cancer. To quantitate the remodeling, we implement a form of texture analysis to delineate the collagen fibrillar morphology observed in second harmonic generation microscopy images of human normal and high grade malignant ovarian tissues. In the learning stage, a dictionary of "textons" (frequently occurring texture features that are identified by measuring the image response to a filter bank of various shapes, sizes, and orientations) is created. By calculating a representative model based on the texton distribution for each tissue type using a training set of respective second harmonic generation images, we then perform classification between images of normal and high grade malignant ovarian tissues. By optimizing the number of textons and nearest neighbors, we achieved classification accuracy up to 97% based on the area under receiver operating characteristic curves (true positives versus false positives). The local analysis algorithm is a more general method to probe rapidly changing fibrillar morphologies than global analyses such as the FFT. It is also more versatile than other texture approaches, as the filter bank can be highly tailored to specific applications (e.g., different disease states) by creating customized libraries based on common image features.
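The representation-and-classify pipeline can be sketched in two steps: assign each filter-response vector to its nearest texton to form a texton-frequency histogram, then label an image by its nearest tissue-type model histogram. This is a schematic of the general texton approach under simple Euclidean distances, not the study's trained system:

```python
def texton_histogram(responses, dictionary):
    """Assign each filter-response vector to its nearest texton and
    return the normalized texton-frequency histogram."""
    hist = [0] * len(dictionary)
    for r in responses:
        nearest = min(range(len(dictionary)),
                      key=lambda j: sum((a - b) ** 2
                                        for a, b in zip(r, dictionary[j])))
        hist[nearest] += 1
    total = sum(hist)
    return [h / total for h in hist]

def classify(hist, models):
    """Label an image by the nearest tissue-type model histogram."""
    return min(models, key=lambda label: sum((a - b) ** 2
                                             for a, b in zip(hist, models[label])))
```

In the study the dictionary and per-tissue models were learned from training images, and the number of textons and nearest neighbors were tuned to maximize the ROC area.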
Is there a positive bias in false recognition? Evidence from confabulating amnesia patients.
Alkathiri, Nura H; Morris, Robin G; Kopelman, Michael D
2015-10-01
Although there is some evidence for a positive emotional bias in the content of confabulations in brain-damaged patients, findings have been inconsistent. The present study used the semantic-associates procedure to induce false recall and false recognition in order to examine whether a positive bias would be found in confabulating amnesic patients, relative to non-confabulating amnesic patients and healthy controls. Lists of positive, negative and neutral words were presented in order to induce false recall or false recognition of non-presented (but semantically associated) words. The latter were termed 'critical intrusions'. Thirteen confabulating amnesic patients, 13 non-confabulating amnesic patients and 13 healthy controls were investigated. Confabulating patients falsely recognised a higher proportion of positive (but unrelated) words, compared with non-confabulating patients and healthy controls. No differences were found for recall memory. Signal detection analysis, however, indicated that the positive bias for false recognition memory might reflect weaker memory in the confabulating amnesic group. This suggests that amnesic patients with weaker memory are more likely to confabulate and that the content of their confabulations is more likely to be positive. Copyright © 2015 Elsevier Ltd. All rights reserved.
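The signal-detection analysis mentioned here separates genuine memory strength from response bias. A standard d-prime sketch (with entirely hypothetical hit and false-alarm rates, not the study's data) shows how a group with more false alarms can simply have weaker memory rather than a different bias:

```python
from statistics import NormalDist

# d' = z(hit rate) - z(false-alarm rate): higher d' means better
# discrimination between studied and non-studied words.
def d_prime(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative numbers only: a group with more false recognitions
# (0.40 vs 0.15) shows markedly lower memory sensitivity.
weaker_memory = d_prime(0.70, 0.40)
stronger_memory = d_prime(0.85, 0.15)
print(round(weaker_memory, 2), round(stronger_memory, 2))
```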
Study of false positives in 5-ALA induced photodynamic diagnosis of bladder carcinoma
NASA Astrophysics Data System (ADS)
Draga, Ronald O. P.; Grimbergen, Matthijs C. M.; Kok, Esther T.; Jonges, Trudy G. N.; Bosch, J. L. H. R.
2009-02-01
Photodynamic diagnosis (PDD) is a technique that enhances the detection of tumors during cystoscopy using a photosensitizer which accumulates primarily in cancerous cells and fluoresces when illuminated by violet-blue light. A disadvantage of PDD is its relatively low specificity. In this retrospective study we aimed to identify predictors of false-positive findings in PDD. Factors such as gender, age, recent transurethral resection of bladder tumors (TURBT), previous intravesical therapy (IVT) and urinary tract infections (UTIs) were examined for association with the false-positive rate in a multivariate analysis. Data from 366 procedures in 200 patients were collected. Patients were instilled intravesically with 5-aminolevulinic acid (5-ALA), and 1253 biopsies were taken from tumors and suspicious lesions. Female gender and recent TURBT are independent predictors of false positives in PDD; previous intravesical therapy with Bacille Calmette-Guérin is also an important predictor. The false-positive rate decreases during the first 9-12 weeks after the latest TURBT and the latest intravesical chemotherapy. Although false positives increase shortly after IVT and TURBT, PDD improves diagnostic sensitivity and results in more adequate treatment strategies in a significant number of patients.
Vázquez-Avila, Isidro; Vera-Peralta, Jorge Manuel; Alvarez-Nemegyei, José; Rodríguez-Carvajal, Otilia
2007-01-01
In order to decrease the burden of suffering and the costs derived from confirmatory molecular assays, a better strategy is badly needed to decrease the rate of false-positive results of the enzyme-linked immunoassay (ELISA) for detection of hepatitis C virus (HCV) antibodies (anti-HCV). The objective was to establish the best cutoff of the S/CO ratio in subjects with a positive result of a microparticle third-generation ELISA assay for anti-HCV, for predicting viremia as detected by polymerase chain reaction (PCR) assay. Using the result of the PCR assay as the "gold standard", a ROC curve was built from the S/CO ratio values in subjects with a positive ELISA HCV assay result. Fifty-two subjects (30 male, 22 female, 40 +/- 12.5 years old) were included. Thirty-four (65.3%) had a positive HCV RNA PCR assay. The area under the curve was 0.99 (95% CI: 0.98-1.0). The optimal cutoff for the S/CO ratio was established at 29: sensitivity, 97%; specificity, 100%; PPV, 100%; NPV, 94%. Setting the cutoff of the S/CO at 29 results in a high predictive value for viremia as detected by PCR in subjects with a positive ELISA HCV assay. This knowledge may allow better decision making in the clinical follow-up of subjects with a positive result in the ELISA screening assay for HCV infection.
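The reported operating characteristics at the S/CO cutoff of 29 can be reconstructed from the study's counts (52 subjects, 34 PCR-positive). The 2x2 table below is a hedged reconstruction consistent with the reported sensitivity and specificity, not tabulated data from the paper:

```python
# At S/CO >= 29: 33 of the 34 viremic subjects flagged, none of the
# 18 non-viremic subjects flagged (reconstructed from the summary).
tp, fn, tn, fp = 33, 1, 18, 0

sensitivity = tp / (tp + fn)   # 33/34
specificity = tn / (tn + fp)   # 18/18
ppv = tp / (tp + fp)           # 33/33
npv = tn / (tn + fn)           # 18/19
print(round(sensitivity, 2), round(specificity, 2), round(ppv, 2), round(npv, 2))
```

These values round to the abstract's 97% / 100% / 100% / ~94%.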
a Three-Dimensional Simulation and Visualization System for Uav Photogrammetry
NASA Astrophysics Data System (ADS)
Liang, Y.; Qu, Y.; Cui, T.
2017-08-01
Nowadays UAVs have been widely used for large-scale surveying and mapping. Compared with manned aircraft, UAVs are more cost-effective and responsive. However, UAVs are usually more sensitive to wind conditions, which greatly influence their positions and orientations. The flight height of a UAV is relatively low, and the relief of the terrain may result in serious occlusions. Moreover, the observations acquired by the Position and Orientation System (POS) are usually less accurate than those acquired in manned aerial photogrammetry. All of these factors introduce uncertainties into UAV photogrammetry. To investigate these uncertainties, a three-dimensional simulation and visualization system has been developed. The system is demonstrated with flight plan evaluation, image matching, POS-supported direct georeferencing, and ortho-mosaicing. Experimental results show that the presented system is effective for flight plan evaluation. The generated image pairs are accurate, and false matches can be effectively filtered. The presented system dynamically visualizes the results of direct georeferencing in three dimensions, which is informative and effective for real-time target tracking and positioning. The dynamically generated ortho-mosaic can be used in emergency applications. The presented system has also been used for teaching the theory and applications of UAV photogrammetry.
Statistical behavior of ten million experimental detection limits
NASA Astrophysics Data System (ADS)
Voigtman, Edward; Abraham, Kevin T.
2011-02-01
Using a lab-constructed laser-excited fluorimeter, together with bootstrapping methodology, the authors have generated many millions of experimental linear calibration curves for the detection of rhodamine 6G tetrafluoroborate in ethanol solutions. The detection limits computed from them are in excellent agreement with both previously published theory and with comprehensive Monte Carlo computer simulations. Currie decision levels and Currie detection limits, each in the theoretical, chemical content domain, were found to be simply scaled reciprocals of the non-centrality parameter of the non-central t distribution that characterizes univariate linear calibration curves that have homoscedastic, additive Gaussian white noise. Accurate and precise estimates of the theoretical, content domain Currie detection limit for the experimental system, with 5% (each) probabilities of false positives and false negatives, are presented.
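For orientation, the simplest textbook Currie expressions can be sketched numerically. The paper itself works with t and non-central-t distributions because the noise level is estimated from data; the sketch below uses the known-sigma z-score form, with a wholly hypothetical blank noise and calibration slope, and the study's 5% false-positive and false-negative probabilities:

```python
from statistics import NormalDist

# Simplest Currie case: known, homoscedastic Gaussian noise.
# alpha = P(false positive), beta = P(false negative).
alpha = beta = 0.05
z_a = NormalDist().inv_cdf(1 - alpha)   # ~1.645
z_b = NormalDist().inv_cdf(1 - beta)

sigma0 = 0.02   # hypothetical blank-noise standard deviation (signal units)
slope = 0.50    # hypothetical calibration slope (signal per unit content)

decision_level = z_a * sigma0 / slope              # content-domain L_C
detection_limit = (z_a + z_b) * sigma0 / slope     # content-domain L_D
print(round(decision_level, 4), round(detection_limit, 4))
```

With alpha = beta, the detection limit is exactly twice the decision level, which is a useful sanity check on any implementation.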
miREE: miRNA recognition elements ensemble
2011-01-01
Background Computational methods for microRNA target prediction are a fundamental step toward understanding the miRNA role in gene regulation, a key process in molecular biology. In this paper we present miREE, a novel microRNA target prediction tool. miREE is an ensemble of two parts entailing complementary but integrated roles in the prediction. The Ab-Initio module uses a genetic algorithm to generate a set of candidate sites on the basis of their microRNA-mRNA duplex stability properties. Then, a Support Vector Machine (SVM) learning module evaluates the impact of microRNA recognition elements on the target gene. As a result the prediction takes into account information regarding both miRNA-target structural stability and accessibility. Results The proposed method significantly improves on state-of-the-art prediction tools in terms of accuracy, with a better balance between specificity and sensitivity, as demonstrated by the experiments conducted on several large datasets across different species. miREE achieves this result by tackling two of the main challenges of current prediction tools: (1) the reduced number of false positives from the Ab-Initio part, thanks to the integration of a machine learning module, and (2) the specificity of the machine learning part, obtained through an innovative technique for generating rich and representative negative records. The validation was conducted on experimental datasets where the miRNA:mRNA interactions had been obtained through (1) direct validation, where even the binding site is provided, or through (2) indirect validation, based on gene expression variations obtained from high-throughput experiments where the specific interaction is not validated in detail and consequently the specific binding site is not provided.
Conclusions The coupling of two parts: a sensitive Ab-Initio module and a selective machine learning part capable of recognizing the false positives, leads to an improved balance between sensitivity and specificity. miREE obtains a reasonable trade-off between filtering false positives and identifying targets. miREE tool is available online at http://didattica-online.polito.it/eda/miREE/ PMID:22115078
Karaceper, Maria D; Chakraborty, Pranesh; Coyle, Doug; Wilson, Kumanan; Kronick, Jonathan B; Hawken, Steven; Davies, Christine; Brownell, Marni; Dodds, Linda; Feigenbaum, Annette; Fell, Deshayne B; Grosse, Scott D; Guttmann, Astrid; Laberge, Anne-Marie; Mhanni, Aizeddin; Miller, Fiona A; Mitchell, John J; Nakhla, Meranda; Prasad, Chitra; Rockman-Greenberg, Cheryl; Sparkes, Rebecca; Wilson, Brenda J; Potter, Beth K
2016-02-03
There is no consensus in the literature regarding the impact of false positive newborn screening results on early health care utilization patterns. We evaluated the impact of false positive newborn screening results for medium-chain acyl-CoA dehydrogenase deficiency (MCADD) in a cohort of Ontario infants. The cohort included all children who received newborn screening in Ontario between April 1, 2006 and March 31, 2010. Newborn screening and diagnostic confirmation results were linked to province-wide health care administrative datasets covering physician visits, emergency department visits, and inpatient hospitalizations, to determine health service utilization from April 1, 2006 through March 31, 2012. Incidence rate ratios (IRRs) were used to compare those with false positive results for MCADD to those with negative newborn screening results, stratified by age at service use. We identified 43 infants with a false positive newborn screening result for MCADD during the study period. These infants experienced significantly higher rates of physician visits (IRR: 1.42) and hospitalizations (IRR: 2.32) in the first year of life relative to a screen negative cohort in adjusted analyses. Differences in health services use were not observed after the first year of life. The higher use of some health services among false positive infants during the first year of life may be explained by a psychosocial impact of false positive results on parental perceptions of infant health, and/or by differences in underlying health status. Understanding the impact of false positive newborn screening results can help to inform newborn screening programs in designing support and education for families. This is particularly important as additional disorders are added to expanded screening panels, yielding important clinical benefits for affected children but also a higher frequency of false positive findings.
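The incidence rate ratios (IRRs) reported in this study compare event rates per unit of person-time between the false-positive and screen-negative cohorts. A minimal sketch of the calculation, with a Wald 95% confidence interval and entirely hypothetical counts (not the study's data), looks like this:

```python
from math import exp, log, sqrt

# IRR = (events_a / person-time_a) / (events_b / person-time_b),
# with a log-scale Wald 95% CI.
def irr(events_a, persontime_a, events_b, persontime_b):
    ratio = (events_a / persontime_a) / (events_b / persontime_b)
    se = sqrt(1 / events_a + 1 / events_b)
    lo, hi = exp(log(ratio) - 1.96 * se), exp(log(ratio) + 1.96 * se)
    return ratio, (lo, hi)

# Hypothetical example: 120 visits over 43 infant-years in the
# false-positive group vs 8450 visits over 4300 infant-years in the
# comparison group.
ratio, ci = irr(120, 43.0, 8450, 4300.0)
print(round(ratio, 2), round(ci[0], 2), round(ci[1], 2))
```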
Bi, Xiaohui; Ning, Hongxia; Wang, Tingting; Li, Dongdong; Liu, Yongming; Yang, Tingfu; Yu, Jiansheng; Tao, Chuanmin
2012-01-01
The recent approval of 4th-generation HIV tests has forced many laboratories to decide whether to shift from 3rd-generation tests to these tests. There are limited published studies comparatively evaluating these two different assays. We compared the performance of a fourth-generation chemiluminescence immunoassay (ChIA) and a third-generation enzyme-linked immunosorbent assay (EIA) for human immunodeficiency virus (HIV) screening and gauged whether the shift from EIA to ChIA would be better in a multiethnic region of China. We tested a large number of routine specimens (345,492) using the two assays from Jan 2008 to Aug 2011 in a teaching hospital with high sample throughput. Of the 344,596 specimens with interpretable HIV test results, 526 (0.23%) of 228,761 tested by EIA and 303 (0.26%) of 115,835 tested by ChIA were HIV-1 positive. The false-positive rate of EIA was lower than that of ChIA [0.03% vs. 0.08%, odds ratio 0.33 (95% confidence interval 0.24, 0.45)]. The positive predictive value (PPV) of EIA (89.6%) was significantly higher than that of ChIA (76.1%) (P<0.001), reflecting the difference between the two assays. The clinical sensitivities of the two assays in this study were 99.64% for EIA and 99.88% for ChIA. Caution is needed before shifting from 3rd- to 4th-generation HIV tests. Since none of these tests are perfect, different geographic and ethnic areas probably require different considerations with regard to HIV testing methods, taking into account local conditions.
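The reported false-positive rates follow arithmetically from the confirmed-positive counts and PPVs. The sketch below reconstructs the screen-positive and false-positive counts from those figures; the reconstruction is a hedged back-calculation, not the paper's raw table:

```python
# PPV = confirmed / screen-positives, so screen-positives = confirmed / PPV
# and false positives = screen-positives - confirmed.
def fp_stats(tested, confirmed, ppv):
    screen_pos = round(confirmed / ppv)
    false_pos = screen_pos - confirmed
    return false_pos, 100.0 * false_pos / tested

fp_eia, rate_eia = fp_stats(228761, 526, 0.896)
fp_chia, rate_chia = fp_stats(115835, 303, 0.761)
print(fp_eia, round(rate_eia, 2))    # ~61 false positives, ~0.03%
print(fp_chia, round(rate_chia, 2))  # ~95 false positives, ~0.08%
```

Both back-calculated rates match the abstract's 0.03% vs. 0.08% comparison.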
Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs
NASA Technical Reports Server (NTRS)
Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen
2015-01-01
An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
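The RANSAC refinement step in this pipeline can be sketched with a deliberately simplified motion model. The code below is illustrative only: it prunes spurious point-cloud matches under a pure 2-D translation hypothesis, whereas the paper works with image features extracted from sonar "pseudo-images" and vehicle-trajectory heuristics:

```python
import random

random.seed(1)

# Minimal RANSAC: hypothesize a translation from one randomly chosen
# match, count matches consistent with it, keep the largest consensus set.
def ransac_translation(matches, iters=200, tol=0.5):
    best_inliers = []
    for _ in range(iters):
        (ax, ay), (bx, by) = random.choice(matches)
        dx, dy = bx - ax, by - ay
        inliers = [m for m in matches
                   if abs((m[1][0] - m[0][0]) - dx) < tol
                   and abs((m[1][1] - m[0][1]) - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Synthetic data: four true correspondences shifted by (4, -2), plus two
# spurious matches (the "false positives" to be rejected).
good = [((x, y), (x + 4.0, y - 2.0)) for x, y in [(0, 0), (1, 3), (5, 2), (7, 7)]]
bad = [((2, 2), (9.0, 9.0)), ((3, 1), (0.0, 5.0))]
inliers = ransac_translation(good + bad)
print(len(inliers))
```

The surviving inlier set is what would then seed an ICP registration in the paper's pipeline.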
Taylor, Darlene; Durigon, Monica; Davis, Heather; Archibald, Chris; Konrad, Bernhard; Coombs, Daniel; Gilbert, Mark; Cook, Darrel; Krajden, Mel; Wong, Tom; Ogilvie, Gina
2015-03-01
Failure to understand the risk of false-negative HIV test results during the window period results in anxiety. Patients typically want accurate test results as soon as possible, while clinicians prefer to wait until the probability of a false negative is virtually nil. This review summarizes the median window periods for third-generation antibody and fourth-generation HIV tests and provides the probability of a false-negative result for various days post-exposure. Data were extracted from published seroconversion panels. A 10-day eclipse period was used to estimate days from infection to first detection of HIV RNA. Median (interquartile range) days to seroconversion were calculated, and probabilities of a false-negative result at various time periods post-exposure are reported. The median (interquartile range) window period for third-generation tests was 22 days (19-25) and 18 days (16-24) for fourth-generation tests. The probability of a false-negative result is 0.01 at 80 days post-exposure for third-generation tests and at 42 days for fourth-generation tests. The table of probabilities of false-negative HIV test results may be useful during pre- and post-test HIV counselling to inform shared decision making regarding the ideal time to test for HIV. © The Author(s) 2014.
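The false-negative probabilities in such a table come directly from the empirical distribution of seroconversion days: at day d, it is the fraction of panel infections not yet detectable. A sketch with made-up seroconversion days (not the review's panel data):

```python
# Hypothetical seroconversion days for a 4th-generation test; the review
# derives the real distribution from published seroconversion panels.
seroconversion_days = [14, 15, 16, 16, 17, 18, 18, 19, 21, 24, 28, 35]

def p_false_negative(day):
    # Fraction of infections that would still test negative on this day.
    return sum(1 for d in seroconversion_days if d > day) / len(seroconversion_days)

print(p_false_negative(18), p_false_negative(42))
```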
Accounting for false-positive acoustic detections of bats using occupancy models
Clement, Matthew J.; Rodhouse, Thomas J.; Ormsbee, Patricia C.; Szewczak, Joseph M.; Nichols, James D.
2014-01-01
4. Synthesis and applications. Our results suggest that false positives sufficient to affect inferences may be common in acoustic surveys for bats. We demonstrate an approach that can estimate occupancy, regardless of the false-positive rate, when acoustic surveys are paired with capture surveys. Applications of this approach include monitoring the spread of White-Nose Syndrome, estimating the impact of climate change and informing conservation listing decisions. We calculate a site-specific probability of occupancy, conditional on survey results, which could inform local permitting decisions, such as for wind energy projects. More generally, the magnitude of false positives suggests that false-positive occupancy models can improve accuracy in research and monitoring of bats and provide wildlife managers with more reliable information.
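The site-specific probability of occupancy conditional on survey results, mentioned above, is a Bayes' rule calculation once the model separates true-positive and false-positive detection probabilities. A minimal sketch with hypothetical parameter values (the model's actual estimates come from paired acoustic and capture surveys):

```python
# psi: occupancy probability; p11: P(detection | occupied);
# p10: P(false-positive detection | unoccupied).
def occupancy_given_detection(psi, p11, p10):
    return psi * p11 / (psi * p11 + (1 - psi) * p10)

# Hypothetical numbers: with a 10% false-positive rate, one acoustic
# detection at a low-prior site leaves substantial uncertainty.
print(round(occupancy_given_detection(0.3, 0.8, 0.1), 3))
```

Note that as p10 goes to zero the posterior goes to 1, which is exactly why ignoring false positives overstates occupancy.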
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardan, R; Popple, R; Dobelbower, M
Purpose: To demonstrate the ability to quickly generate an accurate collision avoidance map using multiple stereotactic cameras during simulation. Methods: Three Kinect stereotactic cameras were placed in the CT simulation room and optically calibrated to the DICOM isocenter. Immediately before scanning, the patient was optically imaged to generate a 3D polygon mesh, which was used to calculate the collision avoidance area using our previously developed framework. The mesh was visually compared to the CT scan body contour to ensure accurate coordinate alignment. To test the accuracy of the collision calculation, the patient and machine were physically maneuvered in the treatment room to the calculated collision boundaries. Results: The optical scan and collision calculation took 38.0 seconds and 2.5 seconds to complete, respectively. The collision prediction accuracy was determined using a receiver operating curve (ROC) analysis, where the true positive, true negative, false positive and false negative values were 837, 821, 43, and 79 points, respectively. The ROC accuracy was 93.1% over the sampled collision space. Conclusion: We have demonstrated a framework which is fast and accurate for predicting collision avoidance for treatment and which can be applied during the normal simulation process. Because of the speed, the system could be used to add a layer of safety with a negligible impact on the normal patient simulation experience. This information could be used during treatment planning to explore the feasible geometries when optimizing plans. Research supported by Varian Medical Systems.
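The reported 93.1% accuracy follows directly from the stated confusion counts, and the same counts give the sensitivity and specificity of the collision predictor:

```python
# Confusion counts reported in the abstract.
tp, tn, fp, fn = 837, 821, 43, 79
total = tp + tn + fp + fn        # 1780 sampled points

accuracy = (tp + tn) / total     # reported as 93.1%
sensitivity = tp / (tp + fn)     # collisions correctly predicted
specificity = tn / (tn + fp)     # clearances correctly predicted
print(round(100 * accuracy, 1), round(100 * sensitivity, 1), round(100 * specificity, 1))
```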
Mu, Wenbo; Lu, Hsiao-Mei; Chen, Jefferey; Li, Shuwei; Elliott, Aaron M
2016-11-01
Next-generation sequencing (NGS) has rapidly replaced Sanger sequencing as the method of choice for diagnostic gene-panel testing. For hereditary-cancer testing, the technical sensitivity and specificity of the assay are paramount as clinicians use results to make important clinical management and treatment decisions. There is significant debate within the diagnostics community regarding the necessity of confirming NGS variant calls by Sanger sequencing, considering that numerous laboratories report having 100% specificity from the NGS data alone. Here we report our results from 20,000 hereditary-cancer NGS panels spanning 47 genes, in which all 7845 nonpolymorphic variants were Sanger-sequenced. Of these, 98.7% were concordant between NGS and Sanger sequencing and 1.3% were identified as NGS false-positives, located mainly in complex genomic regions (A/T-rich regions, G/C-rich regions, homopolymer stretches, and pseudogene regions). Simulating a false-positive rate of zero by adjusting the variant-calling quality-score thresholds decreased the sensitivity of the assay from 100% to 97.8%, resulting in the missed detection of 176 Sanger-confirmed variants, the majority in complex genomic regions (n = 114) and mosaic mutations (n = 7). The data illustrate the importance of setting quality thresholds for panel testing only after thousands of samples have been processed and the necessity of Sanger confirmation of NGS variants to maintain the highest possible sensitivity. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
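The specificity-versus-sensitivity tradeoff in this abstract can be checked with a hedged back-of-envelope reconstruction: 98.7% of the 7845 Sanger-tested calls were true variants, and raising quality thresholds to eliminate all false positives sacrificed 176 of them:

```python
# Hedged reconstruction from the reported summary figures.
calls = 7845
concordant = round(calls * 0.987)   # ~number of Sanger-confirmed variants
missed_at_zero_fp = 176             # lost when forcing zero false positives

sensitivity_zero_fp = (concordant - missed_at_zero_fp) / concordant
print(round(100 * sensitivity_zero_fp, 1))  # close to the reported 97.8%
```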
Jorgensen, James H.; Salinas, Jesse R.; Paxson, Rosemary; Magnon, Karen; Patterson, Jan E.; Patterson, Thomas F.
1999-01-01
The Gen-Probe Amplified Mycobacterium Tuberculosis Direct (MTD) test has been approved for use in the United States for the rapid diagnosis of pulmonary tuberculosis in patients with acid-fast smear-positive sputum samples since 1996. Four patients infected with human immunodeficiency virus and one chronic pulmonary-disease patient seen in our institutions with abnormal chest radiographs and fluorochrome stain-positive sputa were evaluated for tuberculosis, including performance of the MTD test on expectorated sputum samples. Three of these five patients' sputa were highly smear-positive (i.e., more than 100 bacilli per high-power field), while two patients' sputa contained 1 to 10 bacilli per field. MTD results on sputum specimens from these patients ranged from 43,498 to 193,858 relative light units (RLU). Gen-Probe has defined values of at least 30,000 RLU as indicative of a positive test, i.e., the presence of Mycobacterium tuberculosis RNA. Four of the patients' sputum cultures yielded growth of M. kansasii within 6 to 12 days, and the fifth produced growth of M. avium only. One patient's culture contained both M. kansasii and M. avium, but none of the initial or follow-up cultures from these five patients revealed M. tuberculosis. However, subsequent cultures from three of the patients again revealed M. kansasii. During the period of this study, in which MTD tests were performed on smear-positive sputum specimens from 82 patients, four of seven patients with culture-proven M. kansasii pulmonary infections yielded one or more false-positive MTD tests. The MTD sensitivity observed in this study was 93.8%, and the specificity was 85.3%. Five cultures of M. kansasii (including three of these patients' isolates and M. kansasii ATCC 12478) and cultures of several other species were examined at densities of 10^5 to 10^7 viable CFU/ml by the MTD test. All five isolates of M. kansasii and three of three isolates of M. simiae yielded false-positive test results, with readings of 75,191 to 335,591 RLU. These findings indicate that low-level false-positive MTD results can occur due to the presence of M. kansasii, M. avium, and possibly other Mycobacterium species other than M. tuberculosis in sputum. Low-level positive MTD results of 30,000 to 500,000 RLU should be interpreted in light of these findings. It remains to be determined whether the enhanced MTD test (MTD 2) recently released by Gen-Probe will provide greater specificity than that observed in this report with its first-generation test. PMID:9854086
Analysis of false results in a series of 835 fine needle aspirates of breast lesions.
Willis, S L; Ramzy, I
1995-01-01
To analyze cases of false diagnoses from a large series to help increase the accuracy of fine needle aspiration of palpable breast lesions. The results of FNA of 835 palpable breast lesions were analyzed to determine the reasons for false positive, false negative and false suspicious diagnoses. Of the 835 aspirates, 174 were reported as positive, 549 as negative and 66 as suspicious or atypical but not diagnostic of malignancy. Forty-six cases were considered unsatisfactory. Tissue was available for comparison in 286 cases. The cytologic diagnoses in these cases were reported as follows: positive, 125 (43.7%); suspicious, 33 (11.5%); atypical, 18 (6.2%); negative, 92 (32%); and unsatisfactory, 18 (6.2%). There was one false positive diagnosis, yielding a false positive rate of 0.8%. This lesion was a case of fibrocystic change with hyperplasia, focal fat necrosis and reparative atypia. There were 14 false negative cases, resulting in a false negative rate of 13.2%. Nearly all these cases were sampling errors and included infiltrating ductal carcinomas (9), ductal carcinomas in situ (2), infiltrating lobular carcinomas (2) and tubular carcinoma (1). Most of the suspicious and atypical lesions proved to be carcinomas (35/50). The remainder were fibroadenomas (6), fibrocystic change (4), gynecomastia (2), adenosis (2) and granulomatous mastitis (1). A positive diagnosis of malignancy by FNA is reliable in establishing the diagnosis and planning the treatment of breast cancer. The false-positive rate is very low, with only a single case reported in 835 aspirates. Most false negatives are due to sampling and not to interpretive difficulties. The category "suspicious but not diagnostic of malignancy" serves a useful purpose in management of patients with breast lumps.
Hernández-Bou, S; Trenchs Sainz de la Maza, V; Esquivel Ojeda, J N; Gené Giralt, A; Luaces Cubells, C
2015-06-01
The aim of this study was to identify predictive factors of bacterial contamination in positive blood cultures (BC) collected in an emergency department. A prospective, observational and analytical study was conducted on febrile children aged one to 36 months who had no risk factors for bacterial infection and had a BC collected in the Emergency Department between November 2011 and October 2013 in which bacterial growth was detected. The potential BC contamination predicting factors analysed were: maximum temperature, time to positivity, initial Gram stain result, white blood cell count, absolute neutrophil count, band count, and C-reactive protein (CRP). Bacteria grew in 169 BC. Thirty (17.8%) were finally considered true positives and 139 (82.2%) false positives. All potential BC contamination predicting factors analysed, except maximum temperature, showed significant differences between true positives and false positives. CRP value, time to positivity, and initial Gram stain result are the best predictors of false positives in BC. The positive predictive values of a CRP value ≤30 mg/L, a BC time to positivity ≥16 h, and an initial Gram stain suggestive of a contaminant in predicting an FP are 95.1%, 96.9% and 97.5%, respectively. When all 3 conditions are applied, their positive predictive value is 100%. Four (8.3%) patients with a false-positive BC who had been discharged home were re-evaluated in the Emergency Department. Most of the positive BC obtained in the Emergency Department were finally considered false positives. Initial Gram stain, time to positivity, and CRP results are valuable diagnostic tests in distinguishing between true positives and false positives in BC. The early detection of false positives will allow their negative consequences to be minimised. Copyright © 2014 Asociación Española de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.
Slater, Robert A; Koren, Shlomit; Ramot, Yoram; Buchs, Andreas; Rapoport, Micha J
2014-01-01
The Semmes-Weinstein monofilament is the most widely used test to diagnose the loss of protective sensation (LOPS). The commonly used protocol of the International Consensus on the Diabetic Foot includes a 'sham' application that allows for false-positive answers. We sought to study the heretofore unexamined significance of false-positive answers. Forty-five patients with diabetes and a history of pedal ulceration (Group I) and 81 patients with diabetes but no history of ulceration (Group II) were studied. The three original sites of the International Consensus on the Diabetic Foot at the hallux, 1st metatarsal and 5th metatarsal areas were used. At each location, the test was performed three times: two actual applications and one "sham" application. Scores were graded from 0 to 3 based upon correct responses. Determination of LOPS was performed with and without counting a false-positive answer as a minus-1 score. False-positive responses were found in a significant percentage of patients with and without a history of ulceration. Introducing false-positive results as minus 1 into the test outcome significantly increased the number of patients diagnosed with LOPS in both groups. False-positive answers can significantly affect Semmes-Weinstein monofilament test results and the diagnosis of LOPS. A model that accounts for false-positive answers is offered. Copyright © 2013 John Wiley & Sons, Ltd.
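The scoring model described in the abstract can be sketched as a small function. The details below (how the penalty interacts with the 0-3 score) are a plausible reading of the protocol, not the authors' published scoring rules:

```python
# Per-site score: correct responses over the three applications
# (2 real + 1 sham) give 0-3; optionally, answering "yes" to the sham
# (a false positive) subtracts 1, as the abstract's model proposes.
def site_score(correct_responses, sham_response, penalize_sham=True):
    score = correct_responses
    if penalize_sham and sham_response:
        score -= 1
    return score

# A patient who says "yes" to everything: 2 correct real touches, one
# sham "hit". Penalizing the sham drops the score from 2 to 1.
print(site_score(2, True), site_score(2, True, penalize_sham=False))
```

Lower scores push more patients over the LOPS diagnostic threshold, which is the effect the study reports.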
SME filter approach to multiple target tracking with false and missing measurements
NASA Astrophysics Data System (ADS)
Lee, Yong J.; Kamen, Edward W.
1993-10-01
The symmetric measurement equation (SME) filter for track maintenance in multiple target tracking is extended to the general case when there are an arbitrary unknown number of false and missing position measurements in the measurement set at any time point. It is assumed that the number N of targets is known a priori and that the target motions consist of random perturbations of constant-velocity trajectories. The key idea in the paper is to generate a new measurement vector from sums-of-products of the elements of 'feasible' N-element data vectors that pass a thresholding operation in the sums-of-products framework. Via this construction, the data association problem is completely avoided, and in addition, there is no need to identify which target measurements may correspond to false returns or which target measurements may be missing. A computer simulation of SME filter performance is given, including a comparison with the associated filter (a benchmark) and the joint probabilistic data association (JPDA) filter.
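The sums-of-products construction at the heart of the SME approach can be made concrete with elementary symmetric polynomials, one standard choice of symmetric functions (the paper's exact construction may differ in detail):

```python
from itertools import combinations
from math import prod

# Elementary symmetric polynomials e_1..e_N of the position
# measurements y_1..y_N: e_k is the sum of all k-fold products.
def symmetric_measurements(y):
    n = len(y)
    return [sum(prod(c) for c in combinations(y, k)) for k in range(1, n + 1)]

# The transformed vector is invariant to any permutation (relabeling)
# of the measurements, so no measurement-to-track association is needed.
y = [3.0, -1.0, 2.5]
print(symmetric_measurements(y))
```

Because the filter observes only these permutation-invariant quantities, the data-association problem never arises, which is exactly the property the abstract highlights.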
A false single nucleotide polymorphism generated by gene duplication compromises meat traceability.
Sanz, Arianne; Ordovás, Laura; Zaragoza, Pilar; Sanz, Albina; de Blas, Ignacio; Rodellar, Clementina
2012-07-01
Controlling meat traceability using SNPs is an effective method of ensuring food safety. We have analyzed several SNPs to create a panel for bovine genetic identification and traceability studies. One of these was the transversion g.329C>T (Genbank accession no. AJ496781) on the cytochrome P450 17A1 gene, which has been included in previously published panels. Using minisequencing reactions, we have tested 701 samples belonging to eight Spanish cattle breeds. Surprisingly, an excess of heterozygotes was detected, implying an extreme departure from Hardy-Weinberg equilibrium (P<0.001). By alignment analysis and sequencing, we detected that the g.329C>T SNP is a false positive polymorphism, which allows us to explain the inflated heterozygotic value. We recommend that this ambiguous SNP, as well as other polymorphisms located in this region, should not be used in identification, traceability or disease association studies. Annotation of these false SNPs should improve association studies and avoid misinterpretations. Copyright © 2012 Elsevier Ltd. All rights reserved.
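The heterozygote-excess signature of a duplicated locus is exactly what a Hardy-Weinberg equilibrium test flags. A sketch of the chi-square test (1 degree of freedom) with a hypothetical genotype split of the study's 701 samples, not the published counts:

```python
from math import erfc, sqrt

# Chi-square HWE test: compare observed genotype counts with the
# p^2 : 2pq : q^2 expectation from the estimated allele frequency.
def hwe_chi2(n_AA, n_AB, n_BB):
    n = n_AA + n_AB + n_BB
    p = (2 * n_AA + n_AB) / (2 * n)      # frequency of allele A
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_AA, n_AB, n_BB]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = erfc(sqrt(chi2 / 2))       # chi-square survival, 1 d.f.
    return chi2, p_value

# A duplicated gene makes nearly every sample look heterozygous
# (hypothetical split of 701 samples):
chi2, p_value = hwe_chi2(5, 690, 6)
print(round(chi2, 1), p_value < 0.001)
```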
Automatic detection of lung vessel bifurcation in thoracic CT images
NASA Astrophysics Data System (ADS)
Maduskar, Pragnya; Vikal, Siddharth; Devarakota, Pandu
2011-03-01
Computer-aided diagnosis (CAD) systems for detection of lung nodules have been an active topic of research for the last few years. It is desirable that a CAD system generate very few false positives (FPs) while maintaining high sensitivity. This work aims to reduce the number of false positives occurring at vessel bifurcation points. FPs occur quite frequently at vessel branching points because the intrinsic geometry of intersecting tubular vessel structures, combined with partial volume effects and a soft-tissue attenuation appearance surrounded by parenchyma, can make their shape appear locally spherical. We propose a model-based technique for detection of vessel branching points using skeletonization, followed by branch-point analysis. First we perform vessel structure enhancement using a multi-scale Hessian filter to accurately segment tubular structures of various sizes, followed by thresholding to obtain a binary vessel structure segmentation [6]. A modified Reeb graph [7] is applied next to extract the critical points of the structure, and these are joined by a nearest-neighbor criterion to obtain a complete skeletal model of the vessel structure. Finally, the skeletal model is traversed to identify branch points and extract metrics including individual branch length, number of branches, and angle between various branches. Results on 80 sub-volumes, consisting of 60 actual vessel branchings and 20 solitary solid nodules, show that the algorithm correctly identified vessel branching points for 57 sub-volumes (95% sensitivity) and misclassified 2 nodules as vessel branches. Thus, this technique has potential in explicit identification of vessel branching points for general vessel analysis, and could be useful in false-positive reduction in a lung CAD system.
Kissling, Grace E; Haseman, Joseph K; Zeiger, Errol
2015-09-02
A recent article by Gaus (2014) demonstrates a serious misunderstanding of the NTP's statistical analysis and interpretation of rodent carcinogenicity data as reported in Technical Report 578 (Ginkgo biloba) (NTP, 2013), as well as a failure to acknowledge the abundant literature on false positive rates in rodent carcinogenicity studies. The NTP reported Ginkgo biloba extract to be carcinogenic in mice and rats. Gaus claims that, in this study, 4800 statistical comparisons were possible, and that 209 of them were statistically significant (p<0.05) compared with 240 (4800×0.05) expected by chance alone; thus, the carcinogenicity of Ginkgo biloba extract cannot be definitively established. However, his assumptions and calculations are flawed since he incorrectly assumes that the NTP uses no correction for multiple comparisons, and that significance tests for discrete data operate at exactly the nominal level. He also misrepresents the NTP's decision making process, overstates the number of statistical comparisons made, and ignores the fact that the mouse liver tumor effects were so striking (e.g., p<0.0000000000001) that it is virtually impossible that they could be false positive outcomes. Gaus' conclusion that such obvious responses merely "generate a hypothesis" rather than demonstrate a real carcinogenic effect has no scientific credibility. Moreover, his claims regarding the high frequency of false positive outcomes in carcinogenicity studies are misleading because of his methodological misconceptions and errors. Published by Elsevier Ireland Ltd.
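The chance-expectation arithmetic at the center of this dispute is easy to reproduce. The sketch below is illustrative only (it is not the NTP's actual decision procedure, which applies corrections for multiple comparisons); it computes the number of nominally significant tests expected under a global null, plus an exact binomial tail probability evaluated in log space:

```python
from math import exp, lgamma, log

def expected_false_positives(n_tests: int, alpha: float) -> float:
    """Expected number of p < alpha results if every null hypothesis is true."""
    return n_tests * alpha

def binom_sf(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p), summed in log space to stay stable."""
    def log_pmf(i: int) -> float:
        return (lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
                + i * log(p) + (n - i) * log(1 - p))
    return sum(exp(log_pmf(i)) for i in range(k, n + 1))

# Gaus' figures: 4800 comparisons at alpha = 0.05 -> 240 expected by chance
print(expected_false_positives(4800, 0.05))  # 240.0
# Observing only 209 nominally significant results is below chance expectation:
print(binom_sf(209, 4800, 0.05) > 0.5)       # True
```

This makes the point of the rebuttal concrete: 209 significant results out of 4800 is fewer than chance alone would produce under the (incorrect) assumption of uncorrected nominal-level tests, so the argument hinges on those assumptions rather than on the striking individual effects.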
Hwang, Kyu-Baek; Lee, In-Hee; Park, Jin-Ho; Hambuch, Tina; Choe, Yongjoon; Kim, MinHyeok; Lee, Kyungjoon; Song, Taemin; Neu, Matthew B; Gupta, Neha; Kohane, Isaac S; Green, Robert C; Kong, Sek Won
2014-08-01
As whole genome sequencing (WGS) uncovers variants associated with rare and common diseases, an immediate challenge is to minimize false-positive findings due to sequencing and variant calling errors. False positives can be reduced by combining results from orthogonal sequencing methods, but this is costly. Here, we present variant filtering approaches using logistic regression (LR) and ensemble genotyping to minimize false positives without sacrificing sensitivity. We evaluated the methods using paired WGS datasets of an extended family prepared using two sequencing platforms and a validated set of variants in NA12878. Using LR- or ensemble-genotyping-based filtering, false-negative rates were significantly reduced by 1.1- to 17.8-fold at the same levels of false discovery rates (5.4% for heterozygous and 4.5% for homozygous single nucleotide variants (SNVs); 30.0% for heterozygous and 18.7% for homozygous insertions; 25.2% for heterozygous and 16.6% for homozygous deletions) compared to filtering based on genotype quality scores. Moreover, ensemble genotyping excluded > 98% (105,080 of 107,167) of false positives while retaining > 95% (897 of 937) of true positives in de novo mutation (DNM) discovery in NA12878, and performed better than a consensus method using two sequencing platforms. Our proposed methods were effective in prioritizing phenotype-associated variants, and ensemble genotyping would be essential to minimize false-positive DNM candidates. © 2014 WILEY PERIODICALS, INC.
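The ensemble-genotyping idea above, accepting a variant call only when enough independent genotype calls agree, can be illustrated with a minimal consensus vote. The function name, genotype encoding, and support threshold are illustrative, not the authors' implementation:

```python
from collections import Counter

def ensemble_genotype(calls, min_support=3):
    """Return the majority genotype across callers, or None when no genotype
    reaches the required support (treating the call as a likely false positive)."""
    counts = Counter(c for c in calls if c is not None)  # ignore no-calls
    if not counts:
        return None
    genotype, support = counts.most_common(1)[0]
    return genotype if support >= min_support else None

# Concordant callers keep the variant; discordant ones filter it out.
print(ensemble_genotype(["0/1", "0/1", "0/1", "0/0", None]))  # 0/1
print(ensemble_genotype(["0/1", "0/0", "1/1", None, None]))   # None
```

A real pipeline would vote per site across caller/platform combinations and combine the vote with quality features (as the LR filter does), but the false-positive reduction mechanism is this agreement requirement.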
The role of rehearsal and generation in false memory creation.
Marsh, Elizabeth J; Bower, Gordon H
2004-11-01
The current research investigated one possible mechanism underlying false memories in the Deese-Roediger-McDermott (DRM) paradigm. In the DRM paradigm, participants who study lists of related words (e.g., "table, sitting, bench ...") frequently report detailed memories for the centrally related but non-presented critical lure (e.g., "chair"). One possibility is that participants covertly call to mind the critical non-presented lure during the study phase, and later misattribute memory for this internally generated event to its external presentation. To investigate this, the DRM paradigm was modified to allow collection of on-line thoughts during the study phase. False recognition increased following generation during study. False recognition also increased following study of longer lists; this effect was partially explained by the fact that longer lists were more likely to elicit generations of the critical lure during study. Generation of the lure during study contributes to later false recognition, although it does not explain the entire effect.
Determination of cyclic volatile methylsiloxanes in personal care products by gas chromatography.
Brothers, H M; Boehmer, T; Campbell, R A; Dorn, S; Kerbleski, J J; Lewis, S; Mund, C; Pero, D; Saito, K; Wieser, M; Zoller, W
2017-12-01
Organosiloxanes are prevalent in personal care products (PCPs) due to the desired properties they impart in the usage and application of such products. However, the European Chemicals Agency (ECHA) has recently published restriction proposals on the amount of two cyclic siloxanes, octamethylcyclotetrasiloxane (D4) and decamethylcyclopentasiloxane (D5), allowed in wash-off products such as shampoos and conditioners, which are discharged down the drain during consumer use. This legislation will require that reliable analytical methods be available for manufacturers and government agencies to use in documenting compliance with the restrictions. This article proposes a simple analytical method to enable accurate measurement of these compounds down to the circa 0.1 weight per cent level in PCPs. Although gas chromatography methods are reported in the literature for quantitation of D4 and D5 in several matrices including PCPs, the potential for generation of false positives due to contamination, co-elution and in situ generation of cyclic volatile methylsiloxanes (cVMS) is always present and needs to be controlled. This report demonstrates the applicability of using a combination of emulsion break, liquid-liquid extraction and silylation sample preparation followed by GC-FID analysis as a suitable means of analysing PCPs for specific cVMS. The reliability and limitations of such methodology were demonstrated through several round-robin studies conducted in the laboratories of a consortium of silicone manufacturers. In addition, this report presents examples of false positives encountered during development of the method and presents a comparative analysis between this method and a published QuEChERS sample preparation procedure to illustrate the potential for generation of false positives when an inappropriate approach is applied to determination of cVMS in personal care products.
This report demonstrates that an approach to determine cVMS levels in personal care products is to perform an emulsion break on the sample, isolate the non-polar phase from the emulsion break and treat with a silylation reagent to abate potential in situ formation of cyclics during the course of GC-FID analysis. Round-robin studies conducted in laboratories representing multiple siloxane manufacturers demonstrated the reliability of the GC-FID method when measuring cVMS in PCPs down to circa 0.1%. © 2017 CES - Silicones Europe. International Journal of Cosmetic Science published by John Wiley & Sons Ltd on behalf of Society of Cosmetic Scientists and the Société Française de Cosmétologie.
X-Ray Phantom Development For Observer Performance Studies
NASA Astrophysics Data System (ADS)
Kelsey, C. A.; Moseley, R. D.; Mettler, F. A.; Parker, T. W.
1981-07-01
The requirements for radiographic imaging phantoms for observer performance testing include realistic tasks which mimic at least some portion of the diagnostic examination presented in a setting which approximates clinically derived images. This study describes efforts to simulate chest and vascular diseases for evaluation of conventional and digital radiographic systems. Images of lung nodules, pulmonary infiltrates, as well as hilar and mediastinal masses are generated with a conventional chest phantom to make up chest disease test series. Vascular images are simulated by hollow tubes embedded in tissue density plastic with widening and narrowing added to mimic aneurysms and stenoses. Both sets of phantoms produce images which allow simultaneous determination of true positive and false positive rates as well as complete ROC curves.
The problem of false positives and false negatives in violent video game experiments.
Ferguson, Christopher J
The problem of false positives and negatives has received considerable attention in behavioral research in recent years. The current paper uses video game violence research as an example of how such issues may develop in a field. Despite decades of research, evidence on whether violent video games (VVGs) contribute to aggression in players has remained mixed. Concerns have been raised in recent years that experiments regarding VVGs may suffer from both "false positives" and "false negatives." The current paper examines this issue in three sets of video game experiments: two sets of video game experiments on aggression and prosocial behaviors identified in meta-analysis, and a third group of recent null studies. Results indicated that studies of VVGs and aggression appear to be particularly prone to false positive results. Studies of VVGs and prosocial behavior, by contrast, are heterogeneous and did not demonstrate any indication of false positive results. However, their heterogeneous nature made it difficult to base solid conclusions on them. By contrast, evidence for false negatives in null studies was limited, and little evidence emerged that null studies lacked power in comparison to those highlighted in past meta-analyses as evidence for effects. These results are considered in light of issues related to false positives and negatives in behavioral science more broadly. Copyright © 2017 Elsevier Ltd. All rights reserved.
Accounting for False Positive HIV Tests: Is Visceral Leishmaniasis Responsible?
Shanks, Leslie; Ritmeijer, Koert; Piriou, Erwan; Siddiqui, M. Ruby; Kliescikova, Jarmila; Pearce, Neil; Ariti, Cono; Muluneh, Libsework; Masiga, Johnson; Abebe, Almaz
2015-01-01
Background: Co-infection with HIV and visceral leishmaniasis is an important consideration in treatment of either disease in endemic areas. Diagnosis of HIV in resource-limited settings relies on rapid diagnostic tests used together in an algorithm. A limitation of the HIV diagnostic algorithm is that it is vulnerable to falsely positive reactions due to cross-reactivity. It has been postulated that visceral leishmaniasis (VL) infection can increase this risk of false positive HIV results. This cross-sectional study compared the risk of false positive HIV results in VL patients with non-VL individuals. Methodology/Principal Findings: Participants were recruited from 2 sites in Ethiopia. The Ethiopian algorithm of a tiebreaker using 3 rapid diagnostic tests (RDTs) was used to test for HIV. The gold standard test was the Western Blot, with indeterminate results resolved by PCR testing. Every RDT screen-positive individual was included for testing with the gold standard along with 10% of all negatives. The final analysis included 89 VL and 405 non-VL patients. HIV prevalence was found to be 12.8% (47/367) in the VL group compared to 7.9% (200/2526) in the non-VL group. The RDT algorithm in the VL group yielded 47 positives, 4 false positives, and 38 negatives. The same algorithm for those without VL had 200 positives, 14 false positives, and 191 negatives. Specificity and positive predictive value for the group with VL were less than for the non-VL group; however, the difference was not found to be significant (p = 0.52 and p = 0.76, respectively). Conclusion: The test algorithm yielded a high number of HIV false positive results. However, we were unable to demonstrate a significant difference between groups with and without VL disease. This suggests that the presence of endemic visceral leishmaniasis alone cannot account for the high number of false positive HIV results in our study. PMID:26161864
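The tiebreaker logic and the predictive-value arithmetic described above can be sketched as follows. This is a simplified illustration, not the exact Ethiopian national algorithm, and the function names are hypothetical:

```python
def rdt_algorithm(rdt1, rdt2=None, rdt3=None):
    """Serial HIV rapid-test algorithm with a third RDT as tiebreaker (sketch)."""
    if rdt1 == "neg":
        return "negative"            # screen-negative: stop testing
    if rdt2 == "pos":
        return "positive"            # two concordant positives
    return "positive" if rdt3 == "pos" else "negative"  # tiebreaker decides

def positive_predictive_value(true_pos, false_pos):
    return true_pos / (true_pos + false_pos)

print(rdt_algorithm("pos", "neg", "pos"))           # positive
# VL group: 47 algorithm positives, of which 4 were false positives
print(round(positive_predictive_value(43, 4), 3))   # 0.915
```

With the counts reported for the VL group (43 true and 4 false positives among 47 algorithm positives), the PPV is about 91.5%, which is the quantity the study compares against the non-VL group.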
Bruijn, Merel M C; Hermans, Frederik J R; Vis, Jolande Y; Wilms, Femke F; Oudijk, Martijn A; Kwee, Anneke; Porath, Martina M; Oei, Guid; Scheepers, Hubertina C J; Spaanderman, Marc E A; Bloemenkamp, Kitty W M; Haak, Monique C; Bolte, Antoinette C; Vandenbussche, Frank P H A; Woiski, Mallory D; Bax, Caroline J; Cornette, Jérôme M J; Duvekot, Johannes J; Bijvank, Bas W A N I J; van Eyck, Jim; Franssen, Maureen T M; Sollie, Krystyna M; van der Post, Joris A M; Bossuyt, Patrick M M; Kok, Marjolein; Mol, Ben W J; van Baaren, Gert-Jan
2017-02-01
Objective: We assessed the influence of external factors on false-positive, false-negative, and invalid fibronectin results in the prediction of spontaneous delivery within 7 days. Methods: We studied symptomatic women between 24 and 34 weeks' gestational age. We performed uni- and multivariable logistic regression to estimate the effect of external factors (vaginal soap, digital examination, transvaginal sonography, sexual intercourse, vaginal bleeding) on the risk of false-positive, false-negative, and invalid results, using spontaneous delivery within 7 days as the outcome. Results: Of 708 women, 237 (33%) had a false-positive result; none of the factors showed a significant association. Vaginal bleeding increased the proportion of positive fetal fibronectin (fFN) results, but was significantly associated with a lower risk of false-positive test results (odds ratio [OR], 0.22; 95% confidence interval [CI], 0.12-0.39). Ten women (1%) had a false-negative result. None of the investigated factors was associated with a significantly higher risk of false-negative results. Twenty-one tests (3%) were invalid; only vaginal bleeding showed a significant association (OR, 4.5; 95% CI, 1.7-12). Conclusion: The effect of external factors on the performance of qualitative fFN testing is limited, with vaginal bleeding as the only factor that reduces its validity. Thieme Medical Publishers, New York, NY.
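The odds ratios and confidence intervals reported above come from 2×2 cross-tabulations of each external factor against the test outcome. A minimal sketch of the standard log-OR (Woolf) interval follows; the counts are purely illustrative, since the abstract does not report the per-factor tables:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with approximate 95% CI (Woolf/log method) for a 2x2 table:
         a = exposed with outcome,    b = exposed without outcome,
         c = unexposed with outcome,  d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for one factor vs. false-positive results
or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 2.25 0.99 5.09
```

An interval that straddles 1 (as in this toy example) corresponds to "no significant association" in the abstract's terms; the reported OR of 0.22 (CI 0.12-0.39) for vaginal bleeding excludes 1 and is therefore significant.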
Spine detection in CT and MR using iterated marginal space learning.
Michael Kelm, B; Wels, Michael; Kevin Zhou, S; Seifert, Sascha; Suehling, Michael; Zheng, Yefeng; Comaniciu, Dorin
2013-12-01
Examinations of the spinal column with both Magnetic Resonance (MR) imaging and Computed Tomography (CT) often require a precise three-dimensional positioning, angulation and labeling of the spinal disks and the vertebrae. A fully automatic and robust approach is a prerequisite for automated scan alignment as well as for the segmentation and analysis of spinal disks and vertebral bodies in Computer Aided Diagnosis (CAD) applications. In this article, we present a novel method that combines Marginal Space Learning (MSL), a recently introduced concept for efficient discriminative object detection, with a generative anatomical network that incorporates relative pose information for the detection of multiple objects. It is used to simultaneously detect and label the spinal disks. While a novel iterative version of MSL is used to quickly generate candidate detections comprising position, orientation, and scale of the disks with high sensitivity, the anatomical network selects the most likely candidates using a learned prior on the individual nine-dimensional transformation spaces. Finally, we propose an optional case-adaptive segmentation approach that allows segmentation of the spinal disks and vertebrae in MR and CT, respectively. Since the proposed approaches are learning-based, they can be trained for MR or CT alike. Experimental results based on 42 MR and 30 CT volumes show that our system not only achieves superior accuracy but also is among the fastest systems of its kind in the literature. On the MR data set the spinal disks of a whole spine are detected in 11.5 s on average with 98.6% sensitivity and 0.073 false positive detections per volume. On the CT data a comparable sensitivity of 98.0% with 0.267 false positives is achieved. Detected disks are localized with an average position error of 2.4 mm/3.2 mm and angular error of 3.9°/4.5° in MR/CT, which is close to the employed hypothesis resolution of 2.1 mm and 3.3°. Copyright © 2012 Elsevier B.V. All rights reserved.
Ikeda, Kei; Narita, Akihiro; Ogasawara, Michihiro; Ohno, Shigeru; Kawahito, Yutaka; Kawakami, Atsushi; Ito, Hiromu; Matsushita, Isao; Suzuki, Takeshi; Misaki, Kenta; Ogura, Takehisa; Kamishima, Tamotsu; Seto, Yohei; Nakahara, Ryuichi; Kaneko, Atsushi; Nakamura, Takayuki; Henmi, Mihoko; Fukae, Jun; Nishida, Keiichiro; Sumida, Takayuki; Koike, Takao
2016-01-01
We aimed to identify causes of false-positives in ultrasound scanning of synovial/tenosynovial/bursal inflammation and to provide corresponding imaging examples. We first performed a systematic literature review to identify previously reported causes of false-positives. We next determined causes of false-positives and corresponding example images for educational material through Delphi exercises and discussion by 15 experts, each an instructor and/or lecturer in the 2013 advanced course for musculoskeletal ultrasound organized by the Japan College of Rheumatology Committee for the Standardization of Musculoskeletal Ultrasonography. The systematic literature review identified 11 articles relevant to sonographic false-positives of synovial/tenosynovial inflammation. Based on these studies, 21 candidate causes of false-positives were identified in the consensus meeting. Of these items, 11 achieved a predefined consensus (≥ 80%) in the Delphi exercise and were classified as follows: (I) Gray-scale assessment [(A) non-specific synovial findings and (B) normal anatomical structures which can mimic synovial lesions due to either their low echogenicity or anisotropy]; (II) Doppler assessment [(A) intra-articular normal vessels and (B) reverberation]. Twenty-four corresponding examples with 49 still and 23 video images also achieved consensus. Our study provides a set of representative images that can help sonographers to understand false-positives in ultrasound scanning of synovitis and tenosynovitis.
Skin irritation, false positives and the local lymph node assay: a guideline issue?
Basketter, David A; Kimber, Ian
2011-10-01
Since the formal validation and regulatory acceptance of the local lymph node assay (LLNA) there have been commentaries suggesting that the irritant properties of substances can give rise to false positives. As toxicology aspires to progress rapidly towards the age of in vitro alternatives, it is of increasing importance that issues relating to assay selectivity and performance are understood fully, and that true false positive responses are distinguished clearly from those that are simply unpalatable. In the present review, we have focused on whether skin irritation per se is actually a direct cause of true false positive results in the LLNA. The body of published work has been examined critically and considered in relation to our current understanding of the mechanisms of skin irritation and skin sensitisation. From these analyses it is very clear that, of itself, skin irritation is not a cause of false positive results. The corollary is, therefore, that limiting test concentrations in the LLNA for the purpose of avoiding skin irritation may lead, unintentionally, to false negatives. Where a substance is a true false positive in the LLNA, the classic example being sodium lauryl sulphate, explanations for that positivity will have to reach beyond the seductive, but incorrect, recourse to its skin irritation potential. Copyright © 2011 Elsevier Inc. All rights reserved.
Keller, Karsten; Stelzer, Kathrin; Munzel, Thomas; Ostad, Mir Abolfazl
2016-12-01
Exercise echocardiography is a reliable routine test in patients with known or suspected coronary artery disease. However, in ∼15% of all patients, stress echocardiography yields false-positive results. We aimed to investigate the impact of hypertension on stress echocardiographic results. We performed a retrospective study of patients with suspected or known stable coronary artery disease who underwent bicycle exercise stress echocardiography. Patients with false-positive stress results were compared with those with appropriate results. 126 patients with suspected or known coronary artery disease were included in this retrospective study; 23 patients showed false-positive stress echocardiography results. Beside comparable age, gender distribution and coronary artery status, hypertension was more prevalent in patients with false-positive stress results (95.7% vs. 67.0%, p = 0.0410). Peak exercise load showed borderline significance, with lower loads in patients with false-positive results (100.0 (IQR 75.0/137.5) vs. 125.0 (100.0/150.0) W, p = 0.0601). Patients with false-positive stress results showed higher systolic (2.05 ± 0.69 vs. 1.67 ± 0.39 mmHg/W, p = 0.0193) and diastolic (1.03 ± 0.38 vs. 0.80 ± 0.28 mmHg/W, p = 0.0165) peak blood pressure (BP) per wattage. In a multivariate logistic regression test, hypertension (OR 17.6 [95% CI 1.9-162.2], p = 0.0115), systolic (OR 4.12 [1.56-10.89], p = 0.00430) and diastolic (OR 13.74 [2.46-76.83], p = 0.00285) peak BP per wattage were associated with false-positive exercise results. ROC analysis for systolic and diastolic peak BP levels per wattage showed optimal cut-off values of 1.935 mmHg/W and 0.823 mmHg/W, indicating false-positive exercise echocardiographic results with AUCs of 0.660 and 0.664, respectively. Hypertension is a risk factor for false-positive stress exercise echocardiographic results in patients with known or suspected coronary artery disease.
Presence of hypertension was associated with 17.6-fold elevated risk of false-positive results.
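The ROC cutoffs reported in this abstract translate directly into a screening rule. The sketch below applies them; the function names and example patient values are illustrative, not from the study:

```python
SYS_CUTOFF = 1.935   # mmHg/W, systolic cutoff from the study's ROC analysis
DIA_CUTOFF = 0.823   # mmHg/W, diastolic cutoff

def bp_per_watt(peak_bp_mmHg, peak_load_w):
    """Peak blood pressure normalized by peak exercise load."""
    return peak_bp_mmHg / peak_load_w

def at_risk_of_false_positive(sys_bp, dia_bp, load_w):
    """Flag patients whose peak BP response per wattage exceeds either cutoff,
    i.e. those more likely to show a false-positive stress echo result."""
    return (bp_per_watt(sys_bp, load_w) > SYS_CUTOFF
            or bp_per_watt(dia_bp, load_w) > DIA_CUTOFF)

print(at_risk_of_false_positive(sys_bp=210, dia_bp=85, load_w=100))  # True
print(at_risk_of_false_positive(sys_bp=170, dia_bp=80, load_w=125))  # False
```

Note the modest AUCs (0.660 and 0.664) mean such a rule would misclassify many patients; it illustrates the direction of the association rather than a clinically validated classifier.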
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Weili; Kim, Joshua P.; Kadbi, Mo
2015-11-01
Purpose: To incorporate a novel imaging sequence for robust air and tissue segmentation using ultrashort echo time (UTE) phase images and to implement an innovative synthetic CT (synCT) solution as a first step toward MR-only radiation therapy treatment planning for brain cancer. Methods and Materials: Ten brain cancer patients were scanned with a UTE/Dixon sequence and other clinical sequences on a 1.0 T open magnet with simulation capabilities. Bone-enhanced images were generated from a weighted combination of water/fat maps derived from Dixon images and inverted UTE images. Automated air segmentation was performed using unwrapped UTE phase maps. Segmentation accuracy was assessed by calculating segmentation errors (true-positive rate, false-positive rate, and Dice similarity indices) using CT simulation (CT-SIM) as ground truth. The synCTs were generated using a voxel-based, weighted summation method incorporating T2, fluid attenuated inversion recovery (FLAIR), UTE1, and bone-enhanced images. Mean absolute error (MAE) characterized Hounsfield unit (HU) differences between synCT and CT-SIM. A dosimetry study was conducted, and differences were quantified using γ-analysis and dose-volume histogram analysis. Results: On average, true-positive rate and false-positive rate for the CT- and MR-derived air masks were 80.8% ± 5.5% and 25.7% ± 6.9%, respectively. Dice similarity index values were 0.78 ± 0.04 (range, 0.70-0.83). Full field of view MAE between synCT and CT-SIM was 147.5 ± 8.3 HU (range, 138.3-166.2 HU), with the largest errors occurring at bone-air interfaces (MAE 422.5 ± 33.4 HU for bone and 294.53 ± 90.56 HU for air). Gamma analysis revealed pass rates of 99.4% ± 0.04%, with acceptable treatment plan quality for the cohort. Conclusions: A hybrid MRI phase/magnitude UTE image processing technique was introduced that significantly improved bone and air contrast in MRI. Segmented air masks and bone-enhanced images were integrated into our synCT pipeline for brain, and results agreed well with clinical CTs, thereby supporting MR-only radiation therapy treatment planning in the brain.
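The two headline segmentation/intensity metrics in this abstract, the Dice similarity index and the mean absolute error (MAE) in Hounsfield units, are straightforward to compute. A minimal sketch on toy voxel data (not the authors' pipeline) follows:

```python
def dice(a, b):
    """Dice similarity coefficient between two sets of voxel indices:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def mean_absolute_error(pred, truth):
    """Mean absolute difference between paired voxel values (e.g. HU)."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)

mask_ct = {(0, 0), (0, 1), (1, 0), (1, 1)}   # toy ground-truth air mask
mask_mr = {(0, 1), (1, 0), (1, 1), (2, 1)}   # toy MR-derived air mask
print(dice(mask_ct, mask_mr))                          # 0.75
print(mean_absolute_error([100, -50, 0], [88, -62, 0]))  # 8.0
```

On real volumes the same formulas are applied voxel-wise over full 3D masks and HU maps; the study's reported Dice of 0.78 ± 0.04 and MAE of 147.5 ± 8.3 HU are these quantities averaged over patients.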
Flanagan, Emma C; Wong, Stephanie; Dutt, Aparna; Tu, Sicong; Bertoux, Maxime; Irish, Muireann; Piguet, Olivier; Rao, Sulakshana; Hodges, John R; Ghosh, Amitabha; Hornberger, Michael
2016-01-01
Episodic memory recall processes in Alzheimer's disease (AD) and behavioral variant frontotemporal dementia (bvFTD) can be similarly impaired, whereas recognition performance is more variable. A potential reason for this variability could be false-positive errors made on recognition trials and whether these errors are due to amnesia per se or a general over-endorsement of recognition items regardless of memory. The current study addressed this issue by analysing recognition performance on the Rey Auditory Verbal Learning Test (RAVLT) in 39 bvFTD, 77 AD and 61 control participants from two centers (India, Australia), as well as disinhibition assessed using the Hayling test. Whereas both AD and bvFTD patients were comparably impaired on delayed recall, bvFTD patients showed intact recognition performance in terms of the number of correct hits. However, both patient groups endorsed significantly more false positives than controls, and bvFTD and AD patients scored equally poorly on a sensitivity index (correct hits minus false positives). Furthermore, measures of disinhibition were significantly associated with false positives in both groups, with a stronger relationship in bvFTD. Voxel-based morphometry analyses revealed similar neural correlates of false-positive endorsement across bvFTD and AD, with both patient groups showing involvement of prefrontal and Papez circuitry regions, such as medial temporal and thalamic regions, and a DTI analysis detected an emerging but non-significant trend between false positives and decreased fornix integrity in bvFTD only. These findings suggest that false-positive errors on recognition tests relate to similar mechanisms in bvFTD and AD, reflecting deficits in episodic memory processes and disinhibition. These findings highlight that current memory tests are not sufficient to accurately distinguish between bvFTD and AD patients.
False-positive buprenorphine EIA urine toxicology results due to high dose morphine: a case report.
Tenore, Peter L
2012-01-01
In monitoring a patient with chronic pain who was taking high-dose morphine and oxycodone with weekly urine enzymatic immunoassay (EIA) toxicology testing, the authors noted consistent positives for buprenorphine. The patient was not taking buprenorphine, and gas chromatography/mass spectrometry (GCMS) testing on multiple samples revealed no buprenorphine, indicating a case of false-positive buprenorphine EIAs in a high-dose opiate case. The authors discontinued oxycodone for a period of time and then discontinued morphine. Urine monitoring with EIAs and GCMS revealed false-positive buprenorphine EIAs, which remained only when the patient was taking morphine. When taking only oxycodone and no morphine, urine samples became buprenorphine negative. When morphine was reintroduced, false-positive buprenorphine results resumed. Medical practitioners should be aware that high-dose morphine (with morphine urine levels in the 15,000 to 28,000 ng/mL range) may produce false-positive buprenorphine EIAs with standard urine EIA toxicology testing.
Pharmacophore-Map-Pick: A Method to Generate Pharmacophore Models for All Human GPCRs.
Dai, Shao-Xing; Li, Gong-Hua; Gao, Yue-Dong; Huang, Jing-Fei
2016-02-01
GPCR-based drug discovery is hindered by a lack of effective screening methods for most GPCRs that have neither ligands nor high-quality structures. With the aim to identify lead molecules for these GPCRs, we developed a new method called Pharmacophore-Map-Pick to generate pharmacophore models for all human GPCRs. The model of ADRB2 generated using this method not only predicts the binding mode of ADRB2-ligands correctly but also performs well in virtual screening. Findings also demonstrate that this method is powerful for generating high-quality pharmacophore models. The average enrichment for the pharmacophore models of the 15 targets in different GPCR families reached 15-fold at a 0.5% false-positive rate. Therefore, the pharmacophore models can be applied in virtual screening directly with no requirement for any ligand information or shape constraints. A total of 2386 pharmacophore models for 819 different GPCRs (99% coverage (819/825)) were generated and are available at http://bsb.kiz.ac.cn/GPCRPMD. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Mitchell, Elizabeth O; Stewart, Greg; Bajzik, Olivier; Ferret, Mathieu; Bentsen, Christopher; Shriver, M Kathleen
2013-12-01
A multisite study was conducted to evaluate the performance of the Bio-Rad 4th generation GS HIV Combo Ag/Ab EIA versus the Abbott 4th generation ARCHITECT HIV Ag/Ab Combo. The performance of two 3rd generation EIAs, Ortho Diagnostics Anti-HIV 1+2 EIA and Siemens HIV 1/O/2, was also evaluated. The study objective was to compare analytical HIV-1 p24 antigen detection, sensitivity in HIV-1 seroconversion panels, and specificity in blood donors and two HIV false-reactive panels. Analytical sensitivity was evaluated with International HIV-1 p24 antigen standards, the AFSSAPS (pg/mL) and WHO 90/636 (IU/mL) standards; sensitivity in acute infection was compared on 55 seroconversion samples, and specificity was evaluated on 1000 negative blood donors and two false-reactive panels. GS HIV Combo Ag/Ab demonstrated better analytical HIV antigen sensitivity compared to ARCHITECT HIV Ag/Ab Combo: 0.41 IU/mL versus 1.2 IU/mL (WHO) and 12.7 pg/mL versus 20.1 pg/mL (AFSSAPS); GS HIV Combo Ag/Ab EIA also demonstrated slightly better specificity compared to ARCHITECT HIV Ag/Ab Combo (100% versus 99.7%). The 4th generation HIV Combo tests detected seroconversion 7-11 days earlier than the 3rd generation HIV antibody-only EIAs. Both 4th generation immunoassays demonstrated excellent sensitivity, with a reduction of the serological window period (7-11 days earlier detection than the 3rd generation HIV tests). However, GS HIV Combo Ag/Ab demonstrated improved HIV antigen analytical sensitivity and slightly better specificity when compared to the ARCHITECT HIV Ag/Ab Combo assay, with higher positive predictive values (PPV) for low-prevalence populations. Copyright © 2013 Elsevier B.V. All rights reserved.
Sievert, Lynnette L; Reza, Angela; Mills, Phoebe; Morrison, Lynn; Rahberg, Nichole; Goodloe, Amber; Sutherland, Michael; Brown, Daniel E
2010-01-01
The aims of this study were to test for a diurnal pattern in hot flashes in a multiethnic population living in a hot, humid environment and to examine the rates of concordance between objective and subjective measures of hot flashes using ambulatory and laboratory measures. Study participants aged 45 to 55 years were recruited from the general population of Hilo, HI. Women wore a Biolog hot flash monitor (UFI, Morro Bay, CA), kept a diary for 24 hours, and also participated in 3-hour laboratory measures (n = 199). Diurnal patterns were assessed using polynomial regression. For each woman, objectively recorded hot flashes that matched subjective experience were treated as true-positive readings. Subjective hot flashes were considered the standard for computing false-positive and false-negative readings. True-positive, false-positive, and false-negative readings were compared across ethnic groups by chi-square analyses. Frequencies of sternal, nuchal, and subjective hot flashes peaked at 15:00 ± 1 hour with no difference by ethnicity. Laboratory results supported the pattern seen in ambulatory monitoring. Sternal and nuchal monitoring showed the same frequency of true-positive measures, but nonsternal electrodes picked up more false-positive readings. Laboratory monitoring showed very low frequencies of false negatives. There were no ethnic differences in the frequency of true-positive or false-positive measures. Women of European descent were more likely to report hot flashes that were not objectively demonstrated (false-negative measures). The diurnal pattern and peak in hot flash occurrence in the hot humid environment of Hilo were similar to results from more temperate environments. Lack of variation in sternal versus nonsternal measures and in true-positive measures across ethnicities suggests no appreciable effect of population variation in sweating patterns.
Li, Bingshan; Leal, Suzanne M.
2008-01-01
Missing genotype data can increase false-positive evidence for linkage when either parametric or nonparametric analysis is carried out ignoring intermarker linkage disequilibrium (LD). Previously it was demonstrated by Huang et al. [1] that no bias occurs in this situation for affected sib-pairs with unrelated parents when either both parents are genotyped or genotype data is available for two additional unaffected siblings when parental genotypes are missing. However, this is not the case for autosomal recessive consanguineous pedigrees, where missing genotype data for any pedigree member within a consanguinity loop can increase false-positive evidence of linkage. False-positive evidence for linkage is further increased when cryptic consanguinity is present. The amount of false-positive evidence for linkage, and which family members aid in its reduction, is highly dependent on which family members are genotyped. When parental genotype data is available, the false-positive evidence for linkage is usually not as strong as when parental genotype data is unavailable. For a pedigree with an affected proband whose first-cousin parents have been genotyped, further reduction in the false-positive evidence of linkage can be obtained by including genotype data from additional affected siblings of the proband or genotype data from the proband's sibling-grandparents. For the situation, when parental genotypes are unavailable, false-positive evidence for linkage can be reduced by including genotype data from either unaffected siblings of the proband or the proband's married-in-grandparents in the analysis. PMID:18073490
Zheng, S; Lin, R J; Chan, Y H; Ngan, C C L
2018-03-01
There is no clear consensus on the diagnosis of neurosyphilis. The Venereal Disease Research Laboratory (VDRL) test from cerebrospinal fluid (CSF) has traditionally been considered the gold standard for diagnosing neurosyphilis but is widely known to be insensitive. In this study, we compared the clinical and laboratory characteristics of true-positive VDRL-CSF cases with biological false-positive VDRL-CSF cases. We retrospectively identified cases of true and false-positive VDRL-CSF across a 3-year period received by the Immunology and Serology Laboratory, Singapore General Hospital. A biological false-positive VDRL-CSF is defined as a reactive VDRL-CSF with a non-reactive Treponema pallidum particle agglutination (TPPA)-CSF and/or negative Line Immuno Assay (LIA)-CSF IgG. A true-positive VDRL-CSF is a reactive VDRL-CSF with a concordant reactive TPPA-CSF and/or positive LIA-CSF IgG. During the study period, a total of 1254 specimens underwent VDRL-CSF examination. Amongst these, 60 specimens from 53 patients tested positive for VDRL-CSF. Of the 53 patients, 42 (79.2%) were true-positive cases and 11 (20.8%) were false-positive cases. In our setting, a positive non-treponemal serology has 97.6% sensitivity, 100% specificity, 100% positive predictive value and 91.7% negative predictive value for a true-positive VDRL-CSF based on our laboratory definition. HIV seropositivity was an independent predictor of a true-positive VDRL-CSF. Biological false-positive VDRL-CSF is common in a setting where patients are tested without first establishing a serological diagnosis of syphilis. Serological testing should be performed prior to CSF evaluation for neurosyphilis. © 2017 European Academy of Dermatology and Venereology.
2011-01-01
Background The entomological inoculation rate (EIR) is an important indicator in estimating malaria transmission and the impact of vector control. To assess the EIR, the enzyme-linked immunosorbent assay (ELISA) to detect the circumsporozoite protein (CSP) is increasingly used. However, several studies have reported false positive results in this ELISA. The false positive results could lead to an overestimation of the EIR. The aim of present study was to estimate the level of false positivity among different anopheline species in Cambodia and Vietnam and to check for the presence of other parasites that might interact with the anti-CSP monoclonal antibodies. Methods Mosquitoes collected in Cambodia and Vietnam were identified and tested for the presence of sporozoites in head and thorax by using CSP-ELISA. ELISA positive samples were confirmed by a Plasmodium specific PCR. False positive mosquitoes were checked by PCR for the presence of parasites belonging to the Haemosporidia, Trypanosomatidae, Piroplasmida, and Haemogregarines. The heat-stability and the presence of the cross-reacting antigen in the abdomen of the mosquitoes were also checked. Results Specimens (N = 16,160) of seven anopheline species were tested by CSP-ELISA for Plasmodium falciparum and Plasmodium vivax (Pv210 and Pv247). Two new vector species were identified for the region: Anopheles pampanai (P. vivax) and Anopheles barbirostris (Plasmodium malariae). In 88% (155/176) of the mosquitoes found positive with the P. falciparum CSP-ELISA, the presence of Plasmodium sporozoites could not be confirmed by PCR. This percentage was much lower (28% or 5/18) for P. vivax CSP-ELISAs. False positive CSP-ELISA results were associated with zoophilic mosquito species. None of the targeted parasites could be detected in these CSP-ELISA false positive mosquitoes. The ELISA reacting antigen of P. falciparum was heat-stable in CSP-ELISA true positive specimens, but not in the false positives. 
The heat-unstable cross-reacting antigen is mainly present in head and thorax and almost absent in the abdomens (4 out of 147) of the false positive specimens. Conclusion The CSP-ELISA can considerably overestimate the EIR, particularly for P. falciparum and for zoophilic species. The heat-unstable cross-reacting antigen in false positives remains unknown. Therefore it is highly recommended to confirm all positive CSP-ELISA results, either by re-analysing the heated ELISA lysate (100°C, 10 min), or by performing Plasmodium specific PCR followed if possible by sequencing of the amplicons for Plasmodium species determination. PMID:21767376
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-13
... false positive match rate of 10 percent. Making the match mandatory for the States who did not perform... number of prisoners from 1995 to 2013 and assumed a 10 percent false positive match rate. Finally, we... matches are false positives. We estimate that mandatory matches at certification will identify an...
Cancer diagnostics using neural network sorting of processed images
NASA Astrophysics Data System (ADS)
Wyman, Charles L.; Schreeder, Marshall; Grundy, Walt; Kinser, Jason M.
1996-03-01
A combination of image processing with neural network sorting was conducted to demonstrate the feasibility of automated cervical smear screening. Nuclei were isolated to generate a series of data points relating to the density and size of individual nuclei. This was followed by segmentation to isolate entire cells for subsequent generation of data points to bound the size of the cytoplasm. Data points were taken on as many as ten cells per image frame and included correlation against a series of filters providing size and density readings on nuclei. Additional point data were taken on nuclei images to refine size information and on whole cells to bound the size of the cytoplasm; in total, twenty data points per assessed cell were generated. These data point sets, designated as neural tensors, comprise the inputs for training and use of a unique neural network to sort the images and identify those indicating evidence of disease. The neural network, named the Fast Analog Associative Memory, accumulates data and establishes lookup tables for comparison against images to be assessed. Six networks were trained to differentiate normal cells from those evidencing various levels of abnormality that may lead to cancer. A blind test was conducted on 77 images to evaluate system performance. The image set included 31 positives (diseased) and 46 negatives (normal). Our system correctly identified all 31 positives and 41 of the 46 negatives, with 5 false positives. We believe this technology can lead to more efficient automated screening of cervical smears.
Trinh, Tony W; Glazer, Daniel I; Sadow, Cheryl A; Sahni, V Anik; Geller, Nina L; Silverman, Stuart G
2018-03-01
To determine the test characteristics of CT urography for detecting bladder cancer in patients with hematuria and those undergoing surveillance, and to analyze reasons for false-positive and false-negative results. A HIPAA-compliant, IRB-approved retrospective review of reports from 1623 CT urograms between 10/2010 and 12/31/2013 was performed. 710 examinations for hematuria or bladder cancer history were compared to cystoscopy performed within 6 months. The reference standard was surgical pathology or a 1-year minimum clinical follow-up. False-positive and false-negative examinations were reviewed to determine reasons for errors. Ninety-five bladder cancers were detected. CT urography accuracy was 91.5% (650/710), sensitivity 86.3% (82/95), specificity 92.4% (568/615), positive predictive value 63.6% (82/129), and negative predictive value 97.8% (568/581). Of 43 false positives, the majority of interpretation errors were due to benign prostatic hyperplasia (n = 12), trabeculated bladder (n = 9), and treatment changes (n = 8). Other causes included blood clots, mistaken normal anatomy, and infectious/inflammatory changes; some findings had no cystoscopic correlate. Of 13 false negatives, 11 were due to technique, one to a large urinary residual, and one to artifact. There were no errors in perception. CT urography is an accurate test for diagnosing bladder cancer; however, in protocols relying predominantly on excretory phase images, overall sensitivity remains insufficient to obviate cystoscopy. Awareness of bladder cancer mimics may reduce false-positive results. Improvements in CTU technique may reduce false-negative results.
Methods for threshold determination in multiplexed assays
Tammero, Lance F. Bentley; Dzenitis, John M; Hindson, Benjamin J
2014-06-24
Methods for determining threshold values for the signatures included in an assay are described. Each signature enables detection of a target. The methods determine a probability density function of negative samples and a corresponding false positive rate curve. A false positive criterion is established, and the threshold for a signature is determined as the point at which its false positive rate curve intersects the false positive criterion. A method for quantitative analysis and interpretation of assay results, together with a method for determination of a desired limit of detection of a signature in an assay, are also described.
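The procedure described above, characterizing the negative-sample distribution, deriving the false positive rate curve, and placing the threshold where that curve meets the criterion, can be sketched roughly as follows. The empirical-quantile shortcut and the simulated negative-control signals are assumptions made for illustration, not details from the method itself.

```python
import numpy as np

def threshold_for_signature(negative_signals, fpr_criterion):
    """Place a signature's threshold where the empirical false positive
    rate curve of the negative samples crosses the chosen criterion.

    FPR(t) is the fraction of negative samples with signal >= t, so the
    (1 - criterion) quantile is the lowest threshold meeting it."""
    return np.quantile(np.asarray(negative_signals), 1.0 - fpr_criterion)

# Hypothetical negative-control signals for one signature in the assay.
rng = np.random.default_rng(0)
neg = rng.normal(loc=100.0, scale=10.0, size=10_000)

t = threshold_for_signature(neg, fpr_criterion=0.01)
achieved_fpr = np.mean(neg >= t)  # close to 0.01 by construction
```

A smoothed density estimate of the negatives (rather than the raw empirical quantile) would give the same threshold in the limit of many negative samples; the quantile form is simply the shortest route to the intersection point.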
Yoon, Jung Hyun; Jung, Hae Kyoung; Lee, Jong Tae; Ko, Kyung Hee
2013-09-01
To investigate the factors that affect false-positive or false-negative shear-wave elastography (SWE) results in solid breast masses. From June to December 2012, 222 breast lesions of 199 consecutive women (mean age: 45.3 ± 10.1 years; range, 21 to 88 years) who had been scheduled for biopsy or surgical excision were included. Greyscale ultrasound and SWE were performed in all women before biopsy. Final ultrasound assessments and SWE parameters (pattern classification and maximum elasticity) were recorded and compared with histopathology results. Patient and lesion factors in the 'true' and 'false' groups were compared. Of the 222 masses, 175 (78.8%) were benign and 47 (21.2%) were malignant. False-positive rates in benign masses were significantly higher than false-negative rates in malignancy for SWE patterns, 36.6% versus 6.4% (P < 0.001). Among both benign and malignant masses, the factors showing significance for false SWE features were lesion size, breast thickness and lesion depth (all P < 0.05). All 47 malignant breast masses had SWE images of good quality. False SWE features were seen significantly more often in benign masses. Lesion size, breast thickness and lesion depth are significant in producing false results, and this needs consideration in SWE image acquisition. • Shear-wave elastography (SWE) is widely used during breast imaging • At SWE, false-positive rates were significantly higher than false-negative rates • Larger size, breast thickness, depth and fair image quality influence false-positive SWE features • Smaller size, larger breast thickness and depth influence false-negative SWE features.
Rogier, Eric; Plucinski, Mateusz; Lucchi, Naomi; Mace, Kimberly; Chang, Michelle; Lemoine, Jean Frantz; Candrinho, Baltazar; Colborn, James; Dimbu, Rafael; Fortes, Filomeno; Udhayakumar, Venkatachalam; Barnwell, John
2017-01-01
Detection of histidine-rich protein 2 (HRP2) from the malaria parasite Plasmodium falciparum provides evidence for active or recent infection, and is utilized for both diagnostic and surveillance purposes, but current laboratory immunoassays for HRP2 are hindered by low sensitivities and high costs. Here we present a new HRP2 immunoassay based on antigen capture through a bead-based system capable of detecting HRP2 at sub-picogram levels. The assay is highly specific and cost-effective, allowing fast processing and screening of large numbers of samples. We utilized the assay to assess results of HRP2-based rapid diagnostic tests (RDTs) in different P. falciparum transmission settings, generating estimates for true performance in the field. Through this method of external validation, HRP2 RDTs were found to perform well in the high-endemic areas of Mozambique and Angola with 86.4% and 73.9% of persons with HRP2 in their blood testing positive by RDTs, respectively, and false-positive rates of 4.3% and 0.5%. However, in the low-endemic setting of Haiti, only 14.5% of persons found to be HRP2 positive by the bead assay were RDT positive. Additionally, 62.5% of Haitians showing a positive RDT test had no detectable HRP2 by the bead assay, likely indicating that these were false positive tests. In addition to RDT validation, HRP2 biomass was assessed for the populations in these different settings, and may provide an additional metric by which to estimate P. falciparum transmission intensity and measure the impact of interventions. PMID:28192523
Conway, Damian P; Holt, Martin; McNulty, Anna; Couldwell, Deborah L; Smith, Don E; Davies, Stephen C; Cunningham, Philip; Keen, Phillip; Guy, Rebecca
2014-01-01
Determine HIV Combo (DHC) is the first point-of-care assay designed to increase sensitivity in early infection by detecting both HIV antibody and antigen. We conducted a large multi-centre evaluation of DHC performance in Sydney sexual health clinics. We compared DHC performance (overall, by test component and in early infection) with conventional laboratory HIV serology (fourth generation screening immunoassay, supplementary HIV antibody, p24 antigen and Western blot tests) when testing gay and bisexual men attending four clinic sites. Early infection was defined as either acute or recent HIV infection acquired within the last six months. Of 3,190 evaluation specimens, 39 were confirmed as HIV-positive (12 with early infection) and 3,133 were HIV-negative by reference testing. DHC sensitivity was 87.2% overall and 94.4% and 0% for the antibody and antigen components, respectively. Sensitivity in early infection was 66.7% (all DHC antibody reactive), and the DHC antigen component detected none of nine HIV p24 antigen-positive specimens. Median HIV RNA was higher in false-negative than in true-positive cases (238,025 vs. 37,591 copies/ml; p = 0.022). Overall specificity was 99.4%, with the antigen component contributing 33% of false positives. The DHC antibody component detected two-thirds of those with early infection, while the DHC antigen component did not enhance performance during point-of-care HIV testing in a high-risk clinic-based population.
Garrison, Louis P; Babigumira, Joseph B; Masaquel, Anthony; Wang, Bruce C M; Lalla, Deepa; Brammer, Melissa
2015-06-01
Patients with breast cancer whose tumors test positive for human epidermal growth factor receptor 2 (HER2) are treated with HER2-targeted therapies such as trastuzumab, but limitations with HER2 testing may lead to false-positive (FP) or false-negative (FN) results. To develop a US-level model to estimate the effect of tumor misclassification on health care costs and patient quality-adjusted life-years (QALYs). Decision analysis was used to estimate the number of patients with early-stage breast cancer (EBC) whose HER2 status was misclassified in 2012. FP results were assumed to generate unnecessary trastuzumab costs and unnecessary cases of trastuzumab-related cardiotoxicity. FN results were assumed to save money on trastuzumab, but with a loss of QALYs and greater risk of disease recurrence and its associated costs. QALYs were valued at $100,000 under a net monetary benefit approach. Among 226,870 women diagnosed with EBC in 2012, 3.12% (n = 7,070) and 2.18% (n = 4,955) were estimated to have had FP and FN test results, respectively. Approximately 8400 QALYs (discounted, lifetime) were lost among women not receiving trastuzumab because of FN results. The estimated incremental per-patient lifetime burden of FP or FN results was $58,900 and $116,000, respectively. The implied incremental losses to society were $417 million and $575 million, respectively. HER2 tests result in misclassification and nonoptimal treatment of approximately 12,025 US patients with EBC annually. The total economic societal loss of nearly $1 billion suggests that improvements in HER2 testing accuracy are needed and that further clinical and economic studies are warranted. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
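The societal-loss figures above can be roughly reproduced by multiplying the estimated counts of misclassified patients by the per-patient lifetime burdens. The sketch below uses the abstract's rounded point estimates, so small differences from the published $417 million and $575 million are expected.

```python
# Back-of-envelope check of the societal-loss estimates reported above:
# misclassified-patient count times per-patient lifetime burden.
# All inputs are the abstract's point estimates (2012 US cohort).

fp_patients, fp_burden = 7_070, 58_900    # false-positive HER2 results, $ per patient
fn_patients, fn_burden = 4_955, 116_000   # false-negative HER2 results, $ per patient

fp_societal = fp_patients * fp_burden     # roughly $0.42 billion
fn_societal = fn_patients * fn_burden     # roughly $0.57 billion
total_loss = fp_societal + fn_societal    # just under $1 billion, as stated
```

The false-negative side dominates despite fewer patients because each missed candidate for trastuzumab forgoes QALYs valued at $100,000 apiece, on top of recurrence-related costs.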
Performance Evaluation of New-Generation Pulse Oximeters in the NICU: Observational Study.
Nizami, Shermeen; Greenwood, Kim; Barrowman, Nick; Harrold, JoAnn
2015-09-01
This crossover observational study compares the data characteristics and performance of new-generation Nellcor OXIMAX and Masimo SET SmartPod pulse oximeter technologies. The study was conducted independent of either original equipment manufacturer (OEM) across eleven preterm infants in a Neonatal Intensive Care Unit (NICU). The SmartPods were integrated with Dräger Infinity Delta monitors. The Delta monitor measured the heart rate (HR) using an independent electrocardiogram sensor, and the two SmartPods collected arterial oxygen saturation (SpO2) and pulse rate (PR). All patient data were non-Gaussian. Nellcor PR showed a higher correlation with the HR as compared to Masimo PR. The statistically significant difference found in their median values (1% for SpO2, 1 bpm for PR) was deemed clinically insignificant. SpO2 alarms generated by both SmartPods were observed and categorized for performance evaluation. Results for sensitivity, positive predictive value, accuracy and false alarm rates were Nellcor (80.3, 50, 44.5, 50%) and Masimo (72.2, 48.2, 40.6, 51.8%) respectively. These metrics were not statistically significantly different between the two pulse oximeters. Despite claims by OEMs, both pulse oximeters exhibited high false alarm rates, with no statistically or clinically significant difference in performance. These findings have a direct impact on alarm fatigue in the NICU. Performance evaluation studies can also impact medical device purchase decisions made by hospital administrators.
Late lessons from early warnings: towards precaution and realism in research and policy.
Gee, D; Krayer von Krauss, M P
2005-01-01
This paper focuses on the evidentiary aspects of the precautionary principle. Three points are highlighted: (i) the difference between association and causation; (ii) how the strength of scientific evidence can be considered; and (iii) the reasons why regulatory regimes tend to err in the direction of false negatives rather than false positives. The point is made that because obtaining evidence of causation can take many decades of research, the precautionary principle can be invoked to justify action when evidence of causation is not available, but there is good scientific evidence of an association between exposures and impacts. It is argued that the appropriate level of proof is context dependent, as "appropriateness" is based on value judgements about the acceptability of the costs, about the distribution of the costs, and about the consequences of being wrong. A complementary approach to evaluating the strength of scientific evidence is to focus on the level of uncertainty. If decision makers are made aware of the limitations of the knowledge base, they can compensate by adopting measures aimed at providing early warnings of un-anticipated effects and mitigating their impacts. The point is made that it is often disregarded that the Bradford Hill criteria for evaluating evidence are asymmetrical, in that the applicability of a criterion increases the strength of evidence on the presence of an effect, but the inapplicability of a criterion does not increase the strength of evidence on the absence of an effect. The paper discusses the reason why there are so many examples of regulatory "false negatives" as opposed to "false positives". Two main reasons are put forward: (i) the methodological bias within the health and environmental sciences; and (ii) the dominance within decision-making of short term economic and political interests. Sixteen features of methods and culture in the environmental and health sciences are presented. 
Of these, only three features tend to generate "false positives". It is concluded that although the different features of scientific methods and culture produce robust science, they can lead to poor regulatory decisions on hazard prevention.
Ji-Wook Jeong; Seung-Hoon Chae; Eun Young Chae; Hak Hee Kim; Young Wook Choi; Sooyeul Lee
2016-08-01
A computer-aided detection (CADe) algorithm for clustered microcalcifications (MCs) in reconstructed digital breast tomosynthesis (DBT) images is suggested. The MC-like objects were enhanced by a Hessian-based 3D calcification response function, and a signal-to-noise ratio (SNR) enhanced image was also generated to screen the MC clustering seed objects. A connected component segmentation method was used to detect the cluster seed objects, which were considered as potential clustering centers of MCs. Bounding cubes for the accepted clustering seed candidate were generated and the overlapping cubes were combined and examined. After the MC clustering and false-positive (FP) reduction step, the average number of FPs was estimated to be 0.87 per DBT volume with a sensitivity of 90.5%.
Weirather, Jason L; Afshar, Pegah Tootoonchi; Clark, Tyson A; Tseng, Elizabeth; Powers, Linda S; Underwood, Jason G; Zabner, Joseph; Korlach, Jonas; Wong, Wing Hung; Au, Kin Fai
2015-10-15
We developed an innovative hybrid sequencing approach, IDP-fusion, to detect fusion genes, determine fusion sites and identify and quantify fusion isoforms. IDP-fusion is the first method to study gene fusion events by integrating Third Generation Sequencing long reads and Second Generation Sequencing short reads. We applied IDP-fusion to PacBio data and Illumina data from the MCF-7 breast cancer cells. Compared with the existing tools, IDP-fusion detects fusion genes at higher precision and a very low false positive rate. The results show that IDP-fusion will be useful for unraveling the complexity of multiple fusion splices and fusion isoforms within tumorigenesis-relevant fusion genes. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Mordang, Jan-Jurre; Gubern-Mérida, Albert; den Heeten, Gerard; Karssemeijer, Nico
2016-04-01
In the past decades, computer-aided detection (CADe) systems have been developed to aid screening radiologists in the detection of malignant microcalcifications. These systems are useful to avoid perceptual oversights and can increase the radiologists' detection rate. However, due to the high number of false positives marked by these CADe systems, they are not yet suitable as an independent reader. Breast arterial calcifications (BACs) are one of the most frequent false positives marked by CADe systems. In this study, a method is proposed for the elimination of BACs as positive findings. Removal of these false positives will increase the performance of the CADe system in finding malignant microcalcifications. A multistage method is proposed for the removal of BAC findings. The first stage consists of a microcalcification candidate selection, segmentation and grouping of the microcalcifications, and classification to remove obvious false positives. In the second stage, a case-based selection is applied where cases are selected which contain BACs. In the final stage, BACs are removed from the selected cases. The BACs removal stage consists of a GentleBoost classifier trained on microcalcification features describing their shape, topology, and texture. Additionally, novel features are introduced to discriminate BACs from other positive findings. The CADe system was evaluated with and without BACs removal. Here, both systems were applied to a validation set containing 1088 cases of which 95 cases contained malignant microcalcifications. After bootstrapping, free-response receiver operating characteristics and receiver operating characteristics analyses were carried out. Performance between the two systems was compared at 0.98 and 0.95 specificity. At a specificity of 0.98, sensitivity increased from 37% to 52%; at a specificity of 0.95, it increased from 62% to 76%.
Partial areas under the curve in the specificity range of 0.8-1.0 were significantly different between the system without BACs removal and the system with BACs removal, 0.129 ± 0.009 versus 0.144 ± 0.008 (p<0.05), respectively. Sensitivity at one false positive per 50 cases and at one false positive per 25 cases increased as well: 37% versus 51% (p<0.05) and 58% versus 67% (p<0.05), respectively. Moreover, the CADe system with BACs removal reduced the number of false positives per case by 29% on average; the sensitivity achieved at one false positive per 50 cases without BACs removal was matched at one false positive per 80 cases with it. By using dedicated algorithms to detect and remove breast arterial calcifications, the performance of CADe systems can be improved, particularly at the false positive rates representative of operating points used in screening.
Effects of depressive disorder on false memory for emotional information.
Yeh, Zai-Ting; Hua, Mau-Sun
2009-01-01
Using a false memory paradigm, this study explored (1) whether depressed patients show more false memories and (2) whether negative false recognition exceeds positive false recognition in subjects with depressive disorders. Thirty-two patients suffering from a major depressive episode (DSM-IV criteria) and 30 age- and education-matched normal control subjects participated in this study. After the presentation of a list of positive, negative, and neutral association items in the learning phase, subjects were asked to give a yes/no response in the recognition phase. They were also asked to rate 81 recognition items with emotional valence scores. The results revealed more negative false memories in the clinical depression group than in the normal control group; however, we did not find more negative false memories than positive ones in patients. Compared with the normal group, a more conservative response criterion for positive items was evident in the patient group. Subjects in the depression group also perceived the positive items as less positive than did normal controls. On the basis of the present results, it is suggested that depressed subjects judged emotional information with criteria different from those of normal individuals, and that patients' emotional memory intensity is attenuated by their mood.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fowler, J.E. Jr.; Platoff, G.E.; Kubrock, C.A.
1982-01-01
Among 17 men who had received seemingly curative treatment for unilateral non-seminomatous germ cell tumors of the testis and who had consistently normal serum human chorionic gonadotropin (HCG) levels at a reference laboratory, 7 (41%) had at least one falsely positive commercial serum HCG determination. To investigate the cause of these falsely positive determinations, the authors measured the cross reactivity of luteinizing hormone (LH) and follicle stimulating hormone (FSH) standards in the commercial HCG assay, and studied the relationships between commercial HCG levels and serum LH levels, serum FSH levels and gonadal status in men with and without normal gonadal function. The falsely positive HCG determinations appeared to be due to elevated serum LH levels and cross reactivity of LH in the commercial HCG assay because: 1) there was substantial cross reactivity of the LH standards in the commercial assay, 2) the serum LH was elevated in four of six men with solitary testes, 3) there was a striking correlation between elevated serum LH levels and falsely elevated commercial HCG levels in ten men with solitary or absent testes, and 4) there were no falsely positive HCG determinations in 13 normal men but there were falsely positive HCG determinations in seven of ten anorchid men.
Johnson, Susan L; Tabaei, Bahman P; Herman, William H
2005-02-01
To simulate the outcomes of alternative strategies for screening the U.S. population 45-74 years of age for type 2 diabetes. We simulated screening with random plasma glucose (RPG) and cut points of 100, 130, and 160 mg/dl and a multivariate equation including RPG and other variables. Over 15 years, we simulated screening at intervals of 1, 3, and 5 years. All positive screening tests were followed by a diagnostic fasting plasma glucose or an oral glucose tolerance test. Outcomes include the numbers of false-negative, true-positive, and false-positive screening tests and the direct and indirect costs. At year 15, screening every 3 years with an RPG cut point of 100 mg/dl left 0.2 million false negatives, an RPG of 130 mg/dl or the equation left 1.3 million false negatives, and an RPG of 160 mg/dl left 2.8 million false negatives. Over 15 years, the absolute difference between the most sensitive and most specific screening strategy was 4.5 million true positives and 476 million false-positives. Strategies using RPG cut points of 130 mg/dl or the multivariate equation every 3 years identified 17.3 million true positives; however, the equation identified fewer false-positives. The total cost of the most sensitive screening strategy was $42.7 billion and that of the most specific strategy was $6.9 billion. Screening for type 2 diabetes every 3 years with an RPG cut point of 130 mg/dl or the multivariate equation provides good yield and minimizes false-positive screening tests and costs.
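The cut-point trade-off the abstract quantifies can be illustrated with a toy thresholding simulation. This is not the study's simulation model; the RPG distributions, population sizes, and parameters below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical random plasma glucose (RPG) distributions in mg/dl for people
# with and without diabetes; population sizes are also invented.
n_diabetic, n_healthy = 1_000, 9_000
rpg_diabetic = rng.normal(170, 45, n_diabetic)
rpg_healthy = rng.normal(105, 20, n_healthy)

def screen(cut_point):
    """Return (true positives, false positives, false negatives) at a cut point."""
    tp = int((rpg_diabetic >= cut_point).sum())
    fn = n_diabetic - tp
    fp = int((rpg_healthy >= cut_point).sum())
    return tp, fp, fn

for cut in (100, 130, 160):
    tp, fp, fn = screen(cut)
    print(f"cut {cut} mg/dl: TP={tp} FP={fp} FN={fn}")
```

A lower cut point trades fewer false negatives for many more false positives, which is the pattern the study's cost comparison rests on.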
Breast cancer detection risk in screening mammography after a false-positive result.
Castells, X; Román, M; Romero, A; Blanch, J; Zubizarreta, R; Ascunce, N; Salas, D; Burón, A; Sala, M
2013-02-01
False-positives are a major concern in breast cancer screening. However, false-positives have been little evaluated as a prognostic factor for cancer detection. Our aim was to evaluate the association of false-positive results with the cancer detection risk in subsequent screening participations over a 17-year period. This is a retrospective cohort study of 762,506 women aged 45-69 years, with at least two screening participations, who underwent 2,594,146 screening mammograms from 1990 to 2006. Multilevel discrete-time hazard models were used to estimate the adjusted odds ratios (OR) of breast cancer detection in subsequent screening participations in women with false-positive results. False-positives involving a fine-needle aspiration cytology or a biopsy had a higher cancer detection risk than those involving additional imaging procedures alone (OR = 2.69; 95%CI: 2.28-3.16 and OR = 1.81; 95%CI: 1.70-1.94, respectively). The risk of cancer detection increased substantially if women with cytology or biopsy had a familial history of breast cancer (OR = 4.64; 95%CI: 3.23-6.66). Other factors associated with an increased cancer detection risk were age 65-69 years (OR = 1.84; 95%CI: 1.67-2.03), non-attendance at the previous screening invitation (OR = 1.26; 95%CI: 1.11-1.43), and having undergone a previous benign biopsy outside the screening program (OR = 1.24; 95%CI: 1.13-1.35). Women with a false-positive test have an increased risk of cancer detection in subsequent screening participations, especially those with a false-positive result involving cytology or biopsy. Understanding the factors behind this association could provide valuable information to increase the effectiveness of breast cancer screening. Copyright © 2012 Elsevier Ltd. All rights reserved.
Psychological distress in U.S. women who have experienced false-positive mammograms.
Jatoi, Ismail; Zhu, Kangmin; Shah, Mona; Lawrence, William
2006-11-01
In the United States, approximately 10.7% of all screening mammograms lead to a false-positive result, but the overall impact of false-positives on psychological well-being is poorly understood. Data were analyzed from the 2000 U.S. National Health Interview Survey (NHIS), the most recent national survey that included a cancer control module. Study subjects were 9,755 women who ever had a mammogram, of which 1,450 had experienced a false-positive result. Psychological distress was assessed using the validated K6 questionnaire and logistic regression was used to discern any association with previous false-positive mammograms. In a multivariate analysis, women who had indicated a previous false-positive mammogram were more likely to report feeling sad (OR = 1.18, 95% CI, 1.03-1.35), restless (OR = 1.23, 95% CI, 1.08-1.40), worthless (OR = 1.27, 95% CI, 1.04-1.54), and finding that everything was an effort (OR = 1.27, 95% CI, 1.10-1.47). These women were also more likely to have seen a mental health professional in the 12 months preceding the survey (OR = 1.28, 95% CI, 1.03-1.58) and had a higher composite score on all items of the K6 scale (P < 0.0001), a reflection of increased psychological distress. Analyses by age and race revealed that, among women who had experienced false-positives, younger women were more likely to feel that everything was an effort, and blacks were more likely to feel restless. In a random sampling of the U.S. population, women who had previously experienced false-positive mammograms were more likely to report symptoms of anxiety and depression.
Horlbeck, Max A; Gilbert, Luke A; Villalta, Jacqueline E; Adamson, Britt; Pak, Ryan A; Chen, Yuwen; Fields, Alexander P; Park, Chong Yon; Corn, Jacob E; Kampmann, Martin; Weissman, Jonathan S
2016-01-01
We recently found that nucleosomes directly block access of CRISPR/Cas9 to DNA (Horlbeck et al., 2016). Here, we build on this observation with a comprehensive algorithm that incorporates chromatin, position, and sequence features to accurately predict highly effective single guide RNAs (sgRNAs) for targeting nuclease-dead Cas9-mediated transcriptional repression (CRISPRi) and activation (CRISPRa). We use this algorithm to design next-generation genome-scale CRISPRi and CRISPRa libraries targeting human and mouse genomes. A CRISPRi screen for essential genes in K562 cells demonstrates that the large majority of sgRNAs are highly active. We also find CRISPRi does not exhibit any detectable non-specific toxicity recently observed with CRISPR nuclease approaches. Precision-recall analysis shows that we detect over 90% of essential genes with minimal false positives using a compact 5 sgRNA/gene library. Our results establish CRISPRi and CRISPRa as premier tools for loss- or gain-of-function studies and provide a general strategy for identifying Cas9 target sites. DOI: http://dx.doi.org/10.7554/eLife.19760.001 PMID:27661255
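The precision-recall claim ("over 90% of essential genes with minimal false positives") reduces to two ratios over a hit list and a gold-standard gene set. A toy sketch of that computation; the gene names and counts here are invented, not the screen's data:

```python
# Toy precision/recall computation for a screen hit list against a
# gold-standard essential-gene set. All names and counts are invented.
gold_essential = {f"GENE{i}" for i in range(100)}            # 100 known essentials
screen_hits = {f"GENE{i}" for i in range(92)} | {"DECOY1", "DECOY2"}

tp = len(screen_hits & gold_essential)
recall = tp / len(gold_essential)       # fraction of essentials recovered
precision = tp / len(screen_hits)       # fraction of hits that are real

print(f"recall {recall:.0%}, precision {precision:.1%}")
```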
NASA Astrophysics Data System (ADS)
Morton, Timothy D.; Bryson, Stephen T.; Coughlin, Jeffrey L.; Rowe, Jason F.; Ravichandran, Ganesh; Petigura, Erik A.; Haas, Michael R.; Batalha, Natalie M.
2016-05-01
We present astrophysical false positive probability calculations for every Kepler Object of Interest (KOI)—the first large-scale demonstration of a fully automated transiting planet validation procedure. Out of 7056 KOIs, we determine that 1935 have probabilities <1% of being astrophysical false positives, and thus may be considered validated planets. Of these, 1284 have not yet been validated or confirmed by other methods. In addition, we identify 428 KOIs that are likely to be false positives, but have not yet been identified as such, though some of these may be a result of unidentified transit timing variations. A side product of these calculations is full stellar property posterior samplings for every host star, modeled as single, binary, and triple systems. These calculations use vespa, a publicly available Python package that is able to be easily applied to any transiting exoplanet candidate.
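The headline quantity, a false positive probability (FPP), is at bottom a normalized comparison of scenario priors and data likelihoods (a true planet versus the various eclipsing-binary configurations vespa models). A toy sketch of that normalization; every number below is invented and the scenario set is only schematic:

```python
# Hypothetical scenario priors and data likelihoods for one candidate signal.
# The scenario names follow the usual FPP framing; the numbers are invented.
scenarios = {
    "planet":           {"prior": 0.01,   "likelihood": 0.80},
    "eclipsing_binary": {"prior": 0.002,  "likelihood": 0.05},
    "background_eb":    {"prior": 0.001,  "likelihood": 0.10},
    "hierarchical_eb":  {"prior": 0.0005, "likelihood": 0.02},
}

# Posterior weight of each scenario is prior x likelihood, normalized over all.
weights = {name: s["prior"] * s["likelihood"] for name, s in scenarios.items()}
total = sum(weights.values())
fpp = 1.0 - weights["planet"] / total

print(f"FPP = {fpp:.4f}")  # candidates with FPP < 1% would count as validated
```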
Mandelker, Diana; Schmidt, Ryan J; Ankala, Arunkanth; McDonald Gibson, Kristin; Bowser, Mark; Sharma, Himanshu; Duffy, Elizabeth; Hegde, Madhuri; Santani, Avni; Lebo, Matthew; Funke, Birgit
2016-12-01
Next-generation sequencing (NGS) is now routinely used to interrogate large sets of genes in a diagnostic setting. Regions of high sequence homology continue to be a major challenge for short-read technologies and can lead to false-positive and false-negative diagnostic errors. At the scale of whole-exome sequencing (WES), laboratories may be limited in their knowledge of genes and regions that pose technical hurdles due to high homology. We have created an exome-wide resource that catalogs highly homologous regions that is tailored toward diagnostic applications. This resource was developed using a mappability-based approach tailored to current Sanger and NGS protocols. Gene-level and exon-level lists delineate regions that are difficult or impossible to analyze via standard NGS. These regions are ranked by degree of affectedness, annotated for medical relevance, and classified by the type of homology (within-gene, different functional gene, known pseudogene, uncharacterized noncoding region). Additionally, we provide a list of exons that cannot be analyzed by short-amplicon Sanger sequencing. This resource can help guide clinical test design, supplemental assay implementation, and results interpretation in the context of high homology. Genet Med 18(12), 1282-1289.
How to limit false positives in environmental DNA and metabarcoding?
Ficetola, Gentile Francesco; Taberlet, Pierre; Coissac, Eric
2016-05-01
Environmental DNA (eDNA) and metabarcoding are boosting our ability to acquire data on species distribution in a variety of ecosystems. Nevertheless, as most of sampling approaches, eDNA is not perfect. It can fail to detect species that are actually present, and even false positives are possible: a species may be apparently detected in areas where it is actually absent. Controlling false positives remains a main challenge for eDNA analyses: in this issue of Molecular Ecology Resources, Lahoz-Monfort et al. () test the performance of multiple statistical modelling approaches to estimate the rate of detection and false positives from eDNA data. Here, we discuss the importance of controlling for false detection from early steps of eDNA analyses (laboratory, bioinformatics), to improve the quality of results and allow an efficient use of the site occupancy-detection modelling (SODM) framework for limiting false presences in eDNA analysis. © 2016 John Wiley & Sons Ltd.
Shi, Zhenghao; Ma, Jiejue; Feng, Yaning; He, Lifeng; Suzuki, Kenji
2015-11-01
MTANN (Massive Training Artificial Neural Network) is a promising tool, which applied to eliminate false-positive for thoracic CT in recent years. In order to evaluate whether this method is feasible to eliminate false-positive of different CAD schemes, especially, when it is applied to commercial CAD software, this paper evaluate the performance of the method for eliminating false-positives produced by three different versions of commercial CAD software for lung nodules detection in chest radiographs. Experimental results demonstrate that the approach is useful in reducing FPs for different computer aided lung nodules detection software in chest radiographs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartholomew, Rachel A.; Ozanich, Richard M.; Arce, Jennifer S.
2017-02-01
The goal of this testing was to evaluate the ability of currently available commercial off-the-shelf (COTS) biological indicator tests and immunoassays to detect Bacillus anthracis (Ba) spores and ricin. In general, immunoassays provide more specific identification of biological threats as compared to indicator tests [3]. Many of these detection products are widely used by first responders and other end users. In most cases, performance data for these instruments are supplied directly from the manufacturer, but have not been verified by an external, independent assessment [1]. Our test plan modules included assessments of inclusivity (ability to generate true positive results), commonly encountered hoax powders (which can cause potential interferences or false positives), and estimation of limit of detection (LOD) (sensitivity) testing.
Pina, Géraldine; Dubois, Séverine; Murat, Arnaud; Berger, Nicole; Niccoli, Patricia; Peix, Jean-Louis; Cohen, Régis; Guillausseau, Claudine; Charrie, Anne; Chabre, Olivier; Cornu, Catherine; Borson-Chazot, Françoise; Rohmer, Vincent
2013-03-01
To evaluate a second-generation assay for basal serum calcitonin (CT) measurements compared with the pentagastrin-stimulation test for the diagnosis of inherited medullary thyroid carcinoma (MTC) and the follow-up of patients with MTC after surgery. Recent American Thyroid Association recommendations suggest the use of basal CT alone to diagnose and assess follow-up of MTC as the pentagastrin (Pg) test is unavailable in many countries. This was a multicentric prospective study. A total of 162 patients with basal CT <10 ng/l were included: 54 asymptomatic patients harboured noncysteine 'rearranged during transfection' (RET) proto-oncogene mutations and 108 patients had entered follow-up of MTC after surgery. All patients underwent basal and Pg-stimulated CT measurements using a second-generation assay with 5-ng/l functional sensitivity. Ninety-five per cent of patients with basal CT ≥ 5 ng/l and 25% of patients with basal CT <5 ng/l had a positive Pg-stimulation test (Pg CT >10 ng/l). Compared with the reference Pg test, basal CT ≥ 5 ng/l had 99% specificity and a 95% positive predictive value, but only 35% sensitivity (P < 0.0001). Overall, there were 31% fewer false-negative results using a 5-ng/l threshold for basal CT instead of the previously used 10-ng/l threshold. The ultrasensitive CT assay reduces the false-negative rate of basal CT measurements when diagnosing familial MTC and in postoperative follow-up compared with previously used assays. However, its sensitivity to detect C-cell disease remains lower than that of the Pg-stimulation test. © 2012 Blackwell Publishing Ltd.
Efficient and stable transformation of hop (Humulus lupulus L.) var. Eroica by particle bombardment.
Batista, Dora; Fonseca, Sandra; Serrazina, Susana; Figueiredo, Andreia; Pais, Maria Salomé
2008-07-01
To the best of our knowledge, this is the first accurate and reliable protocol for hop (Humulus lupulus L.) genetic transformation using particle bombardment. Based on the highly productive regeneration system previously developed by us for hop var. Eroica, two efficient transformation protocols were established using petioles and green organogenic nodular clusters (GONCs) bombarded with gusA reporter and hpt selectable genes. A total of 36 hygromycin B-resistant (hyg(r)) plants obtained upon continuous selection were successfully transferred to the greenhouse, and a first-generation group of transplanted plants was followed after a complete vegetative cycle. PCR analysis showed the presence of one or both transgenes in 25 plants, corresponding to an integration frequency of 69.4% and an overall transformation efficiency of 7.5%. Although all final transformants were GUS negative, the integration frequency of the gusA gene was higher than that of the hpt gene. Petiole-derived transgenic plants showed a higher co-integration rate of 76.9%. Real-time PCR analysis confirmed co-integration in 86% of the plants tested and its stability until the first generation, and identified positive plants amongst those previously assessed as hpt (+) only by conventional PCR. Our results suggest that the integration frequencies presented here, as well as those of others, may have been underestimated, and that PCR results should be treated with caution not only for false positives but also for false negatives. The protocols described here could be very useful for the future introduction of metabolic or resistance traits into hop cultivars, even if slight modifications are needed for other genotypes.
NASA Astrophysics Data System (ADS)
Ikedo, Yuji; Fukuoka, Daisuke; Hara, Takeshi; Fujita, Hiroshi; Takada, Etsuo; Endo, Tokiko; Morita, Takako
2007-03-01
The comparison of left and right mammograms is a common technique used by radiologists for the detection and diagnosis of masses. In mammography, computer-aided detection (CAD) schemes using bilateral subtraction technique have been reported. However, in breast ultrasonography, there are no reports on CAD schemes using comparison of left and right breasts. In this study, we propose a scheme of false positive reduction based on bilateral subtraction technique in whole breast ultrasound images. Mass candidate regions are detected by using the information of edge directions. Bilateral breast images are registered with reference to the nipple positions and skin lines. A false positive region is detected based on a comparison of the average gray values of a mass candidate region and a region with the same position and same size as the candidate region in the contralateral breast. In evaluating the effectiveness of the false positive reduction method, three normal and three abnormal bilateral pairs of whole breast images were employed. These abnormal breasts included six masses larger than 5 mm in diameter. The sensitivity was 83% (5/6) with 13.8 (165/12) false positives per breast before applying the proposed reduction method. By applying the method, false positives were reduced to 4.5 (54/12) per breast without removing a true positive region. This preliminary study indicates that the bilateral subtraction technique is effective for improving the performance of a CAD scheme in whole breast ultrasound images.
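The comparison step described above, a candidate region checked against the registered region at the same position in the contralateral breast, can be sketched with synthetic images. The images, coordinates, and intensity threshold below are all hypothetical, not the paper's parameters:

```python
import numpy as np

def is_true_candidate(left_img, right_img, center, size, threshold=10.0):
    """Keep a candidate only if its mean gray value is sufficiently higher
    than that of the same-position region in the (registered) opposite breast."""
    r, c = center
    half = size // 2
    region = left_img[r - half:r + half + 1, c - half:c + half + 1]
    mirror = right_img[r - half:r + half + 1, c - half:c + half + 1]
    return bool(region.mean() - mirror.mean() > threshold)

rng = np.random.default_rng(1)
left = rng.normal(100, 5, (64, 64))    # synthetic left-breast image
right = rng.normal(100, 5, (64, 64))   # synthetic contralateral image
left[30:35, 30:35] += 40               # synthetic bright mass in the left breast

print(is_true_candidate(left, right, center=(32, 32), size=5))
print(is_true_candidate(left, right, center=(10, 10), size=5))
```

A candidate with no bright counterpart in its own image but similar intensity on both sides is rejected, which is how the bilateral comparison removes false positives without touching true masses.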
Goodrich, David; Tao, Xin; Bohrer, Chelsea; Lonczak, Agnieszka; Xing, Tongji; Zimmerman, Rebekah; Zhan, Yiping; Scott, Richard T; Treff, Nathan R
2016-11-01
A subset of preimplantation stage embryos may possess mosaicism of chromosomal constitution, representing a possible limitation to the clinical predictive value of comprehensive chromosome screening (CCS) from a single biopsy. However, contemporary methods of CCS may be capable of predicting mosaicism in the blastocyst by detecting intermediate levels of aneuploidy within a trophectoderm biopsy. This study evaluates the sensitivity and specificity of aneuploidy detection by two CCS platforms using a cell line mixture model of a mosaic trophectoderm biopsy. Four cell lines with known karyotypes were obtained and mixed together at specific ratios of six total cells (0:6, 1:5, 2:4, 3:3, 4:2, 5:1, and 6:0). A female euploid and a male trisomy 18 cell line were used for one set, and a male trisomy 13 and a male trisomy 15 cell line were used for another. Replicates of each mixture were prepared, randomized, and blinded for analysis by one of two CCS platforms (quantitative polymerase chain reaction (qPCR) or VeriSeq next-generation sequencing (NGS)). Sensitivity and specificity of aneuploidy detection at each level of mosaicism was determined and compared between platforms. With the default settings for each platform, the sensitivity of qPCR and NGS were not statistically different, and 100 % specificity was observed (no false positives) at all levels of mosaicism. However, the use of previously published custom criteria for NGS increased sensitivity but also significantly decreased specificity (33 % false-positive prediction of aneuploidy). By demonstrating increased false-positive diagnoses when reducing the stringency of predicting an abnormality, these data illustrate the importance of preclinical evaluation of new testing paradigms before clinical implementation.
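The mixture ratios define the expected mosaicism level a platform must resolve: an aneuploid:euploid ratio of a:6 implies an aneuploid cell fraction of a/6 and, for a trisomy, an expected copy number of 2 + a/6 across the six-cell biopsy. A short sketch of that arithmetic:

```python
# Expected aneuploid cell fraction and trisomic-chromosome copy number for
# each six-cell mixture in the model (aneuploid:euploid cells).
mixtures = [(0, 6), (1, 5), (2, 4), (3, 3), (4, 2), (5, 1), (6, 0)]

for aneuploid, euploid in mixtures:
    fraction = aneuploid / 6
    # A trisomy adds one extra copy in affected cells, so the expected copy
    # number averaged over the biopsy is 2 + fraction.
    expected_copies = 2 + fraction
    print(f"{aneuploid}:{euploid} -> {fraction:.0%} mosaic, ~{expected_copies:.2f} copies")
```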
Steiner, J M; Rehfeld, J F; Pantchev, N
2010-01-01
An assay for the measurement of pancreatic elastase in dog feces has been introduced. The goal of this study was to evaluate the rate of false-positive fecal elastase test results in dogs with suspected exocrine pancreatic insufficiency (EPI) and to assess serum cholecystokinin (CCK) concentrations in dogs with a false-positive fecal elastase test result. Twenty-six fecal and serum samples from dogs with suspected EPI, submitted to a commercial laboratory (Vet Med Labor) for analysis, were evaluated in this prospective study. Serum trypsin-like immunoreactivity (TLI) was measured in 26 dogs with a decreased fecal elastase concentration of <10 microg/g feces. Serum CCK concentrations were measured in 21 of these dogs. Of 26 dogs with a decreased fecal elastase concentration, 6 (23%) had serum TLI concentrations within or above the reference range. Serum CCK concentrations were significantly higher in dogs with a true-positive fecal elastase test result (median: 1.1 pmol/L; range: 0.1-3.3 pmol/L) than in those with a false-positive fecal elastase test result (median: 0.1 pmol/L; range: 0.1-0.9 pmol/L; P value = .0163). The rate of false-positive fecal elastase test results was high in this group of dogs, suggesting that diagnosis of EPI must be confirmed by other means. The decreased CCK concentration in dogs with a false-positive fecal elastase test result may suggest that false-positive results are due to decreased stimulation of exocrine pancreatic function caused by other conditions.
Sherlock Holmes and child psychopathology assessment approaches: the case of the false-positive.
Jensen, P S; Watanabe, H
1999-02-01
To explore the relative value of various methods of assessing childhood psychopathology, the authors compared 4 groups of children: those who met criteria for one or more DSM diagnoses and scored high on parent symptom checklists, those who met psychopathology criteria on either one of these two assessment approaches alone, and those who met no psychopathology assessment criterion. Parents of 201 children completed the Child Behavior Checklist (CBCL), after which children and parents were administered the Diagnostic Interview Schedule for Children (version 2.1). Children and parents also completed other survey measures and symptom report inventories. The 4 groups of children were compared against "external validators" to examine the merits of "false-positive" and "false-negative" cases. True-positive cases (those that met DSM criteria and scored high on the CBCL) differed significantly from the true-negative cases on most external validators. "False-positive" and "false-negative" cases had intermediate levels of most risk factors and external validators. "False-positive" cases were not normal per se because they scored significantly above the true-negative group on a number of risk factors and external validators. A similar but less marked pattern was noted for "false-negatives." Findings call into question whether cases with high symptom checklist scores despite no formal diagnoses should be considered "false-positive." Pending the availability of robust markers for mental illness, researchers and clinicians must resist the tendency to reify diagnostic categories or to engage in arcane debates about the superiority of one assessment approach over another.
Jia, Qiang; Meng, Zhaowei; Tan, Jian; Zhang, Guizhi; He, Yajing; Sun, Haoran; Yu, Chunshui; Li, Dong; Zheng, Wei; Wang, Renfei; Wang, Shen; Li, Xue; Zhang, Jianping; Hu, Tianpeng; Liu, N A; Upadhyaya, Arun
2015-11-01
Iodine-131 (I-131) therapy and post-therapy I-131 scanning are essential in the management of differentiated thyroid cancer (DTC). However, pathological false-positive I-131 scans can lead to misdiagnosis and inappropriate I-131 treatment. This retrospective study aimed to investigate the best imaging modality for the diagnosis of pathological false-positive I-131 scans in a DTC patient cohort, and to determine their incidence. DTC patient data archived from January 2008 to January 2010 were retrieved. Post-therapeutic I-131 scans were conducted and interpreted. The imaging modalities of magnetic resonance imaging (MRI), computed tomography and ultrasonography were applied and compared to check all suspected lesions. Biopsy or needle aspiration was conducted for patients who consented to the acquisition of histopathological confirmation. Data for 156 DTC patients were retrieved. Only 6 cases of pathological false-positives were found among these (incidence, 3.85%), which included 3 cases of thymic hyperplasia in the mediastinum, 1 case of pleomorphic adenoma in the parapharyngeal space and 1 case of thyroglossal duct cyst in the neck. MRI was demonstrated as the best imaging modality for diagnosis due to its superior soft tissue resolution. However, no imaging modality was able to identify the abdominal false-positive lesions observed in 2 cases, one of whom also had thymic hyperplasia. In conclusion, pathological false-positive I-131 scans occurred with an incidence of 3.85%. MRI was the best imaging modality for diagnosing these pathological false-positives.
Tan, Alai; Freeman, Daniel H; Goodwin, James S; Freeman, Jean L
2006-12-01
The accuracy of mammography reading varies among radiologists. We conducted a population-based assessment of radiologist variation in false-positive rates of screening mammography and the associated radiologist characteristics. A total of 27,394 screening mammograms interpreted by 1067 radiologists were identified from a 5% non-cancer sample of Medicare claims during 1998-1999. The data were linked to the American Medical Association Masterfile to obtain radiologist characteristics. Multilevel logistic regression models were used to examine the radiologist variation in false-positive rates of screening mammography and the associated radiologist characteristics. Radiologists varied substantially in the false-positive rates of screening mammography (ranging from 1.5 to 24.1%, adjusting for patient characteristics). A longer time since graduation was associated with lower false-positive rates (odds ratio [OR] per 10-year increase: 0.87; 95% confidence interval [CI], 0.81-0.94), and female radiologists had higher false-positive rates than male radiologists (OR = 1.25; 95% CI, 1.05-1.49), adjusting for patient and other radiologist characteristics. Unmeasured factors contributed about 90% of the between-radiologist variance. Radiologists varied greatly in accuracy of mammography reading. Female and more recently trained radiologists had higher false-positive rates. The variation among radiologists was largely due to unmeasured factors, especially unmeasured radiologist factors. If our results are confirmed in further studies, they suggest that system-level interventions would be required to reduce variation in mammography interpretation.
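The reported odds ratio of 0.87 is per 10-year increment since graduation, so multi-decade differences compound multiplicatively on the odds scale. A quick illustration of that arithmetic (not a re-analysis of the study data):

```python
import math

# Interpreting an odds ratio of 0.87 per 10-year increase in time since
# graduation: increments compound multiplicatively on the odds scale.
or_per_10y = 0.87
log_or = math.log(or_per_10y)

or_30y = math.exp(3 * log_or)  # 30 years = three 10-year increments

print(f"OR over 30 years: {or_30y:.2f}")  # equivalent to 0.87 ** 3
```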
Is it time to sound an alarm about false-positive cell-free DNA testing for fetal aneuploidy?
Mennuti, Michael T; Cherry, Athena M; Morrissette, Jennifer J D; Dugoff, Lorraine
2013-11-01
Testing cell-free DNA (cfDNA) in maternal blood samples has been shown to have very high sensitivity for the detection of fetal aneuploidy with very low false-positive results in high-risk patients who undergo invasive prenatal diagnosis. Recent observation in clinical practice of several cases of positive cfDNA tests for trisomy 18 and trisomy 13, which were not confirmed by cytogenetic testing of the pregnancy, may reflect a limitation of the positive predictive value of this quantitative testing, particularly when it is used to detect rare aneuploidies. Analysis of a larger number of false-positive cases is needed to evaluate whether these observations reflect the positive predictive value that should be expected. Infrequently, mechanisms (such as low percentage mosaicism or confined placental mosaicism) might also lead to positive cfDNA testing that is not concordant with standard prenatal cytogenetic diagnosis. The need to explore these and other possible causes of false-positive cfDNA testing is exemplified by 2 of these cases. Additional evaluation of cfDNA testing in clinical practice and a mechanism for the systematic reporting of false-positive and false-negative cases will be important before this test is offered widely to the general population of low-risk obstetric patients. In the meantime, incorporating information about the positive predictive value in pretest counseling and in clinical laboratory reports is recommended. These experiences reinforce the importance of offering invasive testing to confirm cfDNA results before parental decision-making. Copyright © 2013 Mosby, Inc. All rights reserved.
Imberger, Georgina; Thorlund, Kristian; Gluud, Christian; Wetterslev, Jørn
2016-08-12
Many published meta-analyses are underpowered. We explored the role of trial sequential analysis (TSA) in assessing the reliability of conclusions in underpowered meta-analyses. We screened The Cochrane Database of Systematic Reviews and selected 100 meta-analyses with a binary outcome, a negative result and insufficient power. We defined a negative result as one where the 95% CI for the effect included 1.00, a positive result as one where the 95% CI did not include 1.00, and sufficient power as the required information size for 80% power, 5% type 1 error, relative risk reduction of 10% or number needed to treat of 100, and control event proportion and heterogeneity taken from the included studies. We re-conducted the meta-analyses, using conventional cumulative techniques, to measure how many false positives would have occurred if these meta-analyses had been updated after each new trial. For each false positive, we performed TSA, using three different approaches. We screened 4736 systematic reviews to find 100 meta-analyses that fulfilled our inclusion criteria. Using conventional cumulative meta-analysis, false positives were present in seven of the meta-analyses (7%, 95% CI 3% to 14%); in three of these, false positives occurred more than once. The total number of false positives was 14 and TSA prevented 13 of these (93%, 95% CI 68% to 98%). In a post hoc analysis, we found that Cochrane meta-analyses that are negative are 1.67 times more likely to be updated (95% CI 0.92 to 2.68) than those that are positive. We found false positives in 7% (95% CI 3% to 14%) of the included meta-analyses. Owing to limitations of external validity and to the decreased likelihood of updating positive meta-analyses, the true proportion of false positives in meta-analysis is probably higher. TSA prevented 93% of the false positives (95% CI 68% to 98%). Published by the BMJ Publishing Group Limited.
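The false-positive mechanism described in the abstract above — a cumulative meta-analysis whose 95% CI transiently excludes 1.00 after some update even though the fully accumulated evidence does not — can be sketched with a toy inverse-variance fixed-effect model. All trial counts below are illustrative assumptions, not the review's data:

```python
import math

# Hypothetical per-trial 2x2 counts: (events_treatment, n_treatment,
# events_control, n_control). Illustrative numbers, not the review's data.
trials = [
    (12, 100, 15, 100),
    (5, 150, 20, 150),
    (60, 600, 60, 600),
]

def cumulative_rr_cis(trials):
    """After each new trial, pool the log relative risk with a simple
    inverse-variance fixed-effect model and return the running 95% CIs."""
    weights, effects, cis = [], [], []
    for et, nt, ec, nc in trials:
        log_rr = math.log((et / nt) / (ec / nc))
        var = 1 / et - 1 / nt + 1 / ec - 1 / nc  # variance of the log RR
        weights.append(1 / var)
        effects.append(log_rr)
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        se = math.sqrt(1 / sum(weights))
        cis.append((math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)))
    return cis

cis = cumulative_rr_cis(trials)
# An intermediate update is a "false positive" if its CI excludes 1.00
# while the final, fully accumulated CI does not.
final_lo, final_hi = cis[-1]
false_positive_updates = [i for i, (lo, hi) in enumerate(cis[:-1])
                          if not (lo <= 1.0 <= hi) and final_lo <= 1.0 <= final_hi]
```

In this toy sequence the second update declares a significant effect that the final pooled result does not support — exactly the kind of transient positive that TSA's adjusted monitoring boundaries are designed to suppress.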
Comprehensive benchmarking and ensemble approaches for metagenomic classifiers.
McIntyre, Alexa B R; Ounit, Rachid; Afshinnekoo, Ebrahim; Prill, Robert J; Hénaff, Elizabeth; Alexander, Noah; Minot, Samuel S; Danko, David; Foox, Jonathan; Ahsanuddin, Sofia; Tighe, Scott; Hasan, Nur A; Subramanian, Poorani; Moffat, Kelly; Levy, Shawn; Lonardi, Stefano; Greenfield, Nick; Colwell, Rita R; Rosen, Gail L; Mason, Christopher E
2017-09-21
One of the main challenges in metagenomics is the identification of microorganisms in clinical and environmental samples. While an extensive and heterogeneous set of computational tools is available to classify microorganisms using whole-genome shotgun sequencing data, comprehensive comparisons of these methods are limited. In this study, we use the largest-to-date set of laboratory-generated and simulated controls across 846 species to evaluate the performance of 11 metagenomic classifiers. Tools were characterized on the basis of their ability to identify taxa at the genus, species, and strain levels, quantify relative abundances of taxa, and classify individual reads to the species level. Strikingly, the number of species identified by the 11 tools can differ by over three orders of magnitude on the same datasets. Various strategies can ameliorate taxonomic misclassification, including abundance filtering, ensemble approaches, and tool intersection. Nevertheless, these strategies were often insufficient to completely eliminate false positives from environmental samples, which are especially important where they concern medically relevant species. Overall, pairing tools with different classification strategies (k-mer, alignment, marker) can combine their respective advantages. This study provides positive and negative controls, titrated standards, and a guide for selecting tools for metagenomic analyses by comparing ranges of precision, accuracy, and recall. We show that proper experimental design and analysis parameters can reduce false positives, provide greater resolution of species in complex metagenomic samples, and improve the interpretation of results.
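The abundance-filtering and tool-intersection strategies the benchmark describes can be sketched as a simple voting scheme. The species names, abundances, and thresholds below are illustrative assumptions, not values from the study:

```python
from collections import Counter

# Hypothetical species calls from three classifiers with different
# strategies (k-mer, alignment, marker), as {species: relative abundance}.
# All names and numbers are illustrative.
kmer_calls      = {"E. coli": 0.40, "S. aureus": 0.30, "B. phantomus": 0.0004}
alignment_calls = {"E. coli": 0.42, "S. aureus": 0.28, "M. mirage": 0.0002}
marker_calls    = {"E. coli": 0.39, "S. aureus": 0.31}

def consensus(call_sets, min_abundance=0.001, min_tools=2):
    """Abundance filtering plus tool intersection: keep a species only if
    it passes the abundance threshold in at least `min_tools` classifiers."""
    votes = Counter()
    for calls in call_sets:
        for species, abundance in calls.items():
            if abundance >= min_abundance:
                votes[species] += 1
    return {species for species, n in votes.items() if n >= min_tools}

kept = consensus([kmer_calls, alignment_calls, marker_calls])
```

Here the two trace-abundance singleton calls are discarded while the species supported by multiple tools survive, illustrating why pairing classifiers with different strategies can suppress false positives without removing well-supported taxa.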
Reduction of lymph tissue false positives in pulmonary embolism detection
NASA Astrophysics Data System (ADS)
Ghanem, Bernard; Liang, Jianming; Bi, Jinbo; Salganicoff, Marcos; Krishnan, Arun
2008-03-01
Pulmonary embolism (PE) is a serious medical condition, characterized by the partial/complete blockage of an artery within the lungs. We have previously developed a fast yet effective approach for computer aided detection of PE in computed tomographic pulmonary angiography (CTPA), which is capable of detecting both acute and chronic PEs, achieving a benchmark performance of 78% sensitivity at 4 false positives (FPs) per volume. By reviewing the FPs generated by this system, we found the most dominant type of FP, roughly one third of all FPs, to be lymph/connective tissue. In this paper, we propose a novel approach that specifically aims at reducing this FP type. Our idea is to explicitly exploit the anatomical context configuration of PE and lymph tissue in the lungs: a lymph FP connects to the airway and is located outside the artery, while a true PE should not connect to the airway and must be inside the artery. To realize this idea, given a detected candidate (i.e. a cluster of suspicious voxels), we compute a set of contextual features, including its distance to the airway based on local distance transform and its relative position to the artery based on fast tensor voting and Hessian "vesselness" scores. Our tests on unseen cases show that these features can reduce the lymph FPs by 59%, while improving the overall sensitivity by 3.4%.
Liu, Bo; Cheng, H D; Huang, Jianhua; Tian, Jiawei; Liu, Jiafeng; Tang, Xianglong
2009-08-01
Because of its complicated structure, low signal/noise ratio, low contrast and blurry boundaries, fully automated segmentation of a breast ultrasound (BUS) image is a difficult task. In this paper, a novel segmentation method for BUS images without human intervention is proposed. Unlike most published approaches, the proposed method handles the segmentation problem by using a two-step strategy: ROI generation and ROI segmentation. First, a well-trained texture classifier categorizes the tissues into different classes, and the background knowledge rules are used for selecting the regions of interest (ROIs) from them. Second, a novel probability distance-based active contour model is applied for segmenting the ROIs and finding the accurate positions of the breast tumors. The active contour model combines both global statistical information and local edge information, using a level set approach. The proposed segmentation method was performed on 103 BUS images (48 benign and 55 malignant). To validate the performance, the results were compared with the corresponding tumor regions marked by an experienced radiologist. Three error metrics, true-positive ratio (TP), false-negative ratio (FN) and false-positive ratio (FP) were used for measuring the performance of the proposed method. The final results (TP = 91.31%, FN = 8.69% and FP = 7.26%) demonstrate that the proposed method can segment BUS images efficiently, quickly and automatically.
Working memory affects false memory production for emotional events.
Mirandola, Chiara; Toffalini, Enrico; Ciriello, Alfonso; Cornoldi, Cesare
2017-01-01
Whereas a link between working memory (WM) and memory distortions has been demonstrated, its influence on emotional false memories is unclear. In two experiments, a verbal WM task and a false memory paradigm for negative, positive or neutral events were employed. In Experiment 1, we investigated individual differences in verbal WM and found that the interaction between valence and WM predicted false recognition, with negative and positive material protecting high WM individuals against false remembering; the beneficial effect of negative material disappeared in low WM participants. In Experiment 2, we lowered the WM capacity of half of the participants with a double task request, which led to an overall increase in false memories; furthermore, consistent with Experiment 1, the increase in negative false memories was larger than that of neutral or positive ones. It is concluded that WM plays a critical role in determining false memory production, specifically influencing the processing of negative material.
Slapa, Rafal Z.; Piwowonski, Antoni; Jakubowski, Wieslaw S.; Bierca, Jacek; Szopinski, Kazimierz T.; Slowinska-Srzednicka, Jadwiga; Migda, Bartosz; Mlosek, R. Krzysztof
2012-01-01
Although elastography can enhance the differential diagnosis of thyroid nodules, its diagnostic performance is not ideal at present. Further improvements in the technique and creation of robust diagnostic criteria are necessary. The purpose of this study was to compare the usefulness of strain elastography and a new generation of elasticity imaging called supersonic shear wave elastography (SSWE) in differential evaluation of thyroid nodules. Six thyroid nodules in 4 patients were studied. SSWE yielded 1 true-positive and 5 true-negative results. Strain elastography yielded 5 false-positive results and 1 false-negative result. A novel finding appreciated with SSWE was punctate foci of increased stiffness corresponding to microcalcifications in 4 nodules, some not visible on B-mode ultrasound, as opposed to soft, colloid-inspissated areas visible on B-mode ultrasound in 2 nodules. This preliminary paper indicates that SSWE may outperform strain elastography in differentiation of thyroid nodules with regard to their stiffness. SSWE showed the possibility of differentiation of high echogenic foci into microcalcifications and inspissated colloid, adding a new dimension to thyroid elastography. Further multicenter large-scale studies of thyroid nodules evaluating different elastographic methods are warranted. PMID:22685685
Gasquoine, Philip Gerard; Gonzalez, Cassandra Dayanira
2012-05-01
Conventional neuropsychological norms developed for monolinguals likely overestimate normal performance in bilinguals on language but not visual-perceptual format tests. This was studied by comparing neuropsychological false-positive rates using the 50th percentile of conventional norms and individual comparison standards (Picture Vocabulary or Matrix Reasoning scores) as estimates of preexisting neuropsychological skill level against the number expected from the normal distribution for a consecutive sample of 56 neurologically intact, bilingual, Hispanic Americans. Participants were tested in separate sessions in Spanish and English in the counterbalanced order on La Bateria Neuropsicologica and the original English language tests on which this battery was based. For language format measures, repeated-measures multivariate analysis of variance showed that individual estimates of preexisting skill level in English generated the mean number of false positives most approximate to that expected from the normal distribution, whereas the 50th percentile of conventional English language norms did the same for visual-perceptual format measures. When using conventional Spanish or English monolingual norms for language format neuropsychological measures with bilingual Hispanic Americans, individual estimates of preexisting skill level are recommended over the 50th percentile.
Chen, Zhangguo; Gowan, Katherine; Leach, Sonia M; Viboolsittiseri, Sawanee S; Mishra, Ameet K; Kadoishi, Tanya; Diener, Katrina; Gao, Bifeng; Jones, Kenneth; Wang, Jing H
2016-10-21
Whole genome next generation sequencing (NGS) is increasingly employed to detect genomic rearrangements in cancer genomes, especially in lymphoid malignancies. We recently established a unique mouse model by specifically deleting a key non-homologous end-joining DNA repair gene, Xrcc4, and a cell cycle checkpoint gene, Trp53, in germinal center B cells. This mouse model spontaneously develops mature B cell lymphomas (termed G1XP lymphomas). Here, we attempt to employ whole genome NGS to identify novel structural rearrangements, in particular inter-chromosomal translocations (CTXs), in these G1XP lymphomas. We sequenced six lymphoma samples, aligned our NGS data with the mouse reference genome (in C57BL/6J (B6) background) and identified CTXs using the CREST algorithm. Surprisingly, we detected widespread CTXs in both lymphoma and wildtype control samples, the majority of which were false positives attributable to different genetic backgrounds. In addition, we validated our NGS pipeline by sequencing multiple control samples from distinct tissues of mice of different genetic backgrounds (B6 vs non-B6). Lastly, our studies showed that widespread false positive CTXs can be generated by simply aligning sequences from different genetic backgrounds. We conclude that mapping and alignment with a reference genome might not be a preferred method for analyzing whole-genome NGS data obtained from a genetic background different from the reference genome. Given the complex genetic background of different mouse strains or the heterogeneity of cancer genomes in human patients, in order to minimize such systematic artifacts and uncover novel CTXs, a preferred method might be de novo assembly of a personalized normal control genome and cancer cell genome, instead of mapping and aligning NGS data to the mouse or human reference genome. Thus, our studies have critical impact on the manner of data analysis for cancer genomics.
Sandes, V S; Silva, S G C; Motta, I J F; Velarde, L G C; de Castilho, S R
2017-06-01
We propose to analyse the positive and false-positive results of treponemal and nontreponemal tests in blood donors from Brazil and to evaluate possible factors associated with the results of treponemal tests. Treponemal tests have been used widely for syphilis screening in blood banks. The introduction of these tests in donor screening has caused an impact and a loss of donors who need to be assessed. This was a retrospective cross-sectional study of syphilis screening and confirmatory test results of blood donors that were obtained before and after adopting a chemiluminescent immunoassay (CLIA). A comparative analysis was performed using a second sample drawn from positive donors. The possible factors associated with CLIA-positive or CLIA-false-positive results were investigated in a subgroup. Statistical tests were used to compare the proportions and adjusted estimates of association. The reactivity rate increased from 1.01% (N = 28,158) to 2.66% (N = 25,577) after introducing the new test. Among Venereal Disease Research Laboratory (VDRL)- and CLIA-confirmed results, the false-positive rates were 40.5% (N = 180) and 37.4% (N = 359), respectively (P = 0.5266). Older donors (OR = 1.04; P = 0.0010) and donors with lower education levels (OR = 6.59; P = 0.0029) were associated with a higher risk of positivity for syphilis. CLIA represents an improvement in blood bank serological screening. However, its use in a healthy population appears to result in high rates of false positives. Identifying which characteristics can predict false positives, however, remains a challenge. © 2017 British Blood Transfusion Society.
False-positive cryptococcal antigen latex agglutination caused by disinfectants and soaps.
Blevins, L B; Fenn, J; Segal, H; Newcomb-Gayman, P; Carroll, K C
1995-01-01
Five disinfectants or soaps were tested to determine if any could be responsible for false-positive results obtained with the Latex-Crypto Antigen Detection System kit (Immuno-Mycologics, Inc., Norman, Okla.). Three disinfectants or soaps (Derma soap, 7X, and Bacdown) produced false-positive agglutination after repeated washing of ring slides during testing of a known negative cerebrospinal fluid specimen. PMID:7650214
Lourenço, Felipe Rebello; Botelho, Túlia De Souza; Pinto, Terezinha De Jesus Andreoli
2012-01-01
The limulus amebocyte lysate (LAL) test is the simplest and most widely used procedure for detection of endotoxin in parenteral drugs. The LAL test demands optimal pH, ionic strength, temperature, and time of incubation. Slight changes in these parameters may increase the frequency of false-positive responses and the estimated uncertainty of the LAL test. The aim of this paper is to evaluate how changes in the pH, temperature, and time of incubation affect the occurrence of false-positive responses in the LAL test. LAL tests were performed in nominal conditions (37 °C, 60 min, and pH 7) and in different conditions of temperature (36 °C and 38 °C), time of incubation (58 and 62 min), and pH (6 and 8). Slight differences in pH increase the frequency of false-positive responses 5-fold (relative risk 5.0), resulting in an estimated uncertainty of 7.6%. Temperature and time of incubation affect the LAL test less, showing relative risks of 1.5 and 1.0, respectively. Estimated uncertainties at temperatures of 36 °C or 38 °C and incubation times of 58 or 62 min were found to be 2.0% and 1.0%, respectively. Simultaneous differences in these parameters significantly increase the frequency of false-positive responses. The limulus amebocyte lysate (LAL) gel-clot test is a simple test for detection of endotoxin from Gram-negative bacteria. The test is based on a gel formation when a certain amount of endotoxin is present; it is a pass/fail test. The LAL test requires optimal pH, ionic strength, temperature, and time of incubation. Slight differences in these parameters may increase the frequency of false-positive responses. The aim of this paper is to evaluate how changes in the pH, temperature, and time of incubation affect the occurrence of false-positive responses in the LAL test. We find that slight differences in pH increase the frequency of false-positive responses 5-fold. Temperature and time of incubation affect the LAL test less.
Simultaneous differences in these parameters significantly increase the frequency of false-positive responses.
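The relative-risk summary used in the abstract above is simply a ratio of false-positive rates between the off-nominal and nominal conditions. A minimal sketch with made-up 2x2 counts, chosen only so the pH condition reproduces the reported 5-fold figure:

```python
# Illustrative 2x2 counts (not the paper's raw data), chosen so the
# off-nominal pH condition reproduces the reported 5-fold relative risk.
fp_off_nominal, n_off_nominal = 10, 100  # tests run at pH 6 or pH 8
fp_nominal, n_nominal = 2, 100           # tests run at the nominal pH 7

def relative_risk(fp_exposed, n_exposed, fp_unexposed, n_unexposed):
    """Ratio of false-positive rates: off-nominal vs nominal condition."""
    return (fp_exposed / n_exposed) / (fp_unexposed / n_unexposed)

rr_ph = relative_risk(fp_off_nominal, n_off_nominal, fp_nominal, n_nominal)
```

A relative risk of 1.0, as reported for incubation time, means the off-nominal condition produced false positives at the same rate as the nominal one.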
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mordang, Jan-Jurre, E-mail: Jan-Jurre.Mordang@radboudumc.nl; Gubern-Mérida, Albert; Karssemeijer, Nico
Purpose: In the past decades, computer-aided detection (CADe) systems have been developed to aid screening radiologists in the detection of malignant microcalcifications. These systems are useful to avoid perceptual oversights and can increase the radiologists’ detection rate. However, due to the high number of false positives marked by these CADe systems, they are not yet suitable as an independent reader. Breast arterial calcifications (BACs) are one of the most frequent false positives marked by CADe systems. In this study, a method is proposed for the elimination of BACs as positive findings. Removal of these false positives will increase the performance of the CADe system in finding malignant microcalcifications. Methods: A multistage method is proposed for the removal of BAC findings. The first stage consists of a microcalcification candidate selection, segmentation and grouping of the microcalcifications, and classification to remove obvious false positives. In the second stage, a case-based selection is applied where cases are selected which contain BACs. In the final stage, BACs are removed from the selected cases. The BACs removal stage consists of a GentleBoost classifier trained on microcalcification features describing their shape, topology, and texture. Additionally, novel features are introduced to discriminate BACs from other positive findings. Results: The CADe system was evaluated with and without BACs removal. Both systems were applied to a validation set containing 1088 cases, of which 95 cases contained malignant microcalcifications. After bootstrapping, free-response receiver operating characteristic and receiver operating characteristic analyses were carried out. Performance of the two systems was compared at 0.98 and 0.95 specificity. At a specificity of 0.98, sensitivity increased from 37% to 52%; at a specificity of 0.95, it increased from 62% to 76%.
Partial areas under the curve in the specificity range of 0.8–1.0 were significantly different between the system without BACs removal and the system with BACs removal, 0.129 ± 0.009 versus 0.144 ± 0.008 (p < 0.05), respectively. The sensitivity at one false positive per 50 cases and at one false positive per 25 cases increased as well: 37% versus 51% (p < 0.05) and 58% versus 67% (p < 0.05), respectively. Additionally, the CADe system with BACs removal reduces the number of false positives per case by 29% on average. The sensitivity achieved at one false positive per 50 cases in the CADe system without BACs removal can be achieved at one false positive per 80 cases in the CADe system with BACs removal. Conclusions: By using dedicated algorithms to detect and remove breast arterial calcifications, the performance of CADe systems can be improved, in particular at false positive rates representative of operating points used in screening.
Bone marrow cells stained by azide-conjugated Alexa fluors in the absence of an alkyne label.
Lin, Guiting; Ning, Hongxiu; Banie, Lia; Qiu, Xuefeng; Zhang, Haiyang; Lue, Tom F; Lin, Ching-Shwun
2012-09-01
Thymidine analog 5-ethynyl-2'-deoxyuridine (EdU) has recently been introduced as an alternative to 5-bromo-2-deoxyuridine (BrdU) for cell labeling and tracking. Incorporation of EdU into replicating DNA can be detected by azide-conjugated fluors (e.g., Alexa-azide) through a Cu(I)-catalyzed click reaction between EdU's alkyne moiety and azide. While this cell labeling method has proven to be valuable for tracking transplanted stem cells in various tissues, we have found that some bone marrow cells could be stained by Alexa-azide in the absence of EdU label. In intact rat femoral bone marrow, ~3% of nucleated cells were false-positively stained, and in isolated bone marrow cells, ~13%. In contrast to true-positive stains, which localize in the nucleus, the false-positive stains were cytoplasmic. Furthermore, while true-positive staining requires Cu(I), false-positive staining does not. Reducing the click reaction time or reducing the Alexa-azide concentration failed to improve the distinction between true- and false-positive staining. Hematopoietic and mesenchymal stem cell markers CD34 and Stro-1 did not co-localize with the false-positively stained cells, and these cells' identity remains unknown.
Virtual Tool Mark Generation for Efficient Striation Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ekstrand, Laura; Zhang, Song; Grieve, Taylor
2014-02-16
This study introduces a tool mark analysis approach based upon 3D scans of screwdriver tip and marked plate surfaces at the micrometer scale from an optical microscope. An open-source 3D graphics software package is utilized to simulate the marking process as the projection of the tip's geometry in the direction of tool travel. The edge of this projection becomes a virtual tool mark that is compared to cross-sections of the marked plate geometry using the statistical likelihood algorithm introduced by Chumbley et al. In a study with both sides of six screwdriver tips and 34 corresponding marks, the method distinguished known matches from known nonmatches with zero false-positive matches and two false-negative matches. For matches, it could predict the correct marking angle within ±5–10°. Individual comparisons could be made in seconds on a desktop computer, suggesting that the method could save time for examiners.
Container weld identification using portable laser scanners
NASA Astrophysics Data System (ADS)
Taddei, Pierluigi; Boström, Gunnar; Puig, David; Kravtchenko, Victor; Sequeira, Vítor
2015-03-01
Identification and integrity verification of sealed containers for security applications can be obtained by employing noninvasive portable optical systems. We present a portable laser range imaging system capable of identifying welds, a byproduct of a container's physical sealing, with micrometer accuracy. It is based on the assumption that each weld has a unique three-dimensional (3-D) structure which cannot be copied or forged. We process the 3-D surface to generate a normalized depth map which is invariant to mechanical alignment errors and that is used to build compact signatures representing the weld. A weld is identified by performing cross correlations of its signature against a set of known signatures. The system has been tested on realistic datasets, containing hundreds of welds, yielding no false positives or false negatives and thus showing the robustness of the system and the validity of the chosen signature.
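The identification step described above — cross-correlating a query signature against a set of known weld signatures — can be sketched as follows. The short 1-D signatures, the database names, and the 0.9 acceptance threshold are illustrative assumptions, not the system's actual representation:

```python
# Minimal sketch of weld identification by normalized cross-correlation
# of compact signatures derived from a normalized depth map.
def ncc(a, b):
    """Normalized (Pearson) cross-correlation of two equal-length signatures."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = (sum((x - mean_a) ** 2 for x in a) *
           sum((y - mean_b) ** 2 for y in b)) ** 0.5
    return num / den

def identify(query, database, threshold=0.9):
    """Return the best-matching weld ID, or None if no stored signature
    correlates above the acceptance threshold (i.e. reject unknowns)."""
    best_id, best_r = None, threshold
    for weld_id, signature in database.items():
        r = ncc(query, signature)
        if r > best_r:
            best_id, best_r = weld_id, r
    return best_id

# Hypothetical signature database and a noisy re-scan of weld-A.
db = {"weld-A": [0.1, 0.5, 0.9, 0.4, 0.2],
      "weld-B": [0.8, 0.2, 0.1, 0.6, 0.9]}
match = identify([0.12, 0.49, 0.91, 0.41, 0.19], db)
```

The threshold is what trades off false positives (accepting a forged or wrong weld) against false negatives (rejecting a genuine re-scan), the two error rates the system reports as zero on its test datasets.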
Identifying QT prolongation from ECG impressions using a general-purpose Natural Language Processor
Denny, Joshua C.; Miller, Randolph A.; Waitman, Lemuel Russell; Arrieta, Mark; Peterson, Joshua F.
2009-01-01
Objective Typically detected via electrocardiograms (ECGs), QT interval prolongation is a known risk factor for sudden cardiac death. Since medications can promote or exacerbate the condition, detection of QT interval prolongation is important for clinical decision support. We investigated the accuracy of natural language processing (NLP) for identifying QT prolongation from cardiologist-generated, free-text ECG impressions compared to corrected QT (QTc) thresholds reported by ECG machines. Methods After integrating negation detection into a locally developed natural language processor, the KnowledgeMap concept identifier, we evaluated NLP-based detection of QT prolongation compared to the calculated QTc on a set of 44,318 ECGs obtained from hospitalized patients. We also created a string query using regular expressions to identify QT prolongation. We calculated sensitivity and specificity of the methods using manual physician review of the cardiologist-generated reports as the gold standard. To investigate causes of “false positive” calculated QTc, we manually reviewed randomly selected ECGs with a long calculated QTc but no mention of QT prolongation. Separately, we validated the performance of the negation detection algorithm on 5,000 manually categorized ECG phrases for any medical concept (not limited to QT prolongation) prior to developing the NLP query for QT prolongation. Results The NLP query for QT prolongation correctly identified 2,364 of 2,373 ECGs with QT prolongation, with a sensitivity of 0.996 and a positive predictive value of 1.000. There were no false positives. The regular expression query had a sensitivity of 0.999 and a positive predictive value of 0.982. In contrast, the positive predictive value of common QTc thresholds derived from ECG machines was 0.07–0.25, with corresponding sensitivities of 0.994–0.046. The negation detection algorithm had a recall of 0.973 and precision of 0.982 for 10,490 concepts found within ECG impressions.
Conclusions NLP and regular expression queries of cardiologists’ ECG interpretations can more effectively identify QT prolongation than the automated QTc intervals reported by ECG machines. Future clinical decision support could employ NLP queries to detect QTc prolongation and other reported ECG abnormalities. PMID:18938105
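The regular-expression-with-negation idea can be sketched as follows: flag an impression that mentions a prolonged QT/QTc unless a negation cue immediately precedes the mention. The patterns below are simplified assumptions for illustration, not the study's actual query or the KnowledgeMap algorithm:

```python
import re

# Simplified, assumed patterns -- not the study's actual query.
QT_PATTERN = re.compile(
    r"\b(?:prolonged\s+qtc?|qtc?\s+(?:interval\s+)?prolongation|long\s+qtc?)\b",
    re.I)
# A negation cue within the ~30 characters before the match, in the same clause.
NEGATION = re.compile(r"\b(?:no|without|denies|ruled\s+out)\b[^.;]{0,30}$", re.I)

def flags_qt_prolongation(impression: str) -> bool:
    """True if the impression asserts (rather than negates) QT prolongation."""
    m = QT_PATTERN.search(impression)
    if not m:
        return False
    # Check the text preceding the match for a negation cue.
    return not NEGATION.search(impression[:m.start()])

examples = [
    "Sinus rhythm with QTc prolongation.",
    "Normal sinus rhythm. No QT prolongation.",
    "Atrial fibrillation with rapid ventricular response.",
]
results = {text: flags_qt_prolongation(text) for text in examples}
```

Even this crude sketch shows why free-text queries can outperform a raw QTc threshold: the cardiologist's assertion already encodes clinical judgment, and the query only needs to read it out correctly, including negations.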
The Illusion of the Positive: The impact of natural and induced mood on older adults’ false recall
Emery, Lisa; Hess, Thomas M.; Elliot, Tonya
2012-01-01
Recent research suggests that affective and motivational processes can influence age differences in memory. In the current study, we examine the impact of both natural and induced mood state on age differences in false recall. Older and younger adults performed a version of the Deese-Roediger-McDermott (DRM; Roediger & McDermott, 1995) false memory paradigm in either their natural mood state or after a positive or negative mood induction. Results indicated that, after accounting for age differences in basic cognitive function, age-related differences in positive mood during the testing session were related to increased false recall in older adults. Inducing older adults into a positive mood also exacerbated age differences in false memory. In contrast, veridical recall did not appear to be systematically influenced by mood. Together, these results suggest that positive mood states can impact older adults’ information processing and potentially increase underlying cognitive age differences. PMID:22292431
Coffey, Christanne; Serra, John; Goebel, Mat; Espinoza, Sarah; Castillo, Edward; Dunford, James
2018-05-03
A significant increase in false positive ST-elevation myocardial infarction (STEMI) electrocardiogram interpretations was noted after replacement of all of the City of San Diego's 110 monitor-defibrillator units with a new brand. These concerns were brought to the manufacturer and a revised interpretive algorithm was implemented. This study evaluated the effects of a revised interpretation algorithm to identify STEMI when used by San Diego paramedics. Data were reviewed 6 months before and 6 months after the introduction of a revised interpretation algorithm. True-positive and false-positive interpretations were identified. Factors contributing to an incorrect interpretation were assessed and patient demographics were collected. A total of 372 (234 preimplementation, 138 postimplementation) cases met inclusion criteria. There was a significant reduction in false positive STEMI (150 preimplementation, 40 postimplementation; p < 0.001) after implementation. The most common factors resulting in false positive before implementation were right bundle branch block, left bundle branch block, and atrial fibrillation. The new algorithm corrected for these misinterpretations with most postimplementation false positives attributed to benign early repolarization and poor data quality. Subsequent follow-up at 10 months showed maintenance of the observed reduction in false positives. This study shows that introducing a revised 12-lead interpretive algorithm resulted in a significant reduction in the number of false positive STEMI electrocardiogram interpretations in a large urban emergency medical services system. Rigorous testing and standardization of new interpretative software is recommended before introduction into a clinical setting to prevent issues resulting from inappropriate cardiac catheterization laboratory activations. Copyright © 2018 Elsevier Inc. All rights reserved.
Bertoldi, Eduardo G; Stella, Steffen F; Rohde, Luis Eduardo P; Polanczyk, Carisi A
2017-05-04
The aim of this research is to evaluate the relative cost-effectiveness of functional and anatomical strategies for diagnosing stable coronary artery disease (CAD), using exercise (Ex)-ECG, stress echocardiogram (ECHO), single-photon emission CT (SPECT), coronary CT angiography (CTA) or stress cardiac magnetic resonance (C-MRI). Decision-analytical model, comparing strategies of sequential tests for evaluating patients with possible stable angina in low, intermediate and high pretest probability of CAD, from the perspective of a developing nation's public healthcare system. Hypothetical cohort of patients with pretest probability of CAD between 20% and 70%. The primary outcome is cost per correct diagnosis of CAD. Proportion of false-positive or false-negative tests and number of unnecessary tests performed were also evaluated. Strategies using Ex-ECG as initial test were the least costly alternatives but generated more frequent false-positive initial tests and false-negative final diagnoses. Strategies based on CTA or ECHO as initial test were the most attractive and resulted in similar cost-effectiveness ratios (I$ 286 and I$ 305 per correct diagnosis, respectively). A strategy based on C-MRI was highly effective for diagnosing stable CAD, but its high cost resulted in unfavourable incremental cost-effectiveness ratios (ICER) in moderate-risk and high-risk scenarios. Non-invasive strategies based on SPECT were dominated (more costly and no more effective than the alternatives). An anatomical diagnostic strategy based on CTA is a cost-effective option for CAD diagnosis. Functional strategies performed equally well when based on ECHO. C-MRI yielded an acceptable ICER only at low pretest probability, and SPECT was not cost-effective in our analysis. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Hofvind, Solveig; Sagstad, Silje; Sebuødegård, Sofie; Chen, Ying; Roman, Marta; Lee, Christoph I
2018-04-01
Purpose To compare rates and tumor characteristics of interval breast cancers (IBCs) detected after a negative versus false-positive screening among women participating in the Norwegian Breast Cancer Screening Program. Materials and Methods The Cancer Registry Regulation approved this retrospective study. Information about 423 445 women aged 49-71 years who underwent 789 481 full-field digital mammographic screening examinations during 2004-2012 was extracted from the Cancer Registry of Norway. Rates and odds ratios of IBC among women with a negative (the reference group) versus a false-positive screening were estimated by using logistic regression models adjusted for age at diagnosis and county of residence. Results A total of 1302 IBCs were diagnosed after 789 481 screening examinations, of which 7.0% (91 of 1302) were detected among women with a false-positive screening as the most recent breast imaging examination before detection. By using negative screening as the reference, adjusted odds ratios of IBCs were 3.3 (95% confidence interval [CI]: 2.6, 4.2) and 2.8 (95% CI: 1.8, 4.4) for women with a false-positive screening without and with needle biopsy, respectively. Women with a previous negative screening had a significantly lower proportion of tumors that were 10 mm or less (14.3% [150 of 1049] vs 50.0% [seven of 14], respectively; P < .01) and grade I tumors (13.2% [147 of 1114] vs 42.9% [six of 14]; P < .01), but a higher proportion of cases with lymph nodes positive for cancer (40.9% [442 of 1080] vs 13.3% [two of 15], respectively; P = .03) compared with women with a previous false-positive screening with benign biopsy. A retrospective review of the screening mammographic examinations identified 42.9% (39 of 91) of the false-positive cases to be the same lesion as the IBC. Conclusion By using a negative screening as the reference, a false-positive screening examination increased the risk of an IBC three-fold. 
The tumor characteristics of IBC after a negative screening were less favorable compared with those detected after a previous false-positive screening. © RSNA, 2017 Online supplemental material is available for this article.
A model for anomaly classification in intrusion detection systems
NASA Astrophysics Data System (ADS)
Ferreira, V. O.; Galhardi, V. V.; Gonçalves, L. B. L.; Silva, R. C.; Cansian, A. M.
2015-09-01
Intrusion Detection Systems (IDS) are traditionally divided into two types according to the detection methods they employ, namely (i) misuse detection and (ii) anomaly detection. Anomaly detection is widely used, and its main advantage is the ability to detect new attacks. However, analyzing the anomalies generated can become expensive, since they often carry no clear information about the malicious events they represent. In this context, this paper presents a model for automated classification of alerts generated by an anomaly-based IDS. The main goal is either to classify detected anomalies into well-defined taxonomies of attacks or to identify whether an alert is a false positive misclassified by the IDS. Some common attacks on computer networks were considered, and we achieved important results that can equip security analysts with better resources for their analyses.
Designing occupancy studies when false-positive detections occur
Clement, Matthew
2016-01-01
1. Recently, estimators have been developed to estimate occupancy probabilities when false-positive detections occur during presence-absence surveys. Some of these estimators combine different types of survey data to improve estimates of occupancy. With these estimators, there is a tradeoff between the number of sample units surveyed and the number and type of surveys at each sample unit. Guidance on efficient design of studies when false positives occur is unavailable. 2. For a range of scenarios, I identified survey designs that minimized the mean square error of the estimate of occupancy. I considered an approach that uses one survey method and two observation states and an approach that uses two survey methods. For each approach, I used numerical methods to identify optimal survey designs when model assumptions were met and parameter values were correctly anticipated, when parameter values were not correctly anticipated, and when the assumption of no unmodelled detection heterogeneity was violated. 3. Under the approach with two observation states, false-positive detections increased the number of recommended surveys, relative to standard occupancy models. If parameter values could not be anticipated, pessimism about detection probabilities avoided poor designs. Detection heterogeneity could require more or fewer repeat surveys, depending on parameter values. If model assumptions were met, the approach with two survey methods was inefficient. However, with poor anticipation of parameter values, with detection heterogeneity, or with removal sampling schemes, combining two survey methods could improve estimates of occupancy. 4. Ignoring false positives can yield biased parameter estimates, yet false positives greatly complicate the design of occupancy studies. Specific guidance for major types of false-positive occupancy models, and for two assumption violations common in field data, can conserve survey resources.
This guidance can be used to design efficient monitoring programs and studies of species occurrence, species distribution, or habitat selection, when false positives occur during surveys.
Bayesian microsaccade detection
Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji
2017-01-01
Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483
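The generative model described above (a hidden drift/microsaccade state that switches at random times, with eye position following a random walk whose velocity distribution depends on the state) can be sketched as a toy forward simulator. All parameter values below are illustrative, not the paper's:

```python
import random

def simulate_eye_trace(n=1000, p_switch=0.01,
                       drift_sd=0.01, saccade_sd=0.5, seed=0):
    """Toy generative model: a hidden state alternates between
    'drift' (0) and 'microsaccade' (1) at random times; eye position
    is a random walk whose step size depends on the hidden state."""
    rng = random.Random(seed)
    state, pos = 0, 0.0
    states, positions = [], []
    for _ in range(n):
        if rng.random() < p_switch:   # state changes at random times
            state = 1 - state
        sd = saccade_sd if state else drift_sd
        pos += rng.gauss(0.0, sd)     # state-dependent velocity
        states.append(state)
        positions.append(pos)
    return states, positions

states, positions = simulate_eye_trace()
```

An inference method like BMD would sample from the posterior over `states` given `positions`; this sketch covers only the forward model.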
Tsuge, Mikio; Izumizaki, Masahiko; Kigawa, Kazuyoshi; Atsumi, Takashi; Homma, Ikuo
2012-12-01
We studied the influence of false proprioceptive information generated by arm vibration and false visual information provided by a mirror in which subjects saw a reflection of another arm on perception of arm position, in a forearm position-matching task in right-handed subjects (n = 17). The mirror was placed between left and right arms, and arranged so that the reflected left arm appeared to the subjects to be their unseen right (reference) arm. The felt position of the right arm, indicated with a paddle, was influenced by vision of the mirror image of the left arm. If the left arm appeared flexed in the mirror, subjects felt their right arm to be more flexed than it was. Conversely, if the left arm was extended, they felt their right arm to be more extended than it was. When reference elbow flexors were vibrated at 70-80 Hz, an illusion of extension of the vibrated arm was elicited. The illusion of a more flexed reference arm evoked by seeing a mirror image of the flexed left arm was reduced by vibration. However, the illusion of extension of the right arm evoked by seeing a mirror image of the extended left arm was increased by vibration. That is, when the mirror and vibration illusions were in the same direction, they reinforced each other. However, when they were in opposite directions, they tended to cancel one another. The present study shows the interaction between proprioceptive and visual information in perception of arm position.
De Carolis, S; Santucci, S; Botta, A; Garofalo, S; Martino, C; Perrelli, A; Salvi, S; Degennaro, Va; de Belvis, Ag; Ferrazzani, S; Scambia, G
2010-06-01
Our aims were to assess the frequency of false-positive IgM antibodies for cytomegalovirus in pregnant women with autoimmune diseases and in healthy women (controls) and to determine their relationship with pregnancy outcome. Data from 133 pregnancies in 118 patients with autoimmune diseases and from 222 pregnancies in 198 controls were assessed. When positive IgM for cytomegalovirus was detected, IgG avidity, cytomegalovirus isolation and polymerase chain reaction for CMV-DNA in maternal urine and amniotic fluid samples were performed in order to identify primary infection or false positivity. A statistically significantly higher rate of false-positive IgM was found in pregnancies with autoimmune diseases (16.5%) in comparison with controls (0.9%). A worse pregnancy outcome was observed among patients with autoimmune disease and false cytomegalovirus IgM in comparison with those without false positivity: earlier week of delivery (p = 0.017), lower neonatal birth weight (p = 0.0004) and neonatal birth weight percentile (p = 0.002), higher rate of intrauterine growth restriction (p = 0.02) and babies weighing less than 2000 g (p = 0.025) were encountered. The presence of false cytomegalovirus IgM in patients with autoimmune diseases could be used as a novel prognostic index of poor pregnancy outcome: it may reflect a non-specific activation of the immune system that could negatively affect pregnancy outcome. Lupus (2010) 19, 844-849.
Zhu, Li-Wei; Yang, Xue-Mei; Xu, Xiao-Qin; Xu, Jian; Lu, Huang-Jun; Yan, Li-Xing
2008-10-01
This study aimed to analyze false-positive reactions in bacterial detection of blood samples with the BacT/ALERT 3D system, to evaluate the specificity of the system, and to reduce false-positive reactions. Every flask flagged as reactive over the past five years was processed for bacterial isolation and identification. When the initial cultures were positive, the remaining samples and the corresponding units were recultured if still available. In total, 11395 blood samples were tested. The results indicated that 122 samples (1.07%) were positive at initial culture; of these, 107 (87.7%) yielded bacteria and 15 (12.3%) yielded nothing. The detection curves of positive samples resulting from bacterial growth showed a clear ascent. It is worth noting that the incubator temperature should be stabilized, avoiding fluctuation; when a culture is flagged, the reaction flask should be kept for some hours of further incubation so as to trace a sharply increasing signal that supports a judgement of true bacterial growth. In conclusion, maintaining temperature stability and avoiding temperature fluctuation in the incubator could decrease the occurrence of false-positive reactions during detection. Reaction flasks with positive results at initial culture should be recultured, and the presence of a sharply ascending logarithmic growth phase in the bacterial growth curve should be checked; both steps help distinguish false-positive from true-positive reactions and thus increase the specificity of the BacT/ALERT system.
Collins, Jeffrey M; Hunter, Mary; Gordon, Wanda; Kempker, Russell R; Blumberg, Henry M; Ray, Susan M
2018-06-01
Following large declines in tuberculosis transmission in the United States, large-scale screening programs targeting low-risk healthcare workers are increasingly a source of false-positive results. We report a large cluster of presumed false-positive tuberculin skin test results in healthcare workers following a change to 50-dose vials of Tubersol tuberculin. Infect Control Hosp Epidemiol 2018;39:750-752.
Rare earth elements (REE) and certain alkaline earths can produce M+2 interferences in ICP-MS because they have sufficiently low second ionization energies. Four REEs (150Sm, 150Nd, 156Gd and 156Dy) produce false positives on 75As and 78Se and 132Ba can produce a false positive ...
Algorithm for reducing false positives in IDS based on correlation analysis
NASA Astrophysics Data System (ADS)
Liu, Jianyi; Li, Sida; Zhang, Ru
2018-03-01
This paper proposes an algorithm for reducing false positives in IDS based on correlation analysis. First, the algorithm analyzes the characteristics that distinguish false positives from real alarms and performs a preliminary screening of false positives; it then clusters alarms by attribute similarity, further reducing the number of alarms; finally, it associates alarms through causal relationships, exploiting the characteristics of multi-step attacks. The paper also proposes a reverse-causation algorithm, built on a previously published attack-association method, that turns alarm information into a complete attack path. Experiments show that the algorithm reduces the number of alarms, improves the efficiency of alarm processing, and contributes to identifying attack goals and improving alarm accuracy.
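The attribute-similarity clustering step can be illustrated with a minimal greedy sketch; the attribute set, similarity measure, and threshold below are assumptions for illustration, not taken from the paper:

```python
def similarity(a, b, keys=("src_ip", "dst_ip", "sig")):
    """Fraction of shared attribute values between two alarms."""
    return sum(a[k] == b[k] for k in keys) / len(keys)

def cluster_alarms(alarms, threshold=0.66):
    """Greedy single-pass clustering: each alarm joins the first
    cluster whose representative is similar enough, else starts one."""
    clusters = []
    for alarm in alarms:
        for cl in clusters:
            if similarity(alarm, cl[0]) >= threshold:
                cl.append(alarm)
                break
        else:
            clusters.append([alarm])
    return clusters

alarms = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9", "sig": "scan"},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9", "sig": "scan"},
    {"src_ip": "10.0.0.2", "dst_ip": "10.0.0.9", "sig": "scan"},
    {"src_ip": "172.16.0.5", "dst_ip": "10.0.0.7", "sig": "exploit"},
]
clusters = cluster_alarms(alarms)  # 4 raw alarms collapse to 2 clusters
```

An analyst then inspects one representative per cluster instead of every raw alarm, which is the volume reduction the abstract describes.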
Wu, Shan; Zhang, Xiaofeng; Shuai, Jiangbing; Li, Ke; Yu, Huizhen; Jin, Chenchen
2016-07-04
To simplify the PNA-FISH (peptide nucleic acid-fluorescence in situ hybridization) test, a molecular beacon based PNA probe combined with fluorescence scanning detection was applied in place of the original microscope observation to detect Listeria monocytogenes. The 5′ and 3′ ends of the L. monocytogenes-specific PNA probes were labeled with a fluorescent group and a quenching group, respectively, to form a molecular beacon based PNA probe. When the PNA probe was used for fluorescence scanning with the N1 treatment as the control, the false positive rate was 11.4% and the false negative rate was 0; with the N2 treatment as the control, the false positive rate decreased to 4.3%, but the false negative rate rose to 18.6%. When the beacon based PNA probe was used for fluorescence scanning with the N1 treatment as the blank control, the false positive rate was 8.6% and the false negative rate was 1.4%; with the N2 treatment as the blank control, the false positive rate was 5.7% and the false negative rate was 1.4%. Compared with the PNA probe, the molecular beacon based PNA probe effectively reduced false positives and false negatives. The hybridization success rates of the two PNA probes were 83.3% and 95.2%, respectively, and those of the two beacon based PNA probes were 91.7% and 90.5%, respectively, indicating that labeling both ends of the PNA probe does not decrease the hybridization rate with the target bacteria. Combining liquid-phase PNA-FISH with fluorescence scanning can significantly improve detection efficiency.
Automated frequency analysis of synchronous and diffuse sleep spindles.
Huupponen, Eero; Saastamoinen, Antti; Niemi, Jukka; Virkkala, Jussi; Hasan, Joel; Värri, Alpo; Himanen, Sari-Leena
2005-01-01
Sleep spindles have different properties at different locations on the cortex. The first main objective was to develop an amplitude-independent multi-channel spindle detection method. The second was to apply the method to study the anteroposterior frequency differences of pure synchronous (visible bilaterally, either frontopolarly or centrally) and diffuse (visible bilaterally both frontopolarly and centrally) sleep spindles. A previously presented spindle detector based on the fuzzy reasoning principle and a level detector were combined to form a multi-channel spindle detector. The spindle detector had a 76.17% true-positive rate and a 0.93% false-positive rate. Pure central spindles were faster, and pure frontal spindles slower, than diffuse spindles measured simultaneously from both locations. The study of the frequency relations of spindles might give new information about thalamocortical sleep spindle generating mechanisms. Copyright (c) 2005 S. Karger AG, Basel.
Implications of false-positive results for future cancer screenings.
Taksler, Glen B; Keating, Nancy L; Rothberg, Michael B
2018-06-01
False-positive cancer screening results may affect a patient's willingness to obtain future screening. The authors conducted logistic regression analysis of 450,484 person-years of electronic medical records (2006-2015) in 92,405 individuals aged 50 to 75 years. Exposures were false-positive breast, prostate, or colorectal cancer screening test results (repeat breast imaging or negative breast biopsy ≤3 months after screening mammography, repeat prostate-specific antigen [PSA] test ≤3 months after PSA test result ≥4.0 ng/mL or negative prostate biopsy ≤3 months after any PSA result, or negative colonoscopy [without biopsy/polypectomy] ≤6 months after a positive fecal occult blood test). Outcomes were up-to-date status with breast or colorectal cancer screening. Covariates included prior screening history, clinical information (eg, family history, obesity, and smoking status), comorbidity, and demographics. Women were more likely to be up to date with breast cancer screening if they previously had false-positive mammography findings (adjusted odds ratio [AOR], 1.43 [95% confidence interval, 1.34-1.51] without breast biopsy and AOR, 2.02 [95% confidence interval, 1.56-2.62] with breast biopsy; both P<.001). The same women were more likely to be up to date with colorectal cancer screening (AOR range, 1.25-1.47 depending on breast biopsy; both P<.001). Men who previously had false-positive PSA testing were more likely to be up to date with colorectal cancer screening (AOR, 1.22 [P = .039] without prostate imaging/biopsy and AOR, 1.60 [P = .028] with imaging/biopsy). Results were stronger for individuals with more false-positive results (all P≤.005). However, women with previous false-positive colorectal cancer fecal occult blood test screening results were found to be less likely to be up to date with breast cancer screening (AOR, 0.73; P<.001). 
Patients who previously had a false-positive breast or prostate cancer screening test were more likely to engage in future screening. Cancer 2018;124:2390-8. © 2018 American Cancer Society.
Case Reports of Aripiprazole Causing False-Positive Urine Amphetamine Drug Screens in Children.
Kaplan, Justin; Shah, Pooja; Faley, Brian; Siegel, Mark E
2015-12-01
Urine drug screens (UDSs) are used to identify the presence of certain medications. One limitation of UDSs is the potential for false-positive results caused by cross-reactivity with other substances. Amphetamines have an extensive list of cross-reacting medications. The literature contains reports of false-positive amphetamine UDSs with multiple antidepressants and antipsychotics. We present 2 cases of presumed false-positive UDSs for amphetamines after ingestion of aripiprazole. Case 1 was a 16-month-old girl who accidentally ingested 15 to 45 mg of aripiprazole. She was lethargic and ataxic at home with 1 episode of vomiting containing no identifiable tablets. She remained sluggish with periods of irritability and was admitted for observation. UDS on 2 consecutive days came back positive for amphetamines. Case 2 was a 20-month-old girl who was brought into the hospital after accidental ingestion of an unknown quantity of her father's medications, which included aripiprazole. UDS on the first day of admission came back positive only for amphetamines. Confirmatory testing with gas chromatography-mass spectrometry (GC-MS) on the blood and urine samples was also performed for both patients on presentation to detect amphetamines and was subsequently negative. Both patients returned to baseline and were discharged from the hospital. To our knowledge, these cases represent the first reports of false-positive amphetamine urine drug tests with aripiprazole. In both cases, aripiprazole was the drug with the highest likelihood of causing the positive amphetamine screen. The implications of these false positives include the possibility of unnecessary treatment and monitoring of patients. Copyright © 2015 by the American Academy of Pediatrics.
Snyder, James W.; Munier, Gina K.; Johnson, Charles L.
2010-01-01
This study compared the BD GeneOhm methicillin-resistant Staphylococcus aureus (MRSA) real-time PCR assay to culture by the use of BBL CHROMagar MRSA for the detection of MRSA in 627 nasal surveillance specimens collected from intensive care unit (ICU) patients. The PCR assay had a sensitivity, specificity, positive predictive value, and negative predictive value of 100%, 96.7%, 70.3%, and 100%, respectively. Nine of 19 false-positive PCR specimens grew methicillin-susceptible S. aureus (MSSA) from broth enrichment culture, of which two demonstrated evidence of mecA gene dropout. Compared to culture by the use of BBL CHROMagar MRSA, the BD GeneOhm MRSA PCR assay demonstrated sensitivity and specificity above 95% for the detection of MRSA nasal colonization and provided shorter turnaround time in generating positive and negative final results. PMID:20181916
Román, R.; Sala, M.; Salas, D.; Ascunce, N.; Zubizarreta, R.; Castells, X.
2012-01-01
Background: Reducing the false-positive risk in breast cancer screening is important. We examined how the screening-protocol and women's characteristics affect the cumulative false-positive risk. Methods: This is a retrospective cohort study of 1 565 364 women aged 45–69 years who underwent 4 739 498 screening mammograms from 1990 to 2006. Multilevel discrete hazard models were used to estimate the cumulative false-positive risk over 10 sequential mammograms under different risk scenarios. Results: The factors affecting the false-positive risk for any procedure and for invasive procedures were double mammogram reading [odds ratio (OR) = 2.06 and 4.44, respectively], two mammographic views (OR = 0.77 and 1.56, respectively), digital mammography (OR = 0.83 for invasive procedures), premenopausal status (OR = 1.31 and 1.22, respectively), use of hormone replacement therapy (OR = 1.03 and 0.84, respectively), previous invasive procedures (OR = 1.52 and 2.00, respectively), and a familial history of breast cancer (OR = 1.18 and 1.21, respectively). The cumulative false-positive risk for women who started screening at age 50–51 was 20.39% [95% confidence interval (CI) 20.02–20.76], ranging from 51.43% to 7.47% in the highest and lowest risk profiles, respectively. The cumulative risk for invasive procedures was 1.76% (95% CI 1.66–1.87), ranging from 12.02% to 1.58%. Conclusions: The cumulative false-positive risk varied widely depending on the factors studied. These findings are relevant to provide women with accurate information and to improve the effectiveness of screening programs. PMID:21430183
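The relationship between per-screen and cumulative false-positive risk can be illustrated with a simplified calculation. Unlike the multilevel discrete hazard models used in the study, this sketch assumes a constant, independent risk at each screen; the per-screen value used below is chosen for illustration:

```python
def cumulative_fp_risk(per_screen_risk, n_screens=10):
    """Probability of at least one false positive over n independent
    screens, each carrying the same per-screen false-positive risk."""
    return 1.0 - (1.0 - per_screen_risk) ** n_screens

# Under this simplification, a per-screen risk of about 2.3% compounds
# to roughly the ~20% cumulative risk over 10 sequential mammograms
# reported for the average profile.
risk = cumulative_fp_risk(0.023, 10)  # ≈ 0.207
```

The wide spread between the highest-risk (51.43%) and lowest-risk (7.47%) profiles corresponds, in this simplified view, to very different per-screen risks across the factors the study models.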
Grégoire, Y; Germain, M; Delage, G
2018-05-01
Since 25 May 2010, all donors at our blood centre who tested false-positive for HIV, HBV, HCV or syphilis are eligible for re-entry after further testing. Donors who have a second false-positive screening test, either during qualification for or after re-entry, are deferred for life. This study reports on factors associated with the occurrence of such deferrals. Rates of second false-positive results were compared by year of deferral, transmissible disease marker, gender, age, donor status (new or repeat) and testing platform (same or different) both at qualification for re-entry and afterwards. Chi-square tests were used to compare proportions. Cox regression was used for multivariate analyses. Participation rates in the re-entry programme were 42.1%; 25.6% failed to qualify for re-entry [different platform: 2.7%; same platform: 42.9% (P < 0.0001)]. After re-entry, rates of deferral for second false-positive results were 8.4% after 3 years [different platform: 1.8%; same platform: 21.4% (P < 0.0001)]. Deferral rates were higher for HIV and HCV than for HBV at qualification when tested on the same platform. In multivariate analyses, the risk of a second deferral for a false-positive result, both at qualification and 3 years after re-entry, was lower for donors deferred on a different platform; this risk was higher for HIV, HCV and syphilis than for HBV, and for new donors if tested on the same platform. Re-entry is more often successful when donors are tested on a testing platform different from the one on which they obtained their first false-positive result. © 2018 International Society of Blood Transfusion.
Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A
2016-06-14
A false positive is the mistake of inferring an effect when none exists, and although α controls the false positive (Type I error) rate in classical hypothesis testing, a given α value is accurate only if the underlying model of randomness appropriately reflects experimentally observed variance. Hypotheses pertaining to one-dimensional (1D) (e.g. time-varying) biomechanical trajectories are most often tested using a traditional zero-dimensional (0D) Gaussian model of randomness, but variance in these datasets is clearly 1D. The purpose of this study was to determine the likelihood that analyzing smooth 1D data with a 0D model of variance will produce false positives. We first used random field theory (RFT) to predict the probability of false positives in 0D analyses. We then validated RFT predictions via numerical simulations of smooth Gaussian 1D trajectories. Results showed that, across a range of public kinematic, force/moment and EMG datasets, the median false positive rate was 0.382 and not the assumed α=0.05, even for a simple two-sample t test involving N=10 trajectories per group. The median false positive rate for experiments involving three-component vector trajectories was p=0.764. This rate increased to p=0.945 for two three-component vector trajectories, and to p=0.999 for six three-component vectors. This implies that experiments involving vector trajectories have a high probability of yielding 0D statistical significance when there is, in fact, no 1D effect. Either (a) explicit a priori identification of 0D variables or (b) adoption of 1D methods can more tightly control α. Copyright © 2016 Elsevier Ltd. All rights reserved.
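The core claim, that applying a pointwise 0D test to smooth 1D trajectories inflates the chance of a false positive far above the nominal α, can be reproduced in miniature. The sketch below is an assumption-laden toy (pure Python, moving-average smoothing, a hardcoded two-tailed t critical value of 2.101 for df=18), not the paper's random field theory machinery:

```python
import random

def smooth_noise(n_pts, window, rng):
    """Smooth 1D Gaussian noise via a moving average."""
    raw = [rng.gauss(0.0, 1.0) for _ in range(n_pts + window)]
    return [sum(raw[i:i + window]) / window for i in range(n_pts)]

def two_sample_t(xs, ys):
    """Classic pooled two-sample t statistic."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    sp = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / (sp * (1 / nx + 1 / ny)) ** 0.5

def fwer(n_sims=300, n_pts=50, n_per_group=10, window=5, seed=1):
    """Rate at which ANY point of a null 1D comparison reaches
    |t| > 2.101, the 0D critical value for alpha=0.05 with df=18."""
    rng = random.Random(seed)
    t_crit, hits = 2.101, 0
    for _ in range(n_sims):
        g1 = [smooth_noise(n_pts, window, rng) for _ in range(n_per_group)]
        g2 = [smooth_noise(n_pts, window, rng) for _ in range(n_per_group)]
        for i in range(n_pts):
            t = two_sample_t([tr[i] for tr in g1], [tr[i] for tr in g2])
            if abs(t) > t_crit:
                hits += 1
                break
    return hits / n_sims

rate = fwer()  # well above the nominal 0.05
```

Because both groups are pure noise, every suprathreshold excursion is a false positive; with many correlated time points tested, the family-wise rate greatly exceeds 0.05, mirroring the paper's 0.382 median estimate for real datasets.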
Development of Technologies for Early Detection and Stratification of Breast Cancer
2012-10-01
at the time of screening, and has an 8-10% false positive rate.3 These drawbacks lead to inaccurate patient diagnosis, which can allow potentially...95% recovery efficiency. Furthermore, using whole blood from healthy donors, we determined we have a zero false positive rate; that is, we have not...detected a single false positive event out of the dozen samples we ran. The technology we developed here is not only useful for the isolation of CTCs
Effect of segmentation algorithms on the performance of computerized detection of lung nodules in CT
Guo, Wei; Li, Qiang
2014-01-01
Purpose: The purpose of this study is to reveal how the performance of a lung nodule segmentation algorithm impacts the performance of lung nodule detection, and to provide guidelines for choosing an appropriate segmentation algorithm with appropriate parameters in a computer-aided detection (CAD) scheme. Methods: The database consisted of 85 CT scans with 111 nodules of 3 mm or larger in diameter from the standard CT lung nodule database created by the Lung Image Database Consortium. The initial nodule candidates were identified as those with a strong response to a selective nodule enhancement filter. A uniform viewpoint reformation technique was applied to each three-dimensional nodule candidate to generate 24 two-dimensional (2D) reformatted images, which were used to distinguish between true nodules and false positives. Six different algorithms were employed to segment the initial nodule candidates in the 2D reformatted images. Finally, 2D features from the segmented areas in the 24 reformatted images were determined, selected, and classified for removal of false positives. There were therefore six similar CAD schemes, differing only in their segmentation algorithms. The six segmentation algorithms were fixed thresholding (FT), Otsu thresholding (OTSU), fuzzy C-means (FCM), the Gaussian mixture model (GMM), the Chan and Vese model (CV), and local binary fitting (LBF). The mean Jaccard index and the mean absolute distance (Dmean) were employed to evaluate the performance of the segmentation algorithms, and the number of false positives at a fixed sensitivity was employed to evaluate the performance of the CAD schemes. Results: For the segmentation algorithms FT, OTSU, FCM, GMM, CV, and LBF, the highest mean Jaccard indices between the segmented nodule and the ground truth were 0.601, 0.586, 0.588, 0.563, 0.543, and 0.553, respectively, and the corresponding Dmean values were 1.74, 1.80, 2.32, 2.80, 3.48, and 3.18 pixels, respectively.
With these segmentation results, the six CAD schemes reported 4.4, 8.8, 3.4, 9.2, 13.6, and 10.4 false positives per CT scan at a sensitivity of 80%. Conclusions: When multiple algorithms are available for segmenting nodule candidates in a CAD scheme, the “optimal” segmentation algorithm does not necessarily lead to the “optimal” CAD detection performance. PMID:25186393
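The two segmentation metrics used above are standard and simple to reproduce. A minimal sketch, representing binary masks as sets of pixel coordinates; note that the one-directional form of Dmean is shown for brevity, while some implementations average both directions:

```python
def jaccard_index(mask_a, mask_b):
    """Overlap between two binary masks, each a set of (row, col) pixels."""
    union = mask_a | mask_b
    if not union:
        return 1.0  # two empty masks agree perfectly
    return len(mask_a & mask_b) / len(union)

def mean_absolute_distance(contour_a, contour_b):
    """Dmean sketch: average, over points of contour_a, of the Euclidean
    distance to the nearest point of contour_b (in pixels)."""
    def nearest(p, pts):
        return min(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 for q in pts)
    return sum(nearest(p, contour_b) for p in contour_a) / len(contour_a)
```

A higher Jaccard index and a lower Dmean both indicate closer agreement with the ground truth.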
Saroha, Kartik; Pandey, Anil Kumar; Sharma, Param Dev; Behera, Abhishek; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
The detection of abdomino-pelvic tumors embedded in or near radioactive urine containing 18F-FDG activity is a challenging task on a PET/CT scan. In this study, we propose and validate a suprathreshold stochastic resonance-based image processing method for the detection of these tumors. The method consists of adding noise to the input image and then thresholding it, which creates one frame of an intermediate image. One hundred such frames were generated and averaged to obtain the final image. The method was implemented using MATLAB R2013b on a personal computer. The noisy image was generated using random Poisson variates corresponding to each pixel of the input image. In order to verify the method, 30 sets of pre-diuretic and corresponding post-diuretic PET/CT scan images (25 tumor images and 5 control images with no tumor) were included. For each pre-diuretic image (input image), 26 images (at threshold values equal to the mean counts multiplied by a constant factor ranging from 1.0 to 2.6 in increments of 0.1) were created and visually inspected, and the image that most closely matched the gold standard (the corresponding post-diuretic image) was selected as the final output image. These images were further evaluated by two nuclear medicine physicians. In 22 out of 25 images, the tumor was successfully detected. In the five control images, no false positives were reported. Thus, the empirical probability of detection of abdomino-pelvic tumors evaluates to 0.88. The proposed method was able to detect abdomino-pelvic tumors on pre-diuretic PET/CT scans with a high probability of success and no false positives.
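The frame-generation loop described above (add Poisson noise, threshold, average many frames) can be sketched as follows. Knuth's algorithm stands in for MATLAB's Poisson variate generator, and the tiny image and threshold are illustrative, not from the study:

```python
import math
import random

def poisson_variate(lam, rng):
    """Knuth's algorithm: Poisson sample with mean lam."""
    if lam <= 0:
        return 0
    limit = math.exp(-lam)
    k, p = 0, rng.random()
    while p > limit:
        k += 1
        p *= rng.random()
    return k

def suprathreshold_sr(image, threshold, n_frames=100, seed=0):
    """Average of n_frames thresholded Poisson-noise realizations of image.
    image is a 2D list of pixel counts; output pixels lie in [0, 1]."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    acc = [[0.0] * w for _ in range(h)]
    for _ in range(n_frames):
        for i in range(h):
            for j in range(w):
                if poisson_variate(image[i][j], rng) >= threshold:
                    acc[i][j] += 1.0
    return [[v / n_frames for v in row] for row in acc]
```

Pixels whose mean count reliably exceeds the threshold converge toward 1 after averaging, while background pixels stay near 0, which is the stochastic-resonance contrast effect the paper exploits.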
False Positives in Exoplanet Detection
NASA Astrophysics Data System (ADS)
Leuquire, Jacob; Kasper, David; Jang-Condell, Hannah; Kar, Aman; Sorber, Rebecca; Suhaimi, Afiq; KELT (Kilodegree Extremely Little Telescope)
2018-06-01
Our team at the University of Wyoming uses a 0.6 m telescope at RBO (Red Buttes Observatory) to help confirm potential exoplanet candidates from the low-resolution, wide-field surveys shared by the KELT (Kilodegree Extremely Little Telescope) team. False positives are common in this work. We carry out transit photometry, a method that comes with its own types of false positives. The most common false positive seen at the confirmation level is an EB (eclipsing binary). Low-resolution survey images are effective at flagging photometric dips in light curves, but they lack the precision to resolve single targets accurately. For example, target star KC18C030621 required RBO's photometric precision to determine that a nearby EB was causing exoplanet-like light curves. Identifying false positives with our telescope is important work because it spares larger, more expensive telescopes the time needed to rule out poor candidate stars. It also furthers the identification of other types of photometric events, like eclipsing binaries, so they can be studied on their own.
Oxybuprocaine induces a false-positive response in immunochromatographic SAS Adeno Test.
Hoshino, Takeshi; Takanashi, Taiji; Okada, Morio; Uchida, Sunao
2002-04-01
To investigate whether a solution of oxybuprocaine hydrochloride, 0.4%, results in a false-positive response in an immunochromatographic SAS Adeno Test. Experimental study. Controls were physiologic saline and 2% lidocaine. Each chemical (100 microl) was diluted in a transport medium. Five drops (200 microl) of the resultant solution were dispensed into the round sample well of a test device. Fifteen samples were tested in each group. Ten minutes after the start of the test, a colored line in the "specimen" portion of the test membrane was visually read as positive or negative by a masked technician. No positive reaction was observed in the control groups (physiologic saline and lidocaine). A false-positive reaction was observed in six samples (33.3%) in the oxybuprocaine group. The positive rate was significantly higher in the oxybuprocaine group than in the control groups (P = 0.0062, Fisher's exact probability test). Oxybuprocaine may induce a false-positive reaction in an immunochromatographic SAS Adeno Test. We recommend the use of lidocaine, instead of oxybuprocaine, for local anesthesia when taking eye swabs from patients with suspected adenovirus infection.
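The group comparison above uses Fisher's exact probability test on a 2x2 table. A one-sided version is straightforward to compute from hypergeometric probabilities; the counts in the usage note match the 6/15 vs 0/15 comparison, though the exact p reported in the paper may reflect a different table layout or sidedness:

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """P(X >= a) for the 2x2 table [[a, b], [c, d]] under the
    hypergeometric null with fixed margins."""
    row1, col1, n = a + b, a + c, a + b + c + d
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    return p
```

For example, fisher_exact_one_sided(6, 9, 0, 15) gives roughly 0.0084, well below the conventional 0.05 threshold, consistent with the paper's conclusion of a significant difference.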
Visual field progression in glaucoma: what is the specificity of the Guided Progression Analysis?
Artes, Paul H; O'Leary, Neil; Nicolela, Marcelo T; Chauhan, Balwantray C; Crabb, David P
2014-10-01
To estimate the specificity of the Guided Progression Analysis (GPA) (Carl Zeiss Meditec, Dublin, CA) in individual patients with glaucoma. Observational cohort study. Thirty patients with open-angle glaucoma. In 30 patients with open-angle glaucoma, 1 eye (median mean deviation [MD], -2.5 decibels [dB]; interquartile range, -4.4 to -1.3 dB) was tested 12 times over 3 months (Humphrey Field Analyzer, Carl Zeiss Meditec; SITA Standard, 24-2). "Possible progression" and "likely progression" were determined with the GPA. These analyses were repeated after the order of the tests had been randomly rearranged (1000 unique permutations). Rate of false-positive alerts of "possible progression" and "likely progression" with the GPA. On average, the specificity of the GPA "likely progression" alert was high: for the entire sample, the mean rate of false-positive alerts after 10 follow-up tests was 2.6%. With "possible progression," the specificity was considerably lower (false-positive rate, 18.5%). Most importantly, the cumulative rate of false-positive alerts varied substantially among patients, from <1% to 80% with "possible progression" and from <0.1% to 20% with "likely progression." Factors associated with false-positive alerts were visual field variability (standard deviation of MD, Spearman's rho = 0.41, P<0.001) and the reliability indices (proportion of false-positive and false-negative responses, fixation losses, rho>0.31, P≤0.10). On average, progression criteria currently used in the GPA have high specificity, but some patients are more likely to show false-positive alerts than others. This is a natural consequence of population-based change criteria and may not matter in clinical trials and studies in which large groups of patients are compared. However, it must be considered when the GPA is used in clinical practice, where specificity needs to be controlled for individual patients. Copyright © 2014 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
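The specificity estimate above exploits a permutation idea: in a stable eye tested repeatedly over a short interval, any progression flag is by construction a false positive, so reshuffling test order and re-applying the criterion estimates its false-positive rate. A generic sketch of that resampling scheme; the slope-based progression criterion shown in the usage line is a hypothetical stand-in, not the pointwise GPA rule:

```python
import random

def slope(values):
    """Least-squares slope of values against their index."""
    n = len(values)
    xbar = (n - 1) / 2
    ybar = sum(values) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(values))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def false_positive_rate(series, flags_progression, n_perm=1000, seed=0):
    """Fraction of random reorderings of a stable series that trigger the flag."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        perm = series[:]
        rng.shuffle(perm)
        if flags_progression(perm):
            hits += 1
    return hits / n_perm
```

Usage might look like false_positive_rate(md_values, lambda s: slope(s) < -0.5), where md_values are the 12 MD readings for one eye.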
Do positive schizotypal symptoms predict false perceptual experiences in nonclinical populations?
Tsakanikos, Elias; Reed, Phil
2005-12-01
We examined whether positive schizotypy (i.e., reports of hallucinatory and delusional-like experiences) in nonclinical participants could predict false perceptual experiences during detection of fast-moving words beyond a possible response bias. The participants (N = 160) were assigned to one of two conditions: they were asked either to make presence/absence judgments (loose criterion) or to read aloud every detected word (strict criterion). Regression analysis showed that high levels of positive schizotypy predicted false alarms in the loose condition and false perceptions of words in the strict condition. The obtained effects were independent of detection accuracy, task order, impulsivity, and social desirability. We discuss the results in the context of information processing biases linked to the positive symptomatology of schizophrenia. Clinical and theoretical implications are also considered.
False Memory in Adults With ADHD: A Comparison Between Subtypes and Normal Controls.
Soliman, Abdrabo Moghazy; Elfar, Rania Mohamed
2017-10-01
To examine the performance on the Deese-Roediger-McDermott task of adults divided into ADHD subtypes and to compare their performance with that of healthy controls, to examine whether adults with ADHD are more susceptible to the production of false memories under experimental conditions. A total of 128 adults with ADHD (50% females), classified into three Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV-TR) subtypes, were compared with 48 controls. The results indicated that the ADHD participants recalled and recognized fewer studied words than the controls, and that the ADHD groups produced more false memories than the control group, with no differences in either false positives or false negatives. The ADHD-combined (ADHD-CT) group recognized significantly more critical words than the control, ADHD-predominantly inattentive (ADHD-IA), and ADHD-predominantly hyperactive-impulsive (ADHD-HI) groups. The ADHD groups recalled and recognized more false positives, were more confident in their false responses, and displayed more knowledge corruption than the controls. The ADHD-CT group recalled and recognized more false positives than the other ADHD groups. The adults with ADHD thus had more false memories than the controls, and false memory formation varied with the ADHD subtypes.
Coarse-to-fine deep neural network for fast pedestrian detection
NASA Astrophysics Data System (ADS)
Li, Yaobin; Yang, Xinmei; Cao, Lijun
2017-11-01
Pedestrian detection, a category of object detection, is a key issue in the fields of video surveillance and automatic driving. Although recent object detection methods, such as Fast/Faster RCNN, have achieved excellent performance, they struggle to meet real-time requirements, which limits their application in real scenarios. A coarse-to-fine deep neural network for fast pedestrian detection is proposed in this paper. A two-stage approach is presented to achieve a fine trade-off between accuracy and speed. In the coarse stage, we train a fast deep convolutional neural network to generate most pedestrian candidates at the cost of a number of false positives. This detector can cover the majority of scales, sizes, and occlusions of pedestrians. After that, a classification network is introduced to refine the pedestrian candidates generated in the previous stage. Through refinement by the classification network, most false detections are easily excluded, and the final pedestrian predictions with bounding boxes and confidence scores are produced. Competitive results have been achieved on the INRIA dataset in terms of accuracy; in particular, the method can achieve real-time detection and is faster than the previous leading methods. The effectiveness of the coarse-to-fine approach to detecting pedestrians is verified, and the accuracy and stability are also improved.
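The coarse-to-fine idea generalizes beyond the paper's specific networks: a cheap, high-recall stage proposes candidates, and an expensive stage filters them. A toy sketch with stand-in scoring functions; the thresholds and scorers are illustrative placeholders, not the paper's networks:

```python
def cascade_detect(windows, coarse_score, fine_score,
                   coarse_thr=0.1, fine_thr=0.5):
    """Two-stage detection: keep survivors of the cheap coarse stage,
    then score them with the expensive fine stage and threshold again."""
    candidates = [w for w in windows if coarse_score(w) >= coarse_thr]
    return [(w, fine_score(w)) for w in candidates if fine_score(w) >= fine_thr]
```

The low coarse threshold keeps recall high (many false positives pass), and the fine stage, run only on the survivors, removes most of them cheaply relative to scoring every window.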
VarBin, a novel method for classifying true and false positive variants in NGS data
2013-01-01
Background Variant discovery for rare genetic diseases using Illumina genome or exome sequencing involves screening of up to millions of variants to find only the one or few causative variant(s). Sequencing or alignment errors create "false positive" variants, which are often retained in the variant screening process. Methods to remove false positive variants often retain many false positive variants. This report presents VarBin, a method to prioritize variants based on a false positive variant likelihood prediction. Methods VarBin uses the Genome Analysis Toolkit variant calling software to calculate the variant-to-wild type genotype likelihood ratio at each variant change and position divided by read depth. The resulting Phred-scaled, likelihood-ratio by depth (PLRD) was used to segregate variants into 4 Bins with Bin 1 variants most likely true and Bin 4 most likely false positive. PLRD values were calculated for a proband of interest and 41 additional Illumina HiSeq, exome and whole genome samples (proband's family or unrelated samples). At variant sites without apparent sequencing or alignment error, wild type/non-variant calls cluster near -3 PLRD and variant calls typically cluster above 10 PLRD. Sites with systematic variant calling problems (evident by variant quality scores and biases as well as displayed on the iGV viewer) tend to have higher and more variable wild type/non-variant PLRD values. Depending on the separation of a proband's variant PLRD value from the cluster of wild type/non-variant PLRD values for background samples at the same variant change and position, the VarBin method's classification is assigned to each proband variant (Bin 1 to Bin 4). Results To assess VarBin performance, Sanger sequencing was performed on 98 variants in the proband and background samples. True variants were confirmed in 97% of Bin 1 variants, 30% of Bin 2, and 0% of Bin 3/Bin 4. 
Conclusions These data indicate that VarBin correctly classifies the majority of true variants as Bin 1 and Bin 3/4 contained only false positive variants. The "uncertain" Bin 2 contained both true and false positive variants. Future work will further differentiate the variants in Bin 2. PMID:24266885
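The Bin assignment turns on how far a proband's PLRD value sits above the cluster of background wild-type PLRD values at the same site. A minimal sketch of that logic; the margin thresholds here are hypothetical placeholders, not the published cutoffs:

```python
def varbin_bin(proband_plrd, background_plrds, margin=5.0):
    """Assign Bin 1 (most likely true) .. Bin 4 (most likely false positive)
    from the separation between the proband's PLRD value and the
    background wild-type PLRD cluster at the same variant and position."""
    separation = proband_plrd - max(background_plrds)
    if separation > 2 * margin:
        return 1
    if separation > margin:
        return 2
    if separation > 0:
        return 3
    return 4
```

With background wild-type calls clustering near -3 PLRD and true variant calls typically above 10, as the abstract describes, a well-separated variant lands in Bin 1 and an overlapping one in Bin 4.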
Combining multiple ChIP-seq peak detection systems using combinatorial fusion.
Schweikert, Christina; Brown, Stuart; Tang, Zuojian; Smith, Phillip R; Hsu, D Frank
2012-01-01
Due to the recent rapid development of ChIP-seq technologies, which use high-throughput next-generation DNA sequencing to identify the targets of chromatin immunoprecipitation, an increasing amount of sequencing data is being generated, providing us with greater opportunity to analyze genome-wide protein-DNA interactions. In particular, we are interested in evaluating and enhancing computational and statistical techniques for locating protein binding sites. Many peak detection systems have been developed; in this study, we utilize the following six: CisGenome, MACS, PeakSeq, QuEST, SISSRs, and TRLocator. We define two methods to merge and rescore the regions of two peak detection systems and analyze the performance based on average precision and coverage of transcription start sites. The results indicate that ChIP-seq peak detection can be improved by fusion using score or rank combination. Our method of combination and fusion analysis would provide a means for generic assessment of available technologies and systems and assist researchers in choosing an appropriate system (or fusion method) for analyzing ChIP-seq data. This analysis offers an alternate approach for increasing true positive rates while decreasing false positive rates, hence improving the ChIP-seq peak identification process.
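Score combination and rank combination, the two fusion operations named above, can be sketched generically: average each region's raw scores across systems, or convert scores to ranks first and average those. The simple averaging below is one basic instance of combinatorial fusion; the study's exact rescoring is in the paper:

```python
def to_ranks(scores):
    """Rank positions 1..n, with rank 1 assigned to the highest score."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def score_fusion(scores_a, scores_b):
    """Average raw per-region scores from two systems (higher = better)."""
    return [(a + b) / 2 for a, b in zip(scores_a, scores_b)]

def rank_fusion(scores_a, scores_b):
    """Average per-region ranks from two systems (lower = better)."""
    return [(a + b) / 2 for a, b in zip(to_ranks(scores_a), to_ranks(scores_b))]
```

Rank fusion is insensitive to differences in the systems' score scales, which is one reason it is often preferred when combining heterogeneous peak callers.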
Dehon, Hedwige; Larøi, Frank; Van der Linden, Martial
2010-10-01
This study examined the influence of emotional valence on the production of DRM false memories (Roediger & McDermott, 1995). Participants were presented with neutral, positive, or negative DRM lists for a later recognition (Experiment 1) or recall (Experiment 2) test. In both experiments, confidence and recollective experience (i.e., "Remember-Know" judgments; Tulving, 1985) were also assessed. Results consistently showed that, compared with neutral lists, affective lists induced more false recognition and recall of nonpresented critical lures. Moreover, although confidence ratings did not differ between the false remembering from the different kinds of lists, "Remember" responses were more often associated with negative than positive and neutral false remembering of the critical lures. In contrast, positive false remembering of the critical lures was more often associated with "Know" responses. These results are discussed in light of the Paradoxical Negative Emotion (PNE) hypothesis (Porter, Taylor, & ten Bricke, 2008). (PsycINFO Database Record (c) 2010 APA, all rights reserved).
Lilford, Richard J; Bentham, Louise M; Armstrong, Matthew J; Neuberger, James; Girling, Alan J
2013-06-20
Evaluation of predictive value of liver function tests (LFTs) for the detection of liver-related disease in primary care. A prospective observational study. 11 UK primary care practices. Patients (n=1290) with an abnormal eight-panel LFT (but no previously diagnosed liver disease). Patients were investigated by recording clinical features, and repeating LFTs, specific tests for individual liver diseases, and abdominal ultrasound scan. Patients were characterised as having: hepatocellular disease; biliary disease; tumours of the hepato-biliary system and none of the above. The relationship between LFT results and disease categories was evaluated by stepwise regression and logistic discrimination, with adjustment for demographic and clinical factors. True and False Positives generated by all possible LFT combinations were compared with a view towards optimising the choice of analytes in the routine LFT panel. Regression methods showed that alanine aminotransferase (ALT) was associated with hepatocellular disease (32 patients), while alkaline phosphatase (ALP) was associated with biliary disease (12 patients) and tumours of the hepatobiliary system (9 patients). A restricted panel of ALT and ALP was an efficient choice of analytes, comparing favourably with the complete panel of eight analytes, provided that 48 False Positives can be tolerated to obtain one additional True Positive. Repeating a complete panel in response to an abnormal reading is not the optimal strategy. The LFT panel can be restricted to ALT and ALP when the purpose of testing is to exclude liver disease in primary care.
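Comparing "all possible LFT combinations" amounts to enumerating analyte subsets and counting True and False Positives for each candidate panel. A toy sketch of that enumeration; the records in the test are synthetic illustrations, not study data:

```python
from itertools import combinations

def panel_counts(records, panel):
    """records: (set of abnormal analytes, has_liver_disease) pairs.
    A panel flags a patient if any of its analytes is abnormal."""
    tp = fp = 0
    for abnormal, diseased in records:
        if abnormal & panel:
            if diseased:
                tp += 1
            else:
                fp += 1
    return tp, fp

def compare_all_panels(analytes, records):
    """Map each non-empty analyte subset to its (TP, FP) counts."""
    return {
        panel: panel_counts(records, set(panel))
        for r in range(1, len(analytes) + 1)
        for panel in combinations(sorted(analytes), r)
    }
```

Ranking panels by the False Positives incurred per additional True Positive reproduces the kind of trade-off the study reports for the restricted ALT-plus-ALP panel.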
Crowdsourcing lung nodules detection and annotation
NASA Astrophysics Data System (ADS)
Boorboor, Saeed; Nadeem, Saad; Park, Ji Hwan; Baker, Kevin; Kaufman, Arie
2018-03-01
We present crowdsourcing as an additional modality to aid radiologists in the diagnosis of lung cancer from clinical chest computed tomography (CT) scans. More specifically, a complete work flow is introduced which can help maximize the sensitivity of lung nodule detection by utilizing the collective intelligence of the crowd. We combine the concept of overlapping thin-slab maximum intensity projections (TS-MIPs) and cine viewing to render short videos that can be outsourced as an annotation task to the crowd. These videos are generated by linearly interpolating overlapping TS-MIPs of CT slices through the depth of each quadrant of a patient's lung. The resultant videos are outsourced to an online community of non-expert users who, after a brief tutorial, annotate suspected nodules in these video segments. Using our crowdsourcing work flow, we achieved a lung nodule detection sensitivity of over 90% for 20 patient CT datasets (containing 178 lung nodules with sizes between 1 and 30 mm), and only 47 false positives from a total of 1021 annotations on nodules of all sizes (96% sensitivity for nodules >4 mm). These results show that crowdsourcing can be a robust and scalable modality to aid radiologists in screening for lung cancer, directly or in combination with computer-aided detection (CAD) algorithms. For CAD algorithms, the presented work flow can provide highly accurate training data to overcome the high false-positive rate (per scan) problem. We also provide, for the first time, analysis on nodule size and position which can help improve CAD algorithms.
2006-10-01
…lead to false-positive segmental hair analysis results. Due to the increased risk of false positives associated with segmental hair analysis … to 200 mg of hair (to allow confirmation testing). The segments are typically washed to remove external contaminants and the chemicals in the hair … further confirmation. The method overcomes the false positives associated with traditional segmental hair analysis such … By measuring the …
Pavletic, Adriana J; Marques, Adriana R
2017-07-15
False-positive serology for Lyme disease was reported in patients with acute infectious mononucleosis. Here we describe 2 patients with early disseminated Lyme disease who were misdiagnosed with infectious mononucleosis based on false-positive tests for primary Epstein-Barr virus infection. Published by Oxford University Press for the Infectious Diseases Society of America 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Point-of-care urine tests for smoking status and isoniazid treatment monitoring in adult patients.
Nicolau, Ioana; Tian, Lulu; Menzies, Dick; Ostiguy, Gaston; Pai, Madhukar
2012-01-01
Poor adherence to isoniazid (INH) preventive therapy (IPT) is an impediment to effective control of latent tuberculosis (TB) infection. TB patients who smoke are at higher risk of latent TB infection, active disease, and TB mortality, and may have lower adherence to their TB medications. The objective of our study was to validate IsoScreen and SmokeScreen (GFC Diagnostics, UK), two point-of-care tests for monitoring INH intake and determining smoking status. The tests could be used together in the same individual to help identify patients with a high-risk profile and provide a tailored treatment plan that includes medication management, adherence interventions, and smoking cessation programs. 200 adult outpatients attending the TB and/or the smoking cessation clinic were recruited at the Montreal Chest Institute. Sensitivity and specificity were measured for each test against the corresponding composite reference standard. Test reliability was measured using kappa statistic for intra-rater and inter-rater agreement. Univariate and multivariate logistic regression models were used to explore possible covariates that might be related to false-positive and false-negative test results. IsoScreen had a sensitivity of 93.2% (95% confidence interval [CI] 80.3, 98.2) and specificity of 98.7% (94.8, 99.8). IsoScreen had intra-rater agreement (kappa) of 0.75 (0.48, 0.94) and inter-rater agreement of 0.61 (0.27, 0.90). SmokeScreen had a sensitivity of 69.2% (56.4, 79.8), specificity of 81.6% (73.0, 88.0), intra-rater agreement of 0.77 (0.56, 0.94), and inter-rater agreement of 0.66 (0.42, 0.88). False-positive SmokeScreen tests were strongly associated with INH treatment. IsoScreen had high validity and reliability, whereas SmokeScreen had modest validity and reliability. SmokeScreen tests did not perform well in a population receiving INH due to the association between INH treatment and false-positive SmokeScreen test results. 
Development of the next generation SmokeScreen assay should account for this potential interference.
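The inter- and intra-rater agreement figures above are Cohen's kappa values, which are simple to compute from two raters' paired readings. A sketch, with toy readings for illustration:

```python
def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two equal-length label sequences:
    (observed agreement - expected-by-chance agreement) / (1 - expected)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    labels = set(rater1) | set(rater2)
    expected = sum(
        (rater1.count(lab) / n) * (rater2.count(lab) / n) for lab in labels
    )
    return (observed - expected) / (1 - expected)
```

Kappa near 1 indicates near-perfect agreement beyond chance; the 0.61-0.77 range reported above corresponds to substantial agreement on common interpretive scales.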
Statistics provide guidance for indigenous organic carbon detection on Mars missions.
Sephton, Mark A; Carter, Jonathan N
2014-08-01
Data from the Viking and Mars Science Laboratory missions indicate the presence of organic compounds that are not definitively martian in origin. Both contamination and confounding mineralogies have been suggested as alternatives to indigenous organic carbon. Intuitive thought suggests that we are repeatedly obtaining data that confirms the same level of uncertainty. Bayesian statistics may suggest otherwise. If an organic detection method has a true positive to false positive ratio greater than one, then repeated organic matter detection progressively increases the probability of indigeneity. Bayesian statistics also reveal that methods with higher ratios of true positives to false positives give higher overall probabilities and that detection of organic matter in a sample with a higher prior probability of indigenous organic carbon produces greater confidence. Bayesian statistics, therefore, provide guidance for the planning and operation of organic carbon detection activities on Mars. Suggestions for future organic carbon detection missions and instruments are as follows: (i) On Earth, instruments should be tested with analog samples of known organic content to determine their true positive to false positive ratios. (ii) On the mission, for an instrument with a true positive to false positive ratio above one, it should be recognized that each positive detection of organic carbon will result in a progressive increase in the probability of indigenous organic carbon being present; repeated measurements, therefore, can overcome some of the deficiencies of a less-than-definitive test. (iii) For a fixed number of analyses, the highest true positive to false positive ratio method or instrument will provide the greatest probability that indigenous organic carbon is present. 
(iv) On Mars, analyses should concentrate on samples with highest prior probability of indigenous organic carbon; intuitive desires to contrast samples of high prior probability and low prior probability of indigenous organic carbon should be resisted.
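The repeated-detection argument above is iterated Bayes' rule, driven by the ratio of the true positive rate to the false positive rate. A sketch; the rates in the usage note are illustrative numbers, not instrument specifications:

```python
def posterior_after_detections(prior, tpr, fpr, n_detections):
    """P(indigenous organic carbon | n positive detections), applying
    Bayes' rule once per detection with P(+|indigenous) = tpr and
    P(+|not indigenous) = fpr."""
    p = prior
    for _ in range(n_detections):
        evidence = tpr * p + fpr * (1 - p)
        p = tpr * p / evidence
    return p
```

For example, with prior 0.5, tpr 0.8, and fpr 0.4 (a likelihood ratio of 2), each positive detection raises the posterior: roughly 0.67 after one detection and 0.8 after two, illustrating how a less-than-definitive test still accumulates confidence when its true-to-false-positive ratio exceeds one.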
Mitsui, Jun; Fukuda, Yoko; Azuma, Kyo; Tozaki, Hirokazu; Ishiura, Hiroyuki; Takahashi, Yuji; Goto, Jun; Tsuji, Shoji
2010-07-01
We have recently found that multiple rare variants of the glucocerebrosidase gene (GBA) confer a robust risk for Parkinson disease, supporting the 'common disease-multiple rare variants' hypothesis. To develop an efficient method of identifying rare variants in a large number of samples, we applied multiplexed resequencing using a next-generation sequencer to identification of rare variants of GBA. Sixteen sets of pooled DNAs from six pooled DNA samples were prepared. Each set of pooled DNAs was subjected to polymerase chain reaction to amplify the target gene (GBA) covering 6.5 kb, pooled into one tube with barcode indexing, and then subjected to extensive sequence analysis using the SOLiD System. Individual samples were also subjected to direct nucleotide sequence analysis. With the optimization of data processing, we were able to extract all the variants from 96 samples with acceptable rates of false-positive single-nucleotide variants.
Positive events protect children from causal false memories for scripted events.
Melinder, Annika; Toffalini, Enrico; Geccherle, Eleonora; Cornoldi, Cesare
2017-11-01
Adults produce fewer inferential false memories for scripted events when their conclusions are emotionally charged than when they are neutral, but it is not clear whether the same effect is also found in children. In the present study, we examined this issue in a sample of 132 children aged 6-12 years (mean 9 years, 3 months). Participants encoded photographs depicting six script-like events that had a positively, negatively, or neutrally valenced ending. Subsequently, true and false recognition memory of photographs related to the observed scripts was tested as a function of emotionality. Causal errors, a type of false memory thought to stem from inferential processes, were found to be affected by valence: children made fewer causal errors for positive than for neutral or negative events. Hypotheses are proposed as to why adults administered similar versions of the same paradigm were protected against inferential false memories not only by positive endings (as for children) but also by negative ones.
Expanding the scope of noninvasive prenatal testing: detection of fetal microdeletion syndromes.
Wapner, Ronald J; Babiarz, Joshua E; Levy, Brynn; Stosic, Melissa; Zimmermann, Bernhard; Sigurjonsson, Styrmir; Wayham, Nicholas; Ryan, Allison; Banjevic, Milena; Lacroute, Phil; Hu, Jing; Hall, Megan P; Demko, Zachary; Siddiqui, Asim; Rabinowitz, Matthew; Gross, Susan J; Hill, Matthew; Benn, Peter
2015-03-01
The purpose of this study was to estimate the performance of a single-nucleotide polymorphism (SNP)-based noninvasive prenatal test for 5 microdeletion syndromes. Four hundred sixty-nine samples (358 plasma samples from pregnant women, 111 artificial plasma mixtures) were amplified with the use of a massively multiplexed polymerase chain reaction, sequenced, and analyzed with the use of the Next-generation Aneuploidy Test Using SNPs algorithm for the presence or absence of deletions of 22q11.2, 1p36, distal 5p, and the Prader-Willi/Angelman region. Detection rates were 97.8% for a 22q11.2 deletion (45/46) and 100% for Prader-Willi (15/15), Angelman (21/21), 1p36 deletion (1/1), and cri-du-chat syndromes (24/24). False-positive rates were 0.76% for 22q11.2 deletion syndrome (3/397) and 0.24% for cri-du-chat syndrome (1/419). No false positives occurred for Prader-Willi (0/428), Angelman (0/442), or 1p36 deletion syndromes (0/422). SNP-based noninvasive prenatal microdeletion screening is highly accurate. Because clinically relevant microdeletions and duplications occur in >1% of pregnancies, regardless of maternal age, noninvasive screening for the general pregnant population should be considered. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Tumor Burden Analysis on Computed Tomography by Automated Liver and Tumor Segmentation
Linguraru, Marius George; Richbourg, William J.; Liu, Jianfei; Watt, Jeremy M.; Pamulapati, Vivek; Wang, Shijun; Summers, Ronald M.
2013-01-01
The paper presents the automated computation of hepatic tumor burden from abdominal CT images of diseased populations with inconsistent enhancement. The automated segmentation of livers is addressed first. A novel three-dimensional (3D) affine invariant shape parameterization is employed to compare local shape across organs. By generating a regular sampling of the organ's surface, this parameterization can be effectively used to compare features of a set of closed 3D surfaces point-to-point, while avoiding common problems with the parameterization of concave surfaces. From an initial segmentation of the livers, the areas of atypical local shape are determined using training sets. A geodesic active contour locally corrects the segmentations of the livers in abnormal images. Graph cuts segment the hepatic tumors using shape and enhancement constraints. Liver segmentation errors are reduced significantly and all tumors are detected. Finally, support vector machines and feature selection are employed to reduce the number of false tumor detections. A tumor detection true positive fraction of 100% is achieved at 2.3 false positives/case, and the tumor burden is estimated with 0.9% error. Results from the test data demonstrate the method's robustness in analyzing livers from difficult clinical cases, allowing the temporal monitoring of patients with hepatic cancer. PMID:22893379
A Realistic Seizure Prediction Study Based on Multiclass SVM.
Direito, Bruno; Teixeira, César A; Sales, Francisco; Castelo-Branco, Miguel; Dourado, António
2017-05-01
A patient-specific algorithm for epileptic seizure prediction, based on multiclass support-vector machines (SVM) and multi-channel high-dimensional feature sets, is presented. The feature sets, combined with multiclass classification and post-processing schemes, aim at the generation of alarms and reduced influence of false positives. This study considers 216 patients from the European Epilepsy Database, and includes 185 patients with scalp EEG recordings and 31 with intracranial data. The strategy was tested over a total of 16,729.80 h of inter-ictal data, including 1206 seizures. We found an overall sensitivity of 38.47% and a false positive rate of 0.20 per hour. The performance of the method achieved statistical significance in 24 patients (11% of the patients). Despite the encouraging results previously reported on specific datasets, prospective demonstration on long-term EEG recordings has been limited. Our study presents a prospective analysis of a large, heterogeneous, multicentric dataset. The statistical framework, based on conservative assumptions, reflects a realistic approach compared to constrained datasets and/or in-sample evaluations. The improvement of these results, through the definition of an appropriate set of features able to improve the distinction between the pre-ictal and non-pre-ictal states and hence minimize the effect of confounding variables, remains a key aspect.
Ensemble candidate classification for the LOTAAS pulsar survey
NASA Astrophysics Data System (ADS)
Tan, C. M.; Lyon, R. J.; Stappers, B. W.; Cooper, S.; Hessels, J. W. T.; Kondratiev, V. I.; Michilli, D.; Sanidas, S.
2018-03-01
One of the biggest challenges arising from modern large-scale pulsar surveys is the number of candidates generated. Here, we implemented several improvements to the machine learning (ML) classifier previously used by the LOFAR Tied-Array All-Sky Survey (LOTAAS) to look for new pulsars by filtering the candidates obtained during periodicity searches. To assist the ML algorithm, we have introduced new features which capture the frequency and time evolution of the signal, and improved the signal-to-noise calculation to account for broad profiles. We enhanced the ML classifier by including a third class characterizing RFI instances, allowing candidates arising from RFI to be isolated and reducing the false positive return rate. We also introduced a new training data set used by the ML algorithm that includes a large sample of pulsars misclassified by the previous classifier. Lastly, we developed an ensemble classifier composed of five different decision trees. Taken together, these updates improve the pulsar recall rate by 2.5 per cent, while also improving the ability to identify pulsars with wide pulse profiles, often misclassified by the previous classifier. The new ensemble classifier is also able to reduce the percentage of false positive candidates identified from each LOTAAS pointing from 2.5 per cent (˜500 candidates) to 1.1 per cent (˜220 candidates).
Ning, Dianhua; He, Changtian; Liu, Zhengjie; Liu, Cui; Wu, Qilong; Zhao, TingTing; Liu, Renyong
2017-05-21
Human telomerase RNA (hTR), a component of telomerase, is considered a biomarker for monitoring tumor cells owing to its differential expression in tumor cells and normal somatic cells. Numerous fluorescent probes have been designed to investigate nucleic acids; however, most are limited because they are time-consuming, require trained operators, and can even produce false positive signals in the cellular environment. Herein, we report a dual-colored ratiometric fluorescent oligonucleotide probe that achieves reliable detection of human telomerase RNA in cell extracts. The probe is constructed from a dual-labeled fluorescent oligonucleotide hybridized with a target-complementary Dabcyl-labeled oligonucleotide. In the presence of the target, the dual-labeled fluorescent oligonucleotide folds into a hairpin structure, which leads to fluorescence resonance energy transfer (FRET) under UV excitation. Compared to conventional methods, this strategy effectively avoids false positive signals; it not only possesses the advantages of simplicity and high specificity but also offers signal stability and distinguishable color variation. Moreover, a quantitative assay of hTR would have a far-reaching impact on research into the telomerase mechanism and even tumor diagnosis.
Kamps-Hughes, Nick; McUsic, Andrew; Kurihara, Laurie; Harkins, Timothy T.; Pal, Prithwish; Ray, Claire
2018-01-01
The accurate detection of ultralow allele frequency variants in DNA samples is of interest in both research and medical settings, particularly in liquid biopsies where cancer mutational status is monitored from circulating DNA. Next-generation sequencing (NGS) technologies employing molecular barcoding have shown promise but significant sensitivity and specificity improvements are still needed to detect mutations in a majority of patients before the metastatic stage. To address this we present analytical validation data for ERASE-Seq (Elimination of Recurrent Artifacts and Stochastic Errors), a method for accurate and sensitive detection of ultralow frequency DNA variants in NGS data. ERASE-Seq differs from previous methods by creating a robust statistical framework to utilize technical replicates in conjunction with background error modeling, providing a 10 to 100-fold reduction in false positive rates compared to published molecular barcoding methods. ERASE-Seq was tested using spiked human DNA mixtures with clinically realistic DNA input quantities to detect SNVs and indels between 0.05% and 1% allele frequency, the range commonly found in liquid biopsy samples. Variants were detected with greater than 90% sensitivity and a false positive rate below 0.1 calls per 10,000 possible variants. The approach represents a significant performance improvement compared to molecular barcoding methods and does not require changing molecular reagents. PMID:29630678
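The replicate-plus-background-error idea described above can be illustrated with a toy sketch. This is not the published ERASE-Seq model: the background error rate, read depths, and significance threshold below are invented for illustration only.

```python
import math

def tail_p(alt, depth, bg_rate):
    """P(X >= alt) for X ~ Binomial(depth, bg_rate): the chance that background
    error alone produces at least `alt` alternate-allele reads."""
    cdf = sum(math.comb(depth, k) * bg_rate**k * (1 - bg_rate)**(depth - k)
              for k in range(alt))
    return 1.0 - cdf

def call_variant(alt1, depth1, alt2, depth2, bg_rate=0.001, alpha=1e-3):
    """Call a variant only if BOTH technical replicates are inconsistent with the
    background error model -- a simplified stand-in for the ERASE-Seq framework."""
    return (tail_p(alt1, depth1, bg_rate) < alpha
            and tail_p(alt2, depth2, bg_rate) < alpha)

print(call_variant(25, 5000, 22, 5000))  # ~0.5% variant seen in both replicates: True
print(call_variant(25, 5000, 3, 5000))   # stochastic error in one replicate: False
```

Requiring concordance across replicates is what suppresses the stochastic errors; the second call fails because three alternate reads at 5000x depth are fully consistent with a 0.1% background error rate.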
van Brunschot, Sharon L.; Bergervoet, Jan H. W.; Pagendam, Daniel E.; de Weerdt, Marjanne; Geering, Andrew D. W.; Drenth, André; van der Vlugt, René A. A.
2014-01-01
Efficient and reliable diagnostic tools for the routine indexing and certification of clean propagating material are essential for the management of pospiviroid diseases in horticultural crops. This study describes the development of a true multiplexed diagnostic method for the detection and identification of all nine currently recognized pospiviroid species in one assay using Luminex bead-based suspension array technology. In addition, a new data-driven, statistical method is presented for establishing thresholds for positivity for individual assays within multiplexed arrays. When applied to the multiplexed array data generated in this study, the new method was shown to have better control of false positive and false negative results than two other commonly used approaches for setting thresholds. The 11-plex Luminex MagPlex-TAG pospiviroid array described here has a unique hierarchical assay design, incorporating a near-universal assay in addition to nine species-specific assays, and a co-amplified plant internal control assay for quality assurance purposes. All assays of the multiplexed array were shown to be 100% specific, sensitive and reproducible. The multiplexed array described herein is robust, easy to use, displays unambiguous results and has strong potential for use in routine pospiviroid indexing to improve disease management strategies. PMID:24404188
Mass detection with digitized screening mammograms by using Gabor features
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Agyepong, Kwabena
2007-03-01
Breast cancer is the leading cancer among American women. The current lifetime risk of developing breast cancer is 13.4% (one in seven). Mammography is the most effective technology presently available for breast cancer screening. With digital mammograms, computer-aided detection (CAD) has proven to be a useful tool for radiologists. In this paper, we focus on detection of masses, a common category of breast cancer relative to calcification and architectural distortion. We propose a new mass detection algorithm utilizing Gabor filters, termed "Gabor Mass Detection" (GMD). There are three steps in the GMD algorithm: (1) preprocessing, (2) generating alarms, and (3) classification (reducing false alarms). Down-sampling, quantization, denoising, and enhancement are done in the preprocessing step. Then a total of 30 Gabor filtered images (6 bands by 5 orientations) are produced. Alarm segments are generated by thresholding four Gabor images of full orientations (Stage-I classification) with image-dependent thresholds computed via histogram analysis. Next, a set of edge histogram descriptors (EHD) is extracted from 24 Gabor images (6 by 4) to be used for Stage-II classification. After clustering the EHD features with the fuzzy C-means method, a k-nearest neighbor classifier is used to reduce the number of false alarms. We analyzed 431 digitized mammograms (159 normal vs. 272 cancerous images, from the DDSM project, University of South Florida) with the proposed GMD algorithm, using ten-fold cross-validation for testing. The GMD performance is as follows: sensitivity (true positive rate) = 0.88 at 1.25 false positives per image (FPI), with an area under the ROC curve of 0.83. The overall performance of the GMD algorithm is satisfactory, and the accuracy of locating masses (highlighting the boundaries of suspicious areas) is relatively high. Furthermore, the GMD algorithm can successfully detect early-stage malignant masses (those with small Assessment and low Subtlety values). In addition, Gabor filtered images are used in both stages of classification, which greatly simplifies the GMD algorithm.
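The 6-band-by-5-orientation Gabor filter bank at the heart of the GMD algorithm can be sketched directly in NumPy. The kernel size, wavelengths, and sigmas below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real part of a 2-D Gabor filter: a Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate into filter orientation
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

# a 6-band x 5-orientation bank, mirroring the 30 Gabor filtered images of GMD
bank = [gabor_kernel(size=31, wavelength=2.0**(b + 1), theta=o * np.pi / 5, sigma=2.0**b)
        for b in range(6) for o in range(5)]
print(len(bank), bank[0].shape)  # 30 (31, 31)
```

Convolving a mammogram with each kernel (e.g. via `scipy.signal.convolve2d`) yields the per-band, per-orientation response images from which alarms and EHD features would be derived.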
Strickland, Erin C; Geer, M Ariel; Hong, Jiyong; Fitzgerald, Michael C
2014-01-01
Detection and quantitation of protein-ligand binding interactions is important in many areas of biological research. Stability of proteins from rates of oxidation (SPROX) is an energetics-based technique for identifying the proteins targets of ligands in complex biological mixtures. Knowing the false-positive rate of protein target discovery in proteome-wide SPROX experiments is important for the correct interpretation of results. Reported here are the results of a control SPROX experiment in which chemical denaturation data is obtained on the proteins in two samples that originated from the same yeast lysate, as would be done in a typical SPROX experiment except that one sample would be spiked with the test ligand. False-positive rates of 1.2-2.2% and <0.8% are calculated for SPROX experiments using Q-TOF and Orbitrap mass spectrometer systems, respectively. Our results indicate that the false-positive rate is largely determined by random errors associated with the mass spectral analysis of the isobaric mass tag (e.g., iTRAQ®) reporter ions used for peptide quantitation. Our results also suggest that technical replicates can be used to effectively eliminate such false positives that result from this random error, as is demonstrated in a SPROX experiment to identify yeast protein targets of the drug, manassantin A. The impact of ion purity in the tandem mass spectral analyses and of background oxidation on the false-positive rate of protein target discovery using SPROX is also discussed.
Larrabee, Glenn J
2014-01-01
Bilder, Sugar, and Hellemann (2014 this issue) contend that empirical support is lacking for use of multiple performance validity tests (PVTs) in evaluation of the individual case, differing from the conclusions of Davis and Millis (2014), and Larrabee (2014), who found no substantial increase in false positive rates using a criterion of failure of ≥ 2 PVTs and/or Symptom Validity Tests (SVTs) out of multiple tests administered. Reconsideration of data presented in Larrabee (2014) supports a criterion of ≥ 2 out of up to 7 PVTs/SVTs, as keeping false positive rates close to and in most cases below 10% in cases with bona fide neurologic, psychiatric, and developmental disorders. Strategies to minimize risk of false positive error are discussed, including (1) adjusting individual PVT cutoffs or criterion for number of PVTs failed, for examinees who have clinical histories placing them at risk for false positive identification (e.g., severe TBI, schizophrenia), (2) using the history of the individual case to rule out conditions known to result in false positive errors, (3) using normal performance in domains mimicked by PVTs to show that sufficient native ability exists for valid performance on the PVT(s) that have been failed, and (4) recognizing that as the number of PVTs/SVTs failed increases, the likelihood of valid clinical presentation decreases, with a corresponding increase in the likelihood of invalid test performance and symptom report.
ROC-ing along: Evaluation and interpretation of receiver operating characteristic curves.
Carter, Jane V; Pan, Jianmin; Rai, Shesh N; Galandiuk, Susan
2016-06-01
It is vital for clinicians to correctly understand and interpret the medical statistics used in clinical studies. In this review, we address current issues and focus on delivering a simple, yet comprehensive, explanation of common research methodology involving receiver operating characteristic (ROC) curves. ROC curves are used most commonly in medicine as a means of evaluating diagnostic tests. Sample data from a plasma test for the diagnosis of colorectal cancer were used to generate a prediction model. These are actual, unpublished data that have been used to describe the calculation of sensitivity, specificity, positive and negative predictive values, and accuracy. ROC curves were generated to determine the accuracy of this plasma test. These curves are generated by plotting the sensitivity (true-positive rate) on the y axis and 1 - specificity (false-positive rate) on the x axis. Curves that approach closest to the coordinate (x = 0, y = 1) are more highly predictive, whereas ROC curves that lie close to the line of equality indicate that the result is no better than that obtained by chance. The optimum sensitivity and specificity can be determined from the graph as the point where the minimum distance line crosses the ROC curve. This point corresponds to the Youden index (J), a function of sensitivity and specificity commonly used to rate diagnostic tests. The area under the curve is used to quantify the overall ability of a test to discriminate between two outcomes. By following these simple guidelines, interpretation of ROC curves will be less difficult, and they can be interpreted more reliably when writing, reviewing, or analyzing scientific papers. Copyright © 2016 Elsevier Inc. All rights reserved.
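The construction described above (sensitivity on the y axis, 1 - specificity on the x axis, optimum at the maximum Youden index J = sensitivity + specificity - 1) can be sketched numerically. The scores and labels below are synthetic and unrelated to the plasma test in the review:

```python
import numpy as np

def roc_points(scores, labels):
    """(FPR, TPR, threshold) triples for a 'score >= t is positive' decision rule."""
    pts = []
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        tn = np.sum(~pred & (labels == 0))
        sens = tp / (tp + fn)              # true-positive rate (y axis)
        spec = tn / (tn + fp)              # 1 - specificity is the x axis
        pts.append((1 - spec, sens, t))
    return pts

def youden_optimum(pts):
    """Operating point maximizing J = sensitivity + specificity - 1 = TPR - FPR."""
    return max(pts, key=lambda p: p[1] - p[0])

# synthetic data: higher scores indicate disease
scores = np.array([0.1, 0.2, 0.3, 0.35, 0.4, 0.5, 0.6, 0.7, 0.9, 0.95])
labels = np.array([0,   0,   0,   1,    0,   0,   1,   1,   1,   1])
fpr, tpr, t = youden_optimum(roc_points(scores, labels))
print(f"optimum: threshold={t}, sensitivity={tpr:.2f}, 1-specificity={fpr:.2f}")
```

Plotting the (FPR, TPR) pairs traces the ROC curve; the point returned by `youden_optimum` is the one the review describes as farthest above the line of equality.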
Speich, Benjamin; Ali, Said M; Ame, Shaali M; Albonico, Marco; Utzinger, Jürg; Keiser, Jennifer
2015-02-05
An accurate diagnosis of soil-transmitted helminthiasis is important for individual patient management, for drug efficacy evaluation and for monitoring control programmes. The Kato-Katz technique is the most widely used method for detecting soil-transmitted helminth eggs in faecal samples. However, detailed analyses of quality control, including false-positive and faecal egg count (FEC) estimates, have received little attention. Over a 3-year period, within the frame of a series of randomised controlled trials conducted in Pemba, United Republic of Tanzania, 10% of randomly selected Kato-Katz thick smears were re-read for Trichuris trichiura and Ascaris lumbricoides eggs. In case of a discordant result (i.e. positive versus negative), the slides were re-examined a third time. A result was deemed false-positive or false-negative if the initial reading disagreed with both the quality control reading and the third reading. We also evaluated the general agreement in FECs between the first and second readings, according to internal and World Health Organization (WHO) guidelines. From the 1,445 Kato-Katz thick smears subjected to quality control, 1,181 (81.7%) were positive for T. trichiura and 290 (20.1%) were positive for A. lumbricoides. During quality control, very low rates of false-positive results were observed: 0.35% (n = 5) for T. trichiura and 0.28% (n = 4) for A. lumbricoides. False-negative readings of Kato-Katz thick smears were obtained in 28 (1.94%) and 6 (0.42%) instances for T. trichiura and A. lumbricoides, respectively. A high frequency of discordant results in FECs was observed (i.e. 10.0-23.9% for T. trichiura, and 9.0-11.4% for A. lumbricoides). Our analyses show that the rate of false-positive diagnoses of soil-transmitted helminths is low.
As the probability of false-positive results increases after examination of multiple stool samples from a single individual, the potential influence of false-positive results on epidemiological studies and anthelminthic drug efficacy studies should be determined. Existing WHO guidelines for quality control might be overambitious and might have to be revised, specifically with regard to handling disagreements in FECs.
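The reported error rates follow directly from the counts in the abstract; a quick check:

```python
def rate_percent(discordant, total):
    """Share of slides (as a percentage) whose initial reading disagreed with quality control."""
    return round(100 * discordant / total, 2)

TOTAL = 1445  # Kato-Katz thick smears subjected to quality control

print(rate_percent(5, TOTAL))   # T. trichiura false positives  -> 0.35
print(rate_percent(4, TOTAL))   # A. lumbricoides false positives -> 0.28
print(rate_percent(28, TOTAL))  # T. trichiura false negatives  -> 1.94
print(rate_percent(6, TOTAL))   # A. lumbricoides false negatives -> 0.42
```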
Optimizing the TESS Planet Finding Pipeline
NASA Astrophysics Data System (ADS)
Chitamitara, Aerbwong; Smith, Jeffrey C.; Tenenbaum, Peter; TESS Science Processing Operations Center
2017-10-01
The Transiting Exoplanet Survey Satellite (TESS) is a new NASA all-sky planet-finding survey that will observe stars within 200 light years that are 10-100 times brighter than those observed by the highly successful Kepler mission. TESS is expected to detect ~1000 planets smaller than Neptune and dozens of Earth-size planets. As in the Kepler mission, the Science Processing Operations Center (SPOC) pipeline at NASA Ames Research Center is tasked with calibrating the raw pixel data, generating systematic-error-corrected light curves, and then detecting and validating transit signals. The Transiting Planet Search (TPS) component of the pipeline must be modified and tuned for the new data characteristics of TESS. For example, because each sector is viewed for as little as 28 days, the pipeline will identify transiting planets based on a minimum of two transit signals rather than three, as in the Kepler mission. This may result in a significantly higher false positive rate. The study presented here measures the detection efficiency of the TESS pipeline using simulated data. Transiting planets identified by TPS are compared to transiting planets from the simulated transit model using the measured epochs, periods, transit durations, and the expected detection statistic of injected transit signals (expected MES). From these comparisons, the recovery and false positive rates of TPS are measured. Measurements of recovery are then used to adjust TPS configuration parameters to maximize the planet recovery rate and minimize false detections. The improvements in recovery rate between initial TPS conditions and after various adjustments will be presented and discussed.
Hwang, Sang Mee; Lee, Ki Chan; Lee, Min Seob; Park, Kyoung Un
2018-01-01
Transition to next-generation sequencing (NGS) for BRCA1/BRCA2 analysis in clinical laboratories is ongoing, but different platforms and/or data analysis pipelines give different results, causing difficulties in implementation. We evaluated the Ion Personal Genome Machine (PGM) platforms (Ion PGM, Ion PGM Dx; Thermo Fisher Scientific) for the analysis of BRCA1/2. The results of Ion PGM with OTG-snpcaller, a pipeline based on the Torrent mapping alignment program and the Genome Analysis Toolkit, from 75 clinical samples and 14 reference DNA samples were compared with Sanger sequencing for BRCA1/BRCA2. Ten clinical samples and 14 reference DNA samples were additionally sequenced by Ion PGM Dx with Torrent Suite. Fifty types of variants, including 18 pathogenic variants or variants of unknown significance, were identified from the 75 clinical samples, and the known variants of the reference samples were confirmed by Sanger sequencing and/or NGS. One false-negative result occurred with Ion PGM/OTG-snpcaller: an indel variant misidentified as a single-nucleotide variant. However, eight discordant results were present for Ion PGM Dx/Torrent Suite, with both false-positive and false-negative results. A 40-bp deletion, a 4-bp deletion and a 1-bp deletion variant were not called, and a false-positive deletion was identified. Four other variants were misidentified as another variant. Ion PGM/OTG-snpcaller showed acceptable performance, with good concordance with Sanger sequencing. However, Ion PGM Dx/Torrent Suite showed many discrepant results, making it unsuitable for use in a clinical laboratory without further optimization of the variant-calling data analysis.
Nandipati, Kalyana C; Allamaneni, Shyam; Kakarla, Ravindra; Wong, Alfredo; Richards, Neil; Satterfield, James; Turner, James W; Sung, Kae-Jae
2011-05-01
Early identification of pneumothorax is crucial to reduce mortality in critically injured patients. The objective of our study was to investigate the utility of surgeon-performed extended focused assessment with sonography for trauma (EFAST) in the diagnosis of pneumothorax. We prospectively analysed 204 trauma patients in our level I trauma center over a 12-month period (06/2007-05/2008) in whom EFAST was performed. The patients' demographics, type of injury, clinical examination findings (decreased air entry), CXR, EFAST and CT scan findings were entered into the database. Sensitivity, specificity, and positive (PPV) and negative predictive values (NPV) were calculated. Of 204 patients (mean age 43.0 +/- 19.5 years; 152 male, 52 female), 21 (10.3%) had pneumothorax, 12 from blunt trauma and 9 from penetrating trauma. Clinical examination was positive in 17 patients (13/21 true positives, 62%; 4 false positives and 8 false negatives), CXR was positive in 16 patients (15/19 true positives, 79%; 1 false positive, 4 missed, and 2 chest tubes placed before CXR) and EFAST was positive in 21 patients (20 true positives [95.2%], 1 false positive and 1 false negative). In diagnosing pneumothorax, EFAST had significantly higher sensitivity than CXR (P=0.02). Surgeon-performed trauma room extended FAST is simple and has higher sensitivity than chest X-ray and clinical examination in detecting pneumothorax. Published by Elsevier Ltd.
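The EFAST figures can be reproduced from the reported 2x2 counts. The true-negative count below is our inference from the totals (204 patients, 21 with pneumothorax), not a number stated in the abstract:

```python
def diagnostics(tp, fp, fn, tn):
    """Standard screening metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# EFAST: 20 true positives, 1 false positive, 1 false negative;
# true negatives inferred as 204 - 20 - 1 - 1 = 182
efast = diagnostics(tp=20, fp=1, fn=1, tn=182)
print(f"sensitivity = {efast['sensitivity']:.1%}")  # 95.2%, matching the abstract
```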
Consensus model for identification of novel PI3K inhibitors in large chemical library.
Liew, Chin Yee; Ma, Xiao Hua; Yap, Chun Wei
2010-02-01
Phosphoinositide 3-kinase (PI3K) inhibitors have treatment potential for cancer, diabetes, cardiovascular disease, chronic inflammation and asthma. A consensus model consisting of three base classifiers (AODE, kNN, and SVM) trained with 1,283 positive compounds (PI3K inhibitors), 16 negative compounds (PI3K non-inhibitors) and 64,078 generated putative negatives was developed for predicting compounds with PI3K inhibitory activity of IC50 ≤ 10 μM. The consensus model has an estimated false positive rate of 0.75%. Nine novel potential inhibitors were identified using the consensus model, and several of these contain structural features that are consistent with those found to be important for PI3K inhibitory activities. An advantage of the current model is that it does not require knowledge of 3D structural information of the various PI3K isoforms, which is not readily available for all isoforms.
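In its simplest form, a consensus of base classifiers reduces to a majority vote over their individual predictions. A minimal sketch follows; the three prediction vectors are invented, and this ignores the probability-averaging or weighting a real consensus model might use:

```python
def consensus_predict(predictions):
    """Majority vote across base classifiers; `predictions` holds one 0/1
    prediction list per classifier, all over the same set of compounds."""
    n_clf = len(predictions)
    n_items = len(predictions[0])
    votes = [sum(p[i] for p in predictions) for i in range(n_items)]
    return [1 if v > n_clf / 2 else 0 for v in votes]

# invented predictions from three hypothetical base classifiers on five compounds
aode = [1, 0, 1, 1, 0]
knn  = [1, 1, 1, 0, 0]
svm  = [0, 0, 1, 1, 1]
print(consensus_predict([aode, knn, svm]))  # [1, 0, 1, 1, 0]
```

Requiring agreement of at least two of the three base classifiers is one way a consensus can trade a little sensitivity for a much lower false positive rate.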
Finding Direction in the Search for Selection.
Thiltgen, Grant; Dos Reis, Mario; Goldstein, Richard A
2017-01-01
Tests for positive selection have mostly been developed to look for diversifying selection where change away from the current amino acid is often favorable. However, in many cases we are interested in directional selection where there is a shift toward specific amino acids, resulting in increased fitness in the species. Recently, a few methods have been developed to detect and characterize directional selection on a molecular level. Using the results of evolutionary simulations as well as HIV drug resistance data as models of directional selection, we compare two such methods with each other, as well as against a standard method for detecting diversifying selection. We find that the method to detect diversifying selection also detects directional selection under certain conditions. One method developed for detecting directional selection is powerful and accurate for a wide range of conditions, while the other can generate an excessive number of false positives.
Detection of Sickle Cell Hemoglobin in Haiti by Genotyping and Hemoglobin Solubility Tests
Carter, Tamar E.; von Fricken, Michael; Romain, Jean R.; Memnon, Gladys; St. Victor, Yves; Schick, Laura; Okech, Bernard A.; Mulligan, Connie J.
2014-01-01
Sickle cell disease is a growing global health concern because infants born with the disorder in developing countries are now surviving longer with little access to diagnostic and management options. In Haiti, the current state of sickle cell disease/trait in the population is unclear. To inform future screening efforts in Haiti, we assayed sickle hemoglobin mutations using traditional hemoglobin solubility tests (HST) and add-on techniques, which incorporated spectrophotometry and insoluble hemoglobin separation. We also generated genotype data as a metric for HST performance. We found 19 of 202 individuals screened with HST were positive for sickle hemoglobin, five of whom did not carry the HbS allele. We show that spectrophotometry and insoluble hemoglobin separation add-on techniques could resolve false positives associated with the traditional HST approach, with some limitations. We also discuss the incorporation of insoluble hemoglobin separation observation with HST in suboptimal screening settings like Haiti. PMID:24957539
When good news is bad news: psychological impact of false positive diagnosis of HIV.
Bhattacharya, Rahul; Barton, Simon; Catalan, Jose
2008-05-01
HIV testing is known to be stressful; however, the impact of false positive HIV results on individuals is not well documented. We present a series of four cases in which individuals developed psychological difficulties and psychiatric morbidity after being informed that they had been misdiagnosed as HIV-positive. We review documented cases of misdiagnosis and the potential risks of misdiagnosis. The case series highlights the implications that a false diagnosis of HIV-positive status can have, even when the diagnosis is rectified. Misdiagnosis of HIV can lead to psychosocial difficulties and psychiatric morbidity, has public health and epidemiological implications, and can lead to medico-legal conflict. This further reiterates the importance of HIV testing being carried out ethically and sensitively, in line with guidelines, respecting confidentiality and consent, offering pre-test and post-test counselling, and being mindful of the reality of erroneous and false positive HIV test results. The implications of misdiagnosis extend to the individual, their partners and social contacts, and the wider community.
Zhu, Xiaolei; Mitchell, Julie C
2011-09-01
Hot spots constitute a small fraction of protein-protein interface residues, yet they account for a large fraction of the binding affinity. Based on our previous method (KFC), we present two new methods (KFC2a and KFC2b) that outperform other methods at hot spot prediction. A number of improvements were made in developing these new methods. First, we created a training data set that contained a similar number of hot spot and non-hot spot residues. In addition, we generated 47 different features, and different numbers of features were used to train the models to avoid over-fitting. Finally, two feature combinations were selected: One (used in KFC2a) is composed of eight features that are mainly related to solvent accessible surface area and local plasticity; the other (KFC2b) is composed of seven features, only two of which are identical to those used in KFC2a. The two models were built using support vector machines (SVM). The two KFC2 models were then tested on a mixed independent test set, and compared with other methods such as Robetta, FOLDEF, HotPoint, MINERVA, and KFC. KFC2a showed the highest predictive accuracy for hot spot residues (True Positive Rate: TPR = 0.85); however, the false positive rate was somewhat higher than for other models. KFC2b showed the best predictive accuracy for hot spot residues (True Positive Rate: TPR = 0.62) among all methods other than KFC2a, and the False Positive Rate (FPR = 0.15) was comparable with other highly predictive methods. Copyright © 2011 Wiley-Liss, Inc.
Buton, Leckzinscka; Morel, Olivier; Gault, Patricia; Illouz, Frédéric; Rodien, Patrice; Rohmer, Vincent
2013-07-01
Iodine-131 (I-131) whole-body scan (WBS) plays an important role in the management of patients with differentiated thyroid carcinoma (DTC), to detect normal thyroid remnants and recurrent or metastatic disease. A focus of I-131 accumulation outside the thyroid bed and the areas of physiological uptake is strongly suggestive of a distant functioning metastasis. However, many false-positive I-131 WBS findings have been reported in the literature. We describe a series of 11 personal cases of patients with DTC, collected from 1992 to 2011, in whom diagnostic or post-treatment WBS showed false-positive retention of I-131 in various locations. False-positive accumulations of I-131 on WBS may be classified according to the underlying pathophysiological mechanisms: external and internal contaminations by body secretions, ectopic normal thyroid and gastric tissues, inflammatory and infectious diseases, benign and malignant tumors, cysts and effusions of serous cavities, thymic uptake, and other non classified causes. Clinicians must be aware of possible false-positive findings to avoid misinterpretations of the I-131 WBS, which could lead to inappropriate treatments. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Comparing diagnostic tests on benefit-risk.
Pennello, Gene; Pantoja-Galicia, Norberto; Evans, Scott
2016-01-01
Comparing diagnostic tests on accuracy alone can be inconclusive. For example, a test may have better sensitivity than another test yet worse specificity. Comparing tests on benefit risk may be more conclusive because clinical consequences of diagnostic error are considered. For benefit-risk evaluation, we propose diagnostic yield, the expected distribution of subjects with true positive, false positive, true negative, and false negative test results in a hypothetical population. We construct a table of diagnostic yield that includes the number of false positive subjects experiencing adverse consequences from unnecessary work-up. We then develop a decision theory for evaluating tests. The theory provides additional interpretation to quantities in the diagnostic yield table. It also indicates that the expected utility of a test relative to a perfect test is a weighted accuracy measure, the average of sensitivity and specificity weighted for prevalence and relative importance of false positive and false negative testing errors, also interpretable as the cost-benefit ratio of treating non-diseased and diseased subjects. We propose plots of diagnostic yield, weighted accuracy, and relative net benefit of tests as functions of prevalence or cost-benefit ratio. Concepts are illustrated with hypothetical screening tests for colorectal cancer with test positive subjects being referred to colonoscopy.
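The diagnostic yield table described above is the expected TP/FN/TN/FP split of a hypothetical population given prevalence, sensitivity, and specificity. A minimal sketch; the population size and operating characteristics below are made-up illustration values, not from the paper:

```python
def diagnostic_yield(n, prevalence, sensitivity, specificity):
    """Expected TP/FN/TN/FP counts among n hypothetical screened subjects."""
    diseased = n * prevalence
    healthy = n - diseased
    return {
        "TP": diseased * sensitivity,
        "FN": diseased * (1 - sensitivity),
        "TN": healthy * specificity,
        "FP": healthy * (1 - specificity),  # subjects facing unnecessary work-up
    }

# illustrative numbers only: 10,000 screened, 0.5% prevalence
table = diagnostic_yield(n=10_000, prevalence=0.005, sensitivity=0.90, specificity=0.95)
print(table)
```

Tabulating the FP row against the benefit of the TP row is what lets two tests with different sensitivity/specificity trade-offs be compared on benefit-risk rather than accuracy alone.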
Gas insufflation of minimal preparation CT of the colon reduces false-positives
Slater, A; North, M; Hart, M; Ferrett, C
2012-01-01
Objectives Minimal preparation CT of the colon (MPCT colon) is used for investigation of suspected colorectal cancer in frail and/or elderly patients who would be expected to tolerate laxative bowel preparation poorly. Although it has good sensitivity for colorectal cancer, it has poor specificity. We wished to investigate whether distension of the colon with carbon dioxide alone would reduce the number of false-positives, but without making the test arduous or excessively uncomfortable. Methods 134 patients were recruited and underwent MPCT colon with gas insufflation and antispasmodics. Results were compared with a cohort of 134 patients undergoing standard protocol MPCT colon. The numbers of false-positives were compared, as was reader confidence. All trial patients were given a questionnaire documenting their experience. Results The false-positive rate was 15% in the control group and 5% in the trial group; this difference was statistically significant (p=0.01). Reader confidence was increased in the trial group. Patient tolerance was good, with 95% saying they would have the test again. Conclusion Use of gas insufflation and antispasmodics reduces the false-positive rate from 15% to 5% without adversely affecting patient tolerance. PMID:21224295
False memories, but not false beliefs, affect implicit attitudes for food preferences.
Howe, David; Anderson, Rachel J; Dewhurst, Stephen A
2017-09-01
Previous studies have found that false memories and false beliefs of childhood experiences can have attitudinal consequences. Previous studies have, however, focused exclusively on explicit attitude measures without exploring whether implicit attitudes are similarly affected. Using a false feedback/imagination inflation paradigm, false memories and beliefs of enjoying a certain food as a child were elicited in participants, and their effects were assessed using both explicit attitude measures (self-report questionnaires) and implicit measures (a Single-Target Implicit Association Test). Positive changes in explicit attitudes were observed both in participants with false memories and participants with false beliefs. In contrast, only participants with false memories exhibited more positive implicit attitudes. The findings are discussed in terms of theories of explicit and implicit attitudes. Copyright © 2017 Elsevier B.V. All rights reserved.
Positional bias in variant calls against draft reference assemblies.
Briskine, Roman V; Shimizu, Kentaro K
2017-03-28
Whole genome resequencing projects may implement variant calling using draft reference genomes assembled de novo from short-read libraries. Despite the lower quality of such assemblies, they have allowed researchers to extend a wide range of population genetic and genome-wide association analyses to non-model species. As variant calling pipelines are complex and involve many software packages, it is important to understand inherent biases and limitations at each step of the analysis. In this article, we report a positional bias present in variant calling performed against draft reference assemblies constructed from de Bruijn or string overlap graphs. We assessed how frequently variants appeared at each position counted from the ends of a contig or scaffold sequence, and discovered an unexpectedly high number of variants at positions related to the length of either the k-mers or the reads used for the assembly. We detected the bias both in publicly available draft assemblies from the Assemblathon 2 competition and in assemblies we generated from our simulated short-read data. Simulations confirmed that the bias-causing variants are predominantly false positives induced by reads from spatially distant repeated sequences. The bias is particularly strong in contig assemblies. Scaffolding does not eliminate the bias but tends to mitigate it because of changes in the variants' relative positions and alterations in read alignments. The bias can be effectively reduced by filtering out variants that reside in repetitive elements. Draft genome sequences generated by several popular assemblers appear to be susceptible to the positional bias, potentially affecting many resequencing projects in non-model species. The bias is inherent to the assembly algorithms and arises from their particular handling of repeated sequences. It is recommended to reduce the bias by filtering, especially if a higher-quality genome assembly cannot be achieved. Our findings can help other researchers improve the quality of their variant data sets and reduce artefactual findings in downstream analyses.
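The positional-bias check described above (counting variants by distance from the nearest contig end and looking for a spike at the k-mer length) can be illustrated with synthetic data. Everything here is a toy assumption for the sketch -- the k-mer length, contig sizes, and counts are invented, not taken from the study.

```python
# Illustrative sketch (not the authors' pipeline): detect a positional bias by
# counting variants at each distance from the nearest contig end. K, the
# contig length, and the variant counts are synthetic assumptions.
import random
from collections import Counter

random.seed(0)
K = 31                 # assembler k-mer length (assumed)
contig_len = 10_000

distances = Counter()
for _ in range(200):                       # 200 simulated contigs
    # Background: ~50 true variants per contig, uniformly placed.
    for _ in range(50):
        pos = random.randrange(contig_len)
        distances[min(pos, contig_len - 1 - pos)] += 1
    # Bias: a few false positives induced at k-mer distance from the ends.
    distances[K] += 4

# Flag distances whose counts far exceed the average background level.
background = sum(distances.values()) / len(distances)
spikes = [d for d, c in distances.items() if c > 10 * background]
print(spikes)   # -> [31]: the k-mer-length position stands out
```

Filtering variants at such flagged distances (or inside repeats, as the authors recommend) would then remove the artefactual calls while leaving the uniform background untouched.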
Simmons, Joseph P; Nelson, Leif D; Simonsohn, Uri
2011-11-01
In this article, we accomplish two things. First, we show that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
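The core claim above -- that analytic flexibility inflates the false-positive rate under the null -- is easy to reproduce in a small simulation. This sketch is mine, not the authors' code; it models just one of their researcher degrees of freedom (choosing among two correlated dependent variables or their average) and uses an approximate critical t value.

```python
# Simulate null experiments where the researcher may report whichever of
# three t tests (DV1, DV2, or their average) comes out significant.
# All parameters here are illustrative assumptions.
import math, random, statistics

random.seed(1)
T_CRIT = 2.024   # approx. two-sided 5% critical value for df = 38

def t_stat(x, y):
    """Two-sample pooled-variance t statistic."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * statistics.variance(x) +
           (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(sp2 * (1/nx + 1/ny))

n_sims, n = 2000, 20
hits = 0
for _ in range(n_sims):
    # Two correlated dependent variables per subject, no true group effect.
    base_a = [random.gauss(0, 1) for _ in range(n)]
    base_b = [random.gauss(0, 1) for _ in range(n)]
    a1 = [v + random.gauss(0, 1) for v in base_a]
    a2 = [v + random.gauss(0, 1) for v in base_a]
    b1 = [v + random.gauss(0, 1) for v in base_b]
    b2 = [v + random.gauss(0, 1) for v in base_b]
    amean = [(x + y) / 2 for x, y in zip(a1, a2)]
    bmean = [(x + y) / 2 for x, y in zip(b1, b2)]
    if any(abs(t_stat(x, y)) > T_CRIT
           for x, y in ((a1, b1), (a2, b2), (amean, bmean))):
        hits += 1           # a "significant" finding under the null

print(hits / n_sims)   # well above the nominal 0.05
```

Even this single degree of freedom roughly doubles the false-positive rate; combining several, as the article shows, inflates it much further.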
Diagnosing periprosthetic infection: false-positive intraoperative Gram stains.
Oethinger, Margret; Warner, Debra K; Schindler, Susan A; Kobayashi, Hideo; Bauer, Thomas W
2011-04-01
Intraoperative Gram stains have a reported low sensitivity but high specificity when used to help diagnose periprosthetic infections. In early 2008, we recognized an unexpectedly high frequency of apparent false-positive Gram stains from revision arthroplasties. The purpose of this report is to describe the cause of these false-positive test results. We calculated the sensitivity and specificity of all intraoperative Gram stains submitted from revision arthroplasty cases during a 3-month interval, using microbiologic cultures of the same samples as the gold standard. Methods of specimen harvesting, handling, transport, distribution, and specimen processing, including tissue grinding/macerating, Gram staining, and interpretation, were studied. After a test modification, results of specimens were prospectively collected for a second 3-month interval, and the sensitivity and specificity of intraoperative Gram stains were calculated. The retrospective review of 269 Gram stains submitted from revision arthroplasties indicated historic sensitivity and specificity values of 23% and 92%, respectively. Systematic analysis of all steps of the procedure identified Gram-stained but nonviable bacteria in commercial broth reagents used as diluents for maceration of periprosthetic membranes before Gram staining and culture. Polymerase chain reaction and sequencing showed mixed bacterial DNA. Evaluation of 390 specimens after initiating standardized Millipore filtering of the diluent fluid revealed a reduced number of positive Gram stains, yielding 9% sensitivity and 99% specificity. Clusters of false-positive Gram stains have been reported in other clinical conditions. They are apparently rare in relation to diagnosing periprosthetic infections but have severe consequences if used to guide treatment. Even occasional false-positive Gram stains should prompt review of laboratory methods. Our observations implicate dead bacteria in microbiologic reagents as potential sources of false-positive Gram stains.
Risk of Breast Cancer in Women with False-Positive Results according to Mammographic Features.
Castells, Xavier; Torá-Rocamora, Isabel; Posso, Margarita; Román, Marta; Vernet-Tomas, Maria; Rodríguez-Arana, Ana; Domingo, Laia; Vidal, Carmen; Baré, Marisa; Ferrer, Joana; Quintana, María Jesús; Sánchez, Mar; Natal, Carmen; Espinàs, Josep A; Saladié, Francina; Sala, María
2016-08-01
Purpose To assess the risk of breast cancer in women with false-positive screening results according to radiologic classification of mammographic features. Materials and Methods Review board approval was obtained, with waiver of informed consent. This retrospective cohort study included 521,200 women aged 50-69 years who underwent screening as part of the Spanish Breast Cancer Screening Program between 1994 and 2010 and who were observed until December 2012. Cox proportional hazards regression analysis was used to estimate the age-adjusted hazard ratio (HR) of breast cancer and the 95% confidence interval (CI) in women with false-positive mammograms as compared with women with negative mammograms. Separate models were adjusted for screen-detected and interval cancers and for screen-film and digital mammography. Time without a breast cancer diagnosis was plotted by using Kaplan-Meier curves. Results When compared with women with negative mammograms, the age-adjusted HR of cancer in women with false-positive results was 1.84 (95% CI: 1.73, 1.95; P < .001). The risk was higher in women who had calcifications, whether they were (HR, 2.73; 95% CI: 2.28, 3.28; P < .001) or were not (HR, 2.24; 95% CI: 2.02, 2.48; P < .001) associated with masses. Women in whom mammographic features showed changes in subsequent false-positive results were those who had the highest risk (HR, 9.13; 95% CI: 8.28, 10.07; P < .001). Conclusion Women with false-positive results had an increased risk of breast cancer, particularly women who had calcifications at mammography. Women who had more than one examination with false-positive findings and in whom the mammographic features changed over time had a highly increased risk of breast cancer. Previous mammographic features might yield useful information for further risk-prediction models and personalized follow-up screening protocols. (©) RSNA, 2016. Online supplemental material is available for this article.
Idelevich, Evgeny A.; Grunewald, Camilla M.; Wüllenweber, Jörg; Becker, Karsten
2014-01-01
Fungaemia is associated with high mortality rates, and early appropriate antifungal therapy is essential for patient management. However, the classical diagnostic workflow takes up to several days due to the slow growth of yeasts. Therefore, an approach for direct species identification and direct antifungal susceptibility testing (AFST) without prior time-consuming sub-culturing of yeasts from positive blood cultures (BCs) is urgently needed. Yeast cell pellets prepared using the Sepsityper kit were used for direct identification by MALDI-TOF mass spectrometry (MS) and for direct inoculation of the Vitek 2 AST-YS07 card for AFST. For comparison, MALDI-TOF MS and Vitek 2 testing were performed from yeast subculture. A total of twenty-four positive BCs, including twelve C. glabrata, nine C. albicans, two C. dubliniensis and one C. krusei isolate, were processed. Applying modified thresholds for species identification (score ≥1.5 with two identical consecutive propositions), 62.5% of BCs were identified by direct MALDI-TOF MS. AFST results were generated for 72.7% of BCs directly tested by Vitek 2 and for 100% of standardized suspensions from 24 h cultures. Thus, AFST comparison was possible for 70 isolate-antifungal combinations. Essential agreement (minimum inhibitory concentration difference ≤1 double dilution step) was 88.6%. Very major errors (VMEs) (false-susceptibility), major errors (false-resistance) and minor errors (false categorization involving intermediate result) amounted to 33.3% (of resistant isolates), 1.9% (of susceptible isolates) and 1.4%, providing 90.0% categorical agreement. All VMEs were due to fluconazole or voriconazole. This direct method saved on average 23.5 h for identification and 15.1 h for AFST, compared to routine procedures. However, performance for azole susceptibility testing was suboptimal and testing from subculture remains indispensable to validate the direct finding. PMID:25489741
External Quality Assessment for Avian Influenza A (H7N9) Virus Detection Using Armored RNA
Sun, Yu; Jia, Tingting; Sun, Yanli; Han, Yanxi; Wang, Lunan; Zhang, Rui; Zhang, Kuo; Lin, Guigao; Xie, Jiehong; Li, Jinming
2013-01-01
An external quality assessment (EQA) program for the molecular detection of avian influenza A (H7N9) virus was implemented by the National Center for Clinical Laboratories (NCCL) of China in June 2013. Virus-like particles (VLPs) that contained full-length RNA sequences of the hemagglutinin (HA), neuraminidase (NA), matrix protein (MP), and nucleoprotein (NP) genes from the H7N9 virus (armored RNAs) were constructed. The EQA panel, comprising 6 samples with different concentrations of armored RNAs positive for H7N9 viruses and four H7N9-negative samples (including one sample positive for only the MP gene of the H7N9 virus), was distributed to 79 laboratories in China that carry out the molecular detection of H7N9 viruses. The overall performances of the data sets were classified according to the results for the H7 and N9 genes. Consequently, we received 80 data sets (one participating group provided two sets of results) which were generated using commercial (n = 60) or in-house (n = 17) reverse transcription-quantitative PCR (qRT-PCR) kits and a commercial assay that employed isothermal amplification method (n = 3). The results revealed that the majority (82.5%) of the data sets correctly identified the H7N9 virus, while 17.5% of the data sets needed improvements in their diagnostic capabilities. These “improvable” data sets were derived mostly from false-negative results for the N9 gene at relatively low concentrations. The false-negative rate was 5.6%, and the false-positive rate was 0.6%. In addition, we observed varied diagnostic capabilities between the different commercially available kits and the in-house-developed assays, with the assay manufactured by BioPerfectus Technologies (Jiangsu, China) performing better than the others. Overall, the majority of laboratories have reliable diagnostic capacities for the detection of H7N9 virus. PMID:24088846
NASA Astrophysics Data System (ADS)
Liu, Xiyao; Lou, Jieting; Wang, Yifan; Du, Jingyu; Zou, Beiji; Chen, Yan
2018-03-01
Authentication and copyright identification are two critical security issues for medical images. Although zero-watermarking schemes can provide durable, reliable and distortion-free protection for medical images, the existing zero-watermarking schemes for medical images still face two problems. On the one hand, they rarely considered the distinguishability for medical images, which is critical because different medical images are sometimes similar to each other. On the other hand, their robustness against geometric attacks, such as cropping, rotation and flipping, is insufficient. In this study, a novel discriminative and robust zero-watermarking (DRZW) is proposed to address these two problems. In DRZW, content-based features of medical images are first extracted based on the completed local binary pattern (CLBP) operator to ensure the distinguishability and robustness, especially against geometric attacks. Then, master shares and ownership shares are generated from the content-based features and watermark according to (2,2) visual cryptography. Finally, the ownership shares are stored for authentication and copyright identification. For queried medical images, their content-based features are extracted and master shares are generated. Their watermarks for authentication and copyright identification are recovered by stacking the generated master shares and stored ownership shares. 200 different medical images of 5 types were collected as the testing data, and our experimental results demonstrate that DRZW ensures both the accuracy and reliability of authentication and copyright identification. When fixing the false positive rate to 1.00%, the average value of false negative rates by using DRZW is only 1.75% under 20 common attacks with different parameters.
Lopez-Doriga, Adriana; Feliubadaló, Lídia; Menéndez, Mireia; Lopez-Doriga, Sergio; Morón-Duran, Francisco D; del Valle, Jesús; Tornero, Eva; Montes, Eva; Cuesta, Raquel; Campos, Olga; Gómez, Carolina; Pineda, Marta; González, Sara; Moreno, Victor; Capellá, Gabriel; Lázaro, Conxi
2014-03-01
Next-generation sequencing (NGS) has revolutionized genomic research and is set to have a major impact on genetic diagnostics thanks to the advent of benchtop sequencers and flexible kits for targeted libraries. Among the main hurdles in NGS are the difficulty of performing bioinformatic analysis of the huge volume of data generated and the high number of false positive calls that could be obtained, depending on the NGS technology and the analysis pipeline. Here, we present the development of a free and user-friendly Web data analysis tool that detects and filters sequence variants, provides coverage information, and allows the user to customize some basic parameters. The tool has been developed to provide accurate genetic analysis of targeted sequencing of common high-risk hereditary cancer genes using amplicon libraries run in a GS Junior System. The Web resource is linked to our own mutation database, to assist in the clinical classification of identified variants. We believe that this tool will greatly facilitate the use of the NGS approach in routine laboratories.
Automatic lung nodule graph cuts segmentation with deep learning false positive reduction
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Huang, Xia; Tseng, Tzu-Liang Bill; Qian, Wei
2017-03-01
To automatically detect lung nodules from CT images, we designed a two-stage computer-aided detection (CAD) system. The first stage is graph cuts segmentation to identify and segment the nodule candidates, and the second stage is a convolutional neural network for false positive reduction. The dataset contains 595 CT cases randomly selected from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI), and the 305 pulmonary nodules that achieved diagnostic consensus among all four experienced radiologists were our detection targets. Considering each slice as an individual sample, 2844 nodules were included in our database. The graph cuts segmentation was conducted in a two-dimensional manner, and 2733 lung nodule ROIs were successfully identified and segmented. After false positive reduction by a seven-layer convolutional neural network, 2535 nodules remained detected while the false positive rate dropped to 31.6%. The average F-measure of segmented lung nodule tissue is 0.8501.
False positives in psychiatric diagnosis: implications for human freedom.
Wakefield, Jerome C
2010-02-01
Current symptom-based DSM and ICD diagnostic criteria for mental disorders are prone to yielding false positives because they ignore the context of symptoms. This is often seen as a benign flaw because problems of living and emotional suffering, even if not true disorders, may benefit from support and treatment. However, diagnosis of a disorder in our society has many ramifications not only for treatment choice but for broader social reactions to the diagnosed individual. In particular, mental disorders impose a sick role on individuals and place a burden upon them to change; thus, disorders decrease the level of respect and acceptance generally accorded to those with even annoying normal variations in traits and features. Thus, minimizing false positives is important to a pluralistic society. The harmful dysfunction analysis of disorder is used to diagnose the sources of likely false positives, and propose potential remedies to the current weaknesses in the validity of diagnostic criteria.
False positive malaria rapid diagnostic test in returning traveler with typhoid fever.
Meatherall, Bonnie; Preston, Keith; Pillai, Dylan R
2014-07-09
Rapid diagnostic tests play a pivotal role in the early diagnosis of malaria where microscopy or polymerase chain reaction are not immediately available. We report the case of a 39-year-old traveler to Canada who presented with fever, headache, and abdominal pain after visiting friends and relatives in India. While in India, the individual was not ill and had no signs or symptoms of malaria. Laboratory testing upon his return to Canada identified a false positive malaria rapid diagnostic test (BinaxNOW® malaria) result for P. falciparum with coincident Salmonella Typhi bacteraemia without rheumatoid or autoimmune factors. Rapid diagnostic test false positivity for malaria coincided with the presence or absence of Salmonella Typhi in the blood. Clinicians should be aware that Salmonella Typhi infection may result in a false positive malaria rapid diagnostic test. The mechanism of this cross-reactivity is not clear.
Raghuram, Jayaram; Miller, David J; Kesidis, George
2014-07-01
We propose a method for detecting anomalous domain names, with a focus on algorithmically generated domain names, which are frequently associated with malicious activities such as fast flux service networks, particularly for bot networks (botnets), malware, and phishing. Our method learns a (null hypothesis) probability model from a large set of domain names that have been whitelisted by some reliable authority. Since these names are mostly assigned by humans, they are pronounceable and tend to have a distribution of characters, words, word lengths, and number of words that is typical of some language (mostly English), and they often consist of words drawn from a known lexicon. Algorithmically generated domain names, on the other hand, typically have distributions that are quite different from those of human-created domain names. We propose a fully generative model for the probability distribution of benign (whitelisted) domain names which can be used in an anomaly detection setting for identifying putative algorithmically generated domain names. Unlike other methods, our approach can make detections without considering any additional (latency-producing) information sources often used to detect fast flux activity. Experiments on a publicly available, large data set of domain names associated with fast flux service networks show encouraging results relative to several baseline methods, with higher detection rates and low false positive rates.
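A toy version of such a generative model can be built from character-bigram statistics. The whitelist, names, and scoring function below are my own illustrative assumptions, not the paper's model, which is considerably richer (words, word lengths, lexicon membership).

```python
# Score domain names by average character-bigram log-likelihood learned from
# a (tiny, assumed) whitelist; algorithmically generated names score lower.
import math
from collections import Counter

whitelist = [
    "google", "network", "service", "online", "store", "market", "travel",
    "health", "music", "video", "search", "weather", "banking", "research",
    "news", "cloud", "secure", "mail", "photo", "data",
]

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
bigrams = Counter()
unigrams = Counter()
for name in whitelist:
    for a, b in zip(name, name[1:]):
        bigrams[a + b] += 1
        unigrams[a] += 1

def avg_log_likelihood(name):
    """Average bigram log-probability with Laplace (add-one) smoothing."""
    pairs = list(zip(name, name[1:]))
    total = 0.0
    for a, b in pairs:
        p = (bigrams[a + b] + 1) / (unigrams[a] + len(ALPHABET))
        total += math.log(p)
    return total / len(pairs)

# A human-readable name scores higher than a random-looking string.
print(avg_log_likelihood("newsonline"))
print(avg_log_likelihood("xkqzvwpj"))
```

In an anomaly-detection setting, names whose score falls below a cut point calibrated on the whitelist would be flagged as putative algorithmically generated domains.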
Schlain, Brian; Amaravadi, Lakshmi; Donley, Jean; Wickramasekera, Ananda; Bennett, Donald; Subramanyam, Meena
2010-01-31
In recent years there has been growing recognition of the impact of anti-drug or anti-therapeutic antibodies (ADAs, ATAs) on the pharmacokinetic and pharmacodynamic behavior of the drug, which ultimately affects drug exposure and activity. These anti-drug antibodies can also impact the safety of the therapeutic by inducing a range of reactions from hypersensitivity to neutralization of the activity of an endogenous protein. Assessments of immunogenicity, therefore, are critically dependent on the bioanalytical method used to test samples, in which positive versus negative reactivity is determined by a statistically derived cut point based on the distribution of drug-naïve samples. For non-normally distributed data, a novel gamma-fitting method for obtaining assay cut points is presented. Non-normal immunogenicity data distributions, which tend to be unimodal and positively skewed, can often be modeled by 3-parameter gamma fits. Under a gamma regime, gamma-based cut points were found to be more accurate (closer to their targeted false positive rates) than normal or log-normal methods and more precise (smaller standard errors of cut point estimators) than the nonparametric percentile method. Under a gamma regime, normal theory based methods for estimating cut points targeting a 5% false positive rate were found in computer simulation experiments to have, on average, false positive rates ranging from 6.2 to 8.3% (or positive biases between +1.2 and +3.3%), with bias decreasing with the magnitude of the gamma shape parameter. The log-normal fits tended, on average, to underestimate false positive rates, with negative biases as large as -2.3%, with absolute bias decreasing with the shape parameter. These results were consistent with the well-known fact that gamma distributions become less skewed and closer to a normal distribution as their shape parameters increase. Inflated false positive rates, especially in a screening assay, shift the emphasis to confirming test results in a subsequent test (confirmatory assay). On the other hand, deflated false positive rates in screening immunogenicity assays will not meet the minimum 5% false positive target proposed in the immunogenicity assay guidance white papers. Copyright 2009 Elsevier B.V. All rights reserved.
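The gamma-versus-normal cut-point comparison can be sketched with simulated drug-naïve data. This is an illustrative reconstruction under assumed parameters (gamma shape 2, 5,000 samples), not the authors' simulation; it uses SciPy's 3-parameter gamma fit.

```python
# For positively skewed screening data, a cut point from a fitted gamma
# distribution hits the targeted 5% false-positive rate more closely than
# the normal-theory cut point (mean + 1.645 * SD). Toy data, assumed shape.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated drug-naive screening responses: skewed, gamma(shape=2).
naive = rng.gamma(shape=2.0, scale=1.0, size=5000)

# Normal-theory cut point targeting a 5% false-positive rate.
cut_normal = naive.mean() + 1.645 * naive.std(ddof=1)

# Gamma-based cut point: fit a 3-parameter gamma, take its 95th percentile.
a, loc, scale = stats.gamma.fit(naive)
cut_gamma = stats.gamma.ppf(0.95, a, loc=loc, scale=scale)

fpr_normal = (naive > cut_normal).mean()
fpr_gamma = (naive > cut_gamma).mean()
print(fpr_normal, fpr_gamma)   # normal-theory rate overshoots 5%; gamma is close
```

The overshoot of the normal-theory cut point shrinks as the gamma shape parameter grows, consistent with the bias pattern the abstract reports.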
Storbeck, Justin
2013-01-01
I investigated whether negative affective states enhance encoding of and memory for item-specific information, thereby reducing false memories. Positive, negative, and neutral moods were induced, and participants then completed a Deese-Roediger-McDermott (DRM) false-memory task. List items were presented in unique spatial locations or unique fonts to serve as measures of item-specific encoding. The negative mood conditions had more accurate memories for item-specific information, and they also had fewer false memories. The final experiment used a manipulation that drew attention to distinctive information, which aided learning of DRM words but also promoted item-specific encoding. For the condition that promoted item-specific encoding, false memories were reduced for the positive and neutral mood conditions to a rate similar to that of the negative mood condition. These experiments demonstrate that negative affective cues promote item-specific processing, reducing false memories. People in positive and negative moods encode events differently, creating different memories for the same event.
True detection limits in an experimental linearly heteroscedastic system. Part 1
NASA Astrophysics Data System (ADS)
Voigtman, Edward; Abraham, Kevin T.
2011-11-01
Using a lab-constructed laser-excited filter fluorimeter deliberately designed to exhibit linearly heteroscedastic, additive Gaussian noise, it has been shown that accurate estimates may be made of the true theoretical Currie decision levels (YC and XC) and true Currie detection limits (YD and XD) for the detection of rhodamine 6G tetrafluoroborate in ethanol. The obtained experimental values, for 5% probability of false positives and 5% probability of false negatives, were YC = 56.1 mV, YD = 125 mV, XC = 0.132 μg/mL and XD = 0.294 μg/mL. For 5% probability of false positives and 1% probability of false negatives, the obtained detection limits were YD = 158 mV and XD = 0.372 μg/mL. These decision levels and corresponding detection limits were shown to pass the ultimate test: they resulted in observed probabilities of false positives and false negatives that were statistically equivalent to the a priori specified values.
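For the homoscedastic special case, Currie's decision level and detection limit reduce to simple multiples of the blank standard deviation. The sketch below illustrates only that simplified case (the paper's linearly heteroscedastic treatment lets the noise standard deviation grow with concentration); the function name and sigma value are assumptions for illustration.

```python
# Simplified Currie-style computation: YC controls false positives at the
# blank, YD controls false negatives at the detection limit. Homoscedastic
# illustration only -- a constant noise standard deviation sigma0 is assumed.
from statistics import NormalDist

z = NormalDist().inv_cdf   # standard normal quantile function

def currie_limits(sigma0, p_fp=0.05, p_fn=0.05):
    """Decision level YC and detection limit YD above the blank mean."""
    y_c = z(1 - p_fp) * sigma0          # false-positive control at the blank
    y_d = y_c + z(1 - p_fn) * sigma0    # false-negative control at YD
    return y_c, y_d

yc, yd = currie_limits(sigma0=10.0)
print(yc, yd)   # ~16.45 and ~32.90 (z_0.95 = 1.645)
```

Lowering the false-negative probability to 1% raises only YD (YC is unchanged), mirroring the pattern in the experimental values above, where YD grows from 125 mV to 158 mV while YC stays at 56.1 mV.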
Spatial-Spectral Approaches to Edge Detection in Hyperspectral Remote Sensing
NASA Astrophysics Data System (ADS)
Cox, Cary M.
This dissertation advances geoinformation science at the intersection of hyperspectral remote sensing and edge detection methods. A relatively new phenomenology among its remote sensing peers, hyperspectral imagery (HSI) comprises only about 7% of all remote sensing research - there are five times as many radar-focused peer-reviewed journal articles as hyperspectral-focused peer-reviewed journal articles. Similarly, edge detection studies comprise only about 8% of image processing research, most of which is dedicated to image processing techniques most closely associated with end results, such as image classification and feature extraction. Given the centrality of edge detection to mapping, that most important of geographic functions, improving the collective understanding of hyperspectral imagery edge detection methods constitutes a research objective aligned to the heart of geoinformation sciences. Consequently, this dissertation endeavors to narrow the HSI edge detection research gap by advancing three HSI edge detection methods designed to leverage HSI's unique chemical identification capabilities in pursuit of generating accurate, high-quality edge planes. The Di Zenzo-based gradient edge detection algorithm, an innovative version of the Resmini HySPADE edge detection algorithm and a level set-based edge detection algorithm are tested against 15 traditional and non-traditional HSI datasets spanning a range of HSI data configurations, spectral resolutions, spatial resolutions, bandpasses and applications. This study empirically measures algorithm performance against Dr. John Canny's six criteria for a good edge operator: false positives, false negatives, localization, single-point response, robustness to noise and unbroken edges. The end state is a suite of spatial-spectral edge detection algorithms that produce satisfactory edge results against a range of hyperspectral data types applicable to a diverse set of earth remote sensing applications.
This work also explores the concept of an edge within hyperspectral space, the relative importance of spatial and spectral resolutions as they pertain to HSI edge detection and how effectively compressed HSI data improves edge detection results. The HSI edge detection experiments yielded valuable insights into the algorithms' strengths, weaknesses and optimal alignment to remote sensing applications. The gradient-based edge operator produced strong edge planes across a range of evaluation measures and applications, particularly with respect to false negatives, unbroken edges, urban mapping, vegetation mapping and oil spill mapping applications. False positives and uncompressed HSI data presented occasional challenges to the algorithm. The HySPADE edge operator produced satisfactory results with respect to localization, single-point response, oil spill mapping and trace chemical detection, and was challenged by false positives, declining spectral resolution and vegetation mapping applications. The level set edge detector produced high-quality edge planes for most tests and demonstrated strong performance with respect to false positives, single-point response, oil spill mapping and mineral mapping. False negatives were a regular challenge for the level set edge detection algorithm. Finally, HSI data optimized for spectral information compression and noise was shown to improve edge detection performance across all three algorithms, while the gradient-based algorithm and HySPADE demonstrated significant robustness to declining spectral and spatial resolutions.
Support Vector Machine algorithm for regression and classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Chenggang; Zavaljevski, Nela
2001-08-01
The software is an implementation of the Support Vector Machine (SVM) algorithm that was invented and developed by Vladimir Vapnik and his co-workers at AT&T Bell Laboratories. The specific implementation reported here is an Active Set method for solving a quadratic optimization problem that forms the major part of any SVM program. The implementation is tuned to the specific constraints generated in SVM learning; thus, it is more efficient than general-purpose quadratic optimization programs. A decomposition method has been implemented in the software that enables processing of large data sets. The size of the learning data is limited only by the capacity of the computer's physical memory. The software is flexible and extensible. Two upper bounds are implemented to regulate the SVM learning for classification, which allow users to adjust the false positive and false negative rates. The software can be used either as a standalone, general-purpose SVM regression or classification program, or be embedded into a larger software system.
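The trade-off those two upper bounds expose, shifting error mass between false positives and false negatives, can be illustrated without any SVM machinery by moving the decision threshold of a score-based classifier. This is a deliberately simpler stand-in for the described bounds, and the scores and labels below are invented:

```python
def confusion(scores, labels, threshold):
    """Count false positives and false negatives for a classifier that
    predicts positive whenever the score reaches the threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.9, 0.55]
labels = [0,   0,   1,    1,   1,    0,   1,   0]

# Raising the threshold converts false positives into false negatives,
# the same trade-off the SVM's per-class bounds let the user control.
trade_off = [confusion(scores, labels, t) for t in (0.3, 0.5, 0.7)]
# trade_off == [(2, 0), (1, 1), (0, 2)]
```

In an SVM the analogous knob is the pair of per-class penalty bounds rather than a post-hoc threshold, but the effect on the error balance is the same in spirit.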
Izumida, Toshihide; Sakata, Hidenao; Nakamura, Masahiko; Hayashibara, Yumiko; Inasaki, Noriko; Inahata, Ryo; Hasegawa, Sumiyo; Takizawa, Takenori; Kaya, Hiroyasu
2016-01-01
An outbreak of dengue fever occurred in Japan in August 2014. We herein report the case of a 63-year-old man who presented with a persistent fever in September 2014. Acute parvovirus B19 infection led to a false positive finding of dengue fever on a rapid diagnostic test (Panbio Dengue Duo Cassette(TM)). To the best of our knowledge, there are no previous reports of a false positive result for dengue IgM with the dengue rapid diagnostic test. We believe that epidemiological information on the prevalence of parvovirus B19 is useful for guiding the interpretation of a positive result with the dengue rapid diagnostic test.
Goldenberg, S D; Cliff, P R; Smith, S; Milner, M; French, G L
2010-01-01
Current diagnosis of Clostridium difficile infection (CDI) relies upon detection of toxins A/B in stool by enzyme immunoassay [EIA(A/B)]. This strategy is unsatisfactory because it has a low sensitivity resulting in significant false negatives. We investigated the performance of a two-step algorithm for diagnosis of CDI using detection of glutamate dehydrogenase (GDH). GDH-positive samples were tested for C. difficile toxin B gene (tcdB) by polymerase chain reaction (PCR). The performance of the two-step protocol was compared with toxin detection by the Meridian Premier EIA kit in 500 consecutive stool samples from patients with suspected CDI. The reference standard among samples that were positive by either EIA(A/B) or GDH testing was culture cytotoxin neutralisation (culture/CTN). Thirty-six (7%) of 500 samples were identified as true positives by culture/CTN. EIA(A/B) identified 14 of the positive specimens with 22 false negatives and two false positives. The two-step protocol identified 34 of the positive samples with two false positives and two false negatives. EIA(A/B) had a sensitivity of 39%, specificity of 99%, positive predictive value of 88% and negative predictive value of 95%. The two-step algorithm performed better, with corresponding values of 94%, 99%, 94% and 99% respectively. Screening for GDH before confirmation of positives by PCR is cheaper than screening all specimens by PCR and is an effective method for routine use. Current EIA(A/B) tests for CDI are of inadequate sensitivity and should be replaced; however, this may result in apparent changes in CDI rates that would need to be explained in national surveillance statistics. Copyright 2009 The Hospital Infection Society. Published by Elsevier Ltd. All rights reserved.
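The reported operating characteristics follow directly from the stated counts; a minimal check, assuming the remaining 462 of the 500 samples are true negatives:

```python
def diagnostics(tp, fp, fn, tn):
    """Standard test-performance measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# EIA(A/B): 14 true positives, 2 false positives, 22 false negatives;
# the remaining 462 of the 500 samples are true negatives.
eia = diagnostics(tp=14, fp=2, fn=22, tn=462)

# Two-step GDH + PCR: 34 TP, 2 FP, 2 FN, 462 TN.
two_step = diagnostics(tp=34, fp=2, fn=2, tn=462)
```

Rounding these values reproduces the paper's 39%/99%/88%/95% for the EIA and 94%/99%/94%/99% for the two-step algorithm.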
Klambauer, Günter; Schwarzbauer, Karin; Mayr, Andreas; Clevert, Djork-Arné; Mitterecker, Andreas; Bodenhofer, Ulrich; Hochreiter, Sepp
2012-01-01
Quantitative analyses of next-generation sequencing (NGS) data, such as the detection of copy number variations (CNVs), remain challenging. Current methods detect CNVs as changes in the depth of coverage along chromosomes. Technological or genomic variations in the depth of coverage thus lead to a high false discovery rate (FDR), even upon correction for GC content. In the context of association studies between CNVs and disease, a high FDR means many false CNVs, thereby decreasing the discovery power of the study after correction for multiple testing. We propose ‘Copy Number estimation by a Mixture Of PoissonS’ (cn.MOPS), a data processing pipeline for CNV detection in NGS data. In contrast to previous approaches, cn.MOPS incorporates modeling of depths of coverage across samples at each genomic position. Therefore, cn.MOPS is not affected by read count variations along chromosomes. Using a Bayesian approach, cn.MOPS decomposes variations in the depth of coverage across samples into integer copy numbers and noise by means of its mixture components and Poisson distributions, respectively. The noise estimate allows for reducing the FDR by filtering out detections having high noise that are likely to be false detections. We compared cn.MOPS with the five most popular methods for CNV detection in NGS data using four benchmark datasets: (i) simulated data, (ii) NGS data from a male HapMap individual with implanted CNVs from the X chromosome, (iii) data from HapMap individuals with known CNVs, (iv) high coverage data from the 1000 Genomes Project. cn.MOPS outperformed its five competitors in terms of precision (1–FDR) and recall for both gains and losses in all benchmark data sets. The software cn.MOPS is publicly available as an R package at http://www.bioinf.jku.at/software/cnmops/ and at Bioconductor. PMID:22302147
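The core idea, explaining each sample's read count at a position by a Poisson rate proportional to its copy number, can be sketched as follows. The counts and base rate are invented, and this maximum-likelihood toy stands in for the full Bayesian mixture that cn.MOPS actually fits across samples jointly:

```python
from math import factorial, log

def poisson_logpmf(k, lam):
    """Log of the Poisson probability mass function."""
    return k * log(lam) - lam - log(factorial(k))

def ml_copy_number(count, base_rate, max_cn=8):
    """Assign the copy number whose Poisson rate (cn/2 times the diploid
    base rate) best explains one sample's read count at one position."""
    return max(range(1, max_cn + 1),
               key=lambda cn: poisson_logpmf(count, base_rate * cn / 2))

# Hypothetical read counts for four samples at one genomic position,
# with 100 reads expected at the normal copy number of 2.
counts = [98, 51, 203, 100]
cns = [ml_copy_number(c, base_rate=100) for c in counts]
# cns == [2, 1, 4, 2]: counts near 50 suggest a loss, near 200 a gain
```

cn.MOPS additionally models a noise component and pools information across samples, which is what lets it filter out high-noise positions and keep the FDR low.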
Wavelet method for CT colonography computer-aided polyp detection.
Li, Jiang; Van Uitert, Robert; Yao, Jianhua; Petrick, Nicholas; Franaszek, Marek; Huang, Adam; Summers, Ronald M
2008-08-01
Computed tomographic colonography (CTC) computer-aided detection (CAD) is a new method to detect colon polyps. Colonic polyps are abnormal growths that may become cancerous. Detection and removal of colonic polyps, particularly larger ones, has been shown to reduce the incidence of colorectal cancer. While high sensitivities and low false positive rates are consistently achieved for the detection of polyps sized 1 cm or larger, lower sensitivities and higher false positive rates occur when the goal of CAD is to identify "medium"-sized polyps, 6-9 mm in diameter. Such medium-sized polyps may be important for clinical patient management. We have developed a wavelet-based postprocessor to reduce false positives for this polyp size range. We applied the wavelet-based postprocessor to CTC CAD findings from 44 patients in whom 45 polyps with sizes of 6-9 mm were found at segmentally unblinded optical colonoscopy and visible on retrospective review of the CT colonography images. Prior to the application of the wavelet-based postprocessor, the CTC CAD system detected 33 of the polyps (sensitivity 73.33%) with 12.4 false positives per patient, a sensitivity comparable to that of expert radiologists. Fourfold cross validation with 5000 bootstraps showed that the wavelet-based postprocessor could reduce the false positives by 56.61% (p <0.001), to 5.38 per patient (95% confidence interval [4.41, 6.34]), without significant sensitivity degradation (32/45, 71.11%, 95% confidence interval [66.39%, 75.74%], p=0.1713). We conclude that this wavelet-based postprocessor can substantially reduce the false positive rate of our CTC CAD for this important polyp size range.
Deep belief networks for false alarm rejection in forward-looking ground-penetrating radar
NASA Astrophysics Data System (ADS)
Becker, John; Havens, Timothy C.; Pinar, Anthony; Schulz, Timothy J.
2015-05-01
Explosive hazards are one of the most deadly threats in modern conflicts. The U.S. Army is interested in a reliable way to detect these hazards at range. A promising way of accomplishing this task is using a forward-looking ground-penetrating radar (FLGPR) system. Recently, the Army has been testing a system that utilizes both L-band and X-band radar arrays on a vehicle mounted platform. Using data from this system, we sought to improve the performance of a constant false-alarm-rate (CFAR) prescreener through the use of a deep belief network (DBN). DBNs have also been shown to perform exceptionally well at generalized anomaly detection. They combine unsupervised pre-training with supervised fine-tuning to generate low-dimensional representations of high-dimensional input data. We seek to take advantage of these two properties by training a DBN on the features of the CFAR prescreener's false alarms (FAs) and then use that DBN to separate FAs from true positives. Our analysis shows that this method improves the detection statistics significantly. By training the DBN on a combination of image features, we were able to significantly increase the probability of detection while maintaining a nominal number of false alarms per square meter. Our research shows that DBNs are a good candidate for improving detection rates in FLGPR systems.
Kim, Ko Eun; Jeoung, Jin Wook; Park, Ki Ho; Kim, Dong Myung; Kim, Seok Hwan
2015-03-01
To investigate the rate and associated factors of false-positive diagnostic classification of ganglion cell analysis (GCA) and retinal nerve fiber layer (RNFL) maps, and characteristic false-positive patterns on optical coherence tomography (OCT) deviation maps. Prospective, cross-sectional study. A total of 104 healthy eyes of 104 normal participants. All participants underwent peripapillary and macular spectral-domain (Cirrus-HD, Carl Zeiss Meditec Inc, Dublin, CA) OCT scans. False-positive diagnostic classification was defined as yellow or red color-coded areas for GCA and RNFL maps. Univariate and multivariate logistic regression analyses were used to determine associated factors. Eyes with abnormal OCT deviation maps were categorized on the basis of the shape and location of abnormal color-coded area. Differences in clinical characteristics among the subgroups were compared. (1) The rate and associated factors of false-positive OCT maps; (2) patterns of false-positive, color-coded areas on the GCA deviation map and associated clinical characteristics. Of the 104 healthy eyes, 42 (40.4%) and 32 (30.8%) showed abnormal diagnostic classifications on any of the GCA and RNFL maps, respectively. Multivariate analysis revealed that false-positive GCA diagnostic classification was associated with longer axial length and larger fovea-disc angle, whereas longer axial length and smaller disc area were associated with abnormal RNFL maps. Eyes with abnormal GCA deviation map were categorized as group A (donut-shaped round area around the inner annulus), group B (island-like isolated area), and group C (diffuse, circular area with an irregular inner margin in either). The axial length showed a significant increasing trend from group A to C (P=0.001), and likewise, the refractive error was more myopic in group C than in groups A (P=0.015) and B (P=0.014). 
Group C had thinner average ganglion cell-inner plexiform layer thickness compared with other groups (group A=B>C, P=0.004). Abnormal OCT diagnostic classification should be interpreted with caution, especially in eyes with long axial lengths, large fovea-disc angles, and small optic discs. Our findings suggest that the characteristic patterns of OCT deviation map can provide useful clues to distinguish glaucomatous changes from false-positive findings. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Mobile chemical detector (AP2C+SP4E) as an aid for medical decision making in the battlefield.
Eisenkraft, Arik; Markel, Gal; Simovich, Shirley; Layish, Ido; Hoffman, Azik; Finkelstein, Arseny; Rotman, Eran; Dushnitsky, Tsvika; Krivoy, Amir
2007-09-01
The combination of the AP2C unit with the SP4E kit composes a lightweight mobile detector of chemical warfare agents (CWA), such as nerve and mustard agents, with both vapor- and liquid-sampling capabilities. This apparatus was recently introduced into our military medical units as an aid for detection of CWA on casualties. Importantly, critical information regarding the applicability in the battlefield was absent. In view of the serious consequences that might follow a proclamation of CWA recognition in battlefield, a high false-positive rate positions the utilization of this apparatus as a medical decision tool in question. We have therefore conducted a field experiment to test the false-positive rate as well as analyze possible factors leading to false-positive readings with this device. The experiment was carried out before and after a 4-day army field exercise, using a standard AP2C device, a SP4E surface sampling kit, and a specially designed medical sampling kit for casualties, intended for medical teams. Soldiers were examined at rest, after mild exercise, and after 4 days in the field. The readings with AP2C alone were compared to the combination of AP2C and SP4E and to the medical sampling kit. Various body fluids served as negative controls. Remarkably, we found a false-positive rate of 57% at rest and after mild exercise, and an even higher rate of 64% after the 4-day field exercise with the AP2C detector alone, as compared to almost no false-positive readings with the combination of AP2C and SP4E. Strikingly, the medical sampling kit has yielded numerous false-positive readings, even in normal body fluids such as blood, urine, and saliva. We therefore see no place for using the medical sampling kit due to an unaccepted high rate of false-positive readings. Finally, we have designed an algorithm that uses the entire apparatus of AP2C and SP4E as a reliable validation tool for medical triage in the setting of exposure to nerve agents in the battlefield.
False Positive and False Negative Effects on Network Attacks
NASA Astrophysics Data System (ADS)
Shang, Yilun
2018-01-01
Robustness against attacks serves as evidence for complex network structures and the failure mechanisms that lie behind them. Most often, due to limited detection capability or good disguises, attacks on networks are subject to false positives and false negatives, meaning that functional nodes may be falsely regarded as compromised by the attacker and vice versa. In this work, we initiate a study of false positive/negative effects on network robustness against three fundamental types of attack strategies, namely, random attacks (RA), localized attacks (LA), and targeted attacks (TA). By developing a general mathematical framework based upon the percolation model, we investigate, analytically and by numerical simulation, attack robustness with false positive/negative rates (FPR/FNR) on three benchmark models: Erdős-Rényi (ER) networks, random regular (RR) networks, and scale-free (SF) networks. We show that ER networks are equivalently robust against RA and LA only when the FPR equals zero or the initial network is intact. We find several interesting crossovers in RR and SF networks when the FPR is taken into consideration. By defining the cost of attack, we observe diminishing marginal attack efficiency for RA, LA, and TA. Our finding highlights the potential risk of underestimating or ignoring the FPR in understanding attack robustness. The results may provide insights into ways of enhancing the robustness of network architectures and improving the level of protection of critical infrastructures.
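A minimal Monte-Carlo version of this setup shows how a nonzero FPR inflates the effective damage of a random attack. The network size, edge probability, and rates below are hypothetical, and the simulation stands in for the paper's analytical percolation framework:

```python
import random

def er_graph(n, p, seed=0):
    """Erdos-Renyi G(n, p) as an adjacency dict."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def giant_component(adj, removed):
    """Size of the largest connected component after node removal."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

def random_attack(adj, frac, fpr, fnr, seed=1):
    """Attack a random fraction `frac` of nodes; each intended target
    survives with probability fnr (false negative) and each non-target
    is removed with probability fpr (false positive)."""
    rng = random.Random(seed)
    nodes = list(adj)
    targets = set(rng.sample(nodes, int(frac * len(nodes))))
    removed = {u for u in nodes
               if (u in targets and rng.random() > fnr)
               or (u not in targets and rng.random() < fpr)}
    return giant_component(adj, removed)

g = er_graph(200, 0.04)
size_clean = random_attack(g, frac=0.3, fpr=0.0, fnr=0.0)
size_fpr = random_attack(g, frac=0.3, fpr=0.2, fnr=0.0)  # collateral removals
```

With fnr = 0 every intended target is removed in both runs, so the FPR run removes a superset of nodes and its giant component can only be smaller, which is the "underestimated risk" the paper quantifies analytically.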
Dong, YiJie; Mao, MinJing; Zhan, WeiWei; Zhou, JianQiao; Zhou, Wei; Yao, JieJie; Hu, YunYun; Wang, Yan; Ye, TingJun
2018-06-01
Our goal was to assess the diagnostic efficacy of ultrasound (US)-guided fine-needle aspiration (FNA) of thyroid nodules according to size and US features. A retrospective correlation was made between 1745 whole thyroidectomy and hemithyroidectomy specimens and their preoperative US-guided FNA results. All cases were divided into 5 groups according to nodule size (≤5, 5.1-10, 10.1-15, 15.1-20, and >20 mm). For target nodules, static images and cine clips of conventional US and color Doppler were obtained. The US images were reviewed and evaluated by two radiologists, each with at least 5 years of US experience, who were blinded to the pathology results; consensus was then reached. The Bethesda category I rate was higher in nodules larger than 15 mm (P < .05). Diagnostic accuracy was best in nodules of 5 to 10 mm in diameter. The sensitivity, accuracy, positive predictive value (PPV), and likelihood ratio (LR) for negative US-guided FNA results were better in nodules with a size range of 5 to 15 mm. The specificity, negative predictive value (NPV), and LR for positive results, as well as the Youden index, rose with increasing nodule size. Seventeen false-positive and 60 false-negative results were found in this study. The false-negative rate rose with increasing nodule size. However, the false-positive rate was highest in the group containing the smallest nodules. Nodules with circumscribed margins and those that were nonsolid and nonhypoechoic and had no microcalcifications correlated with Bethesda I FNA results. Nodules with circumscribed margins and those that were nonsolid, heterogeneous, and nonhypoechoic and had increased vascularity correlated with false-negative FNA results. Borders correlated with Bethesda I, false-negative, and false-positive FNA results. Tiny nodules (≤5 mm) with obscure borders tended to yield false-positive FNA results. Large nodules (>20 mm) with several US features tended to yield false-negative FNA results. © 2017 by the American Institute of Ultrasound in Medicine.
Otten, J D M; Fracheboud, J; den Heeten, G J; Otto, S J; Holland, R; de Koning, H J; Broeders, M J M; Verbeek, A L M
2013-10-01
Women require balanced, high-quality information when making an informed decision on screening benefits and harms before attending biennial mammographic screening. The cumulative risk of a false-positive recall and/or a (small) screen-detected or interval cancer over 13 consecutive screening examinations, for women aged 50 at the start of screening, was estimated using data from the Nijmegen programme, the Netherlands. Women who underwent 13 successive screens starting in the period 1975-1976 had a 5.3% cumulative chance of a screen-detected cancer, with a 4.2% risk of at least one false-positive recall. The risk of being diagnosed with interval cancer was 3.7%. Two decades later, these estimates were 6.9%, 7.3% and 2.9%, respectively. The chance of detection of a small, favourable invasive breast cancer, anticipating a normal life expectancy, rose from 2.3% to 3.7%. Extrapolation to digital screening mammography indicates that the proportion of false-positive results will rise to 16%. Dutch women about to participate in the screening programme can be reassured that the chance of false-positive recall in the Netherlands is relatively low. A new screening policy and improved mammography have increased the detection of early screening carcinomas and lowered the risk of interval carcinoma.
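The cumulative figures combine per-round probabilities across 13 screening rounds. Under the simplifying (and not strictly correct) assumption of independent rounds with equal probability, the arithmetic looks like this; the 1.3% per-round figure is a back-calculated illustration, not a number from the study:

```python
def cumulative_risk(per_screen_p, n_screens=13):
    """Chance of at least one false-positive recall over repeated
    screening rounds, assuming (simplistically) independent rounds
    with equal per-round probability."""
    return 1 - (1 - per_screen_p) ** n_screens

# A per-round false-positive probability of about 1.3% accumulates to
# roughly 16% over 13 rounds, the order of magnitude quoted for
# digital mammography.
risk = cumulative_risk(0.0133)   # ~0.16
```

In practice recall probabilities vary by round (first screens recall more) and are correlated within a woman, so programme estimates like those above are derived from cohort data rather than this independence formula.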
Gagliardi, L; Chapman, I M; O'Loughlin, P; Torpy, D J
2010-04-01
The diagnosis of subclinical Cushing's syndrome (SCS) is important, but its relative rarity amongst patients with common metabolic disorders requires a simple test with a low false-positive rate. Using nocturnal salivary cortisol (NSC), which we first validated in patients with suspected and proven Cushing's syndrome, we screened 106 overweight patients with type 2 diabetes mellitus, a group at high risk of SCS and nontumoral hypothalamic-pituitary-adrenal axis perturbations. Our hypothesis was that a lower false-positive rate with NSC was likely, compared with that reported with the dexamethasone suppression test (DST) (10-20%), currently the foundation of diagnosis of SCS. No participant had clinically apparent Cushing's syndrome. Three participants had an elevated NSC but further testing excluded SCS. In this study, NSC had a lower false-positive rate (3%) than previously reported for the DST. Given the reported excellent performance of NSC in detection of hypercortisolism, the low false-positive rate in SCS suggests NSC may be superior to the DST for SCS screening. The NSC and DST should be compared directly in metabolic disorder patients; although our data suggest the patient group will need to be substantially larger to definitively determine the optimal screening test. Georg Thieme Verlag KG Stuttgart New York.
Tetteh, Ato Kwamena; Agyarko, Edward
2017-01-01
Screening results of 488 pregnant women aged 15-44 years, whose blood samples had been tested on-site using First Response® HIV 1/2 and confirmed with INNO-LIA™ HIV I/II Score, were used. Of this total, 178 were reactive (HIV I, 154; HIV II, 2; and HIV I and HIV II, 22). Of the 154 HIV I-reactive samples, 104 were confirmed to be HIV I-positive and 2 were confirmed to be HIV II-positive, while 48 were confirmed to be negative [false positive rate = 17.44% (13.56-21.32)]. The two HIV II samples submitted were confirmed to be negative with the confirmatory test. For the 22 HIV I and HIV II samples, 7 were confirmed to be HIV I-positive and 1 was confirmed to be HIV I- and HIV II-positive, while 14 were confirmed to be negative. Of the 310 nonreactive samples, 6 were confirmed to be HIV I-positive and 1 was confirmed to be HIV II-positive [false negative rate = 5.79% (1.63-8.38)], while 303 were negative. False negative outcomes will remain unconfirmed, with no management options for the client. The false negative rate of 5.79% requires attention, as its implications for the control of HIV/AIDS could be dire.
Weemhoff, M; Kluivers, K B; Govaert, B; Evers, J L H; Kessels, A G H; Baeten, C G
2013-03-01
This study concerns the level of agreement between transperineal ultrasound and evacuation proctography for diagnosing enteroceles and intussusceptions. In a prospective observational study, 50 consecutive women who were scheduled for evacuation proctography also underwent transperineal ultrasound. Sensitivity, specificity, positive (PPV) and negative predictive value, as well as the positive and negative likelihood ratio of transperineal ultrasound, were assessed in comparison to evacuation proctography. To determine the interobserver agreement of transperineal ultrasound, the quadratic weighted kappa was calculated. Furthermore, receiver operating characteristic curves were generated to show the diagnostic capability of transperineal ultrasound. For diagnosing intussusceptions (PPV 1.00), a positive finding on transperineal ultrasound was predictive of an abnormal evacuation proctography. Sensitivity of transperineal ultrasound was poor for intussusceptions (0.25). For diagnosing enteroceles, the positive likelihood ratio was 2.10 and the negative likelihood ratio 0.85. There are many false-positive findings of enteroceles on ultrasonography (PPV 0.29). The interobserver agreement of the two ultrasonographers, assessed as the quadratic weighted kappa, was 0.44 for diagnosing enteroceles and 0.23 for diagnosing intussusceptions. An intussusception on ultrasound is predictive of an abnormal evacuation proctography. For diagnosing enteroceles, the diagnostic quality of transperineal ultrasound was limited compared to evacuation proctography.
The effect of algorithms on copy number variant detection.
Tsuang, Debby W; Millard, Steven P; Ely, Benjamin; Chi, Peter; Wang, Kenneth; Raskind, Wendy H; Kim, Sulgi; Brkanac, Zoran; Yu, Chang-En
2010-12-30
The detection of copy number variants (CNVs) and the results of CNV-disease association studies rely on how CNVs are defined, and because array-based technologies can only infer CNVs, CNV-calling algorithms can produce vastly different findings. Several authors have noted the large-scale variability between CNV-detection methods, as well as the substantial false positive and false negative rates associated with those methods. In this study, we use variations of four common algorithms for CNV detection (PennCNV, QuantiSNP, HMMSeg, and cnvPartition) and two definitions of overlap (any overlap and an overlap of at least 40% of the smaller CNV) to illustrate the effects of varying algorithms and definitions of overlap on CNV discovery. We used a 56K Illumina genotyping array enriched for CNV regions to generate hybridization intensities and allele frequencies for 48 Caucasian schizophrenia cases and 48 age-, ethnicity-, and gender-matched control subjects. No algorithm found a difference in CNV burden between the two groups. However, the total number of CNVs called ranged from 102 to 3,765 across algorithms. The mean CNV size ranged from 46 kb to 787 kb, and the average number of CNVs per subject ranged from 1 to 39. The number of novel CNVs not previously reported in normal subjects ranged from 0 to 212. Motivated by the availability of multiple publicly available genome-wide SNP arrays, investigators are conducting numerous analyses to identify putative additional CNVs in complex genetic disorders. However, the number of CNVs identified in array-based studies, and whether these CNVs are novel or valid, will depend on the algorithm(s) used. Thus, given the variety of methods used, there will be many false positives and false negatives. Both guidelines for the identification of CNVs inferred from high-density arrays and the establishment of a gold standard for validation of CNVs are needed.
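The two overlap definitions compared in the study are easy to state precisely. A small sketch, with hypothetical interval endpoints and a made-up rule name for the 40%-of-smaller criterion:

```python
def overlap_len(a, b):
    """Length of the overlap between two CNV calls given as (start, end)."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def concordant(a, b, rule="any"):
    """'any': any shared base counts as an overlap.
    'smaller40': the overlap must cover at least 40% of the smaller call."""
    ov = overlap_len(a, b)
    if rule == "any":
        return ov > 0
    smaller = min(a[1] - a[0], b[1] - b[0])
    return ov >= 0.4 * smaller

cnv1, cnv2 = (1000, 5000), (4500, 9000)  # hypothetical calls from two algorithms
any_hit = concordant(cnv1, cnv2, "any")           # True: they share 500 bp
strict_hit = concordant(cnv1, cnv2, "smaller40")  # False: 500 bp is 12.5% of 4 kb
```

The example shows why the choice of definition matters: the same pair of calls counts as concordant under one rule and discordant under the other, which scales up to the large between-algorithm disagreements the study reports.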
Aly, Ibrahim; Taher, Eman E; El Nain, Gehan; El Sayed, Hoda; Mohammed, Faten A; Hamad, Rabab S; Bayoumy, Elsayed M
2018-01-01
Nanotechnology is a promising arena for generating new applications in medicine. To successfully functionalise nanoparticles for a given biomedical application, a wide range of chemical, physical and biological factors have to be taken into account. Silica-coated nanoparticles (SiO2NP) exhibit substantial diagnostic activity owing to their large surface-to-volume ratios and crystallographic surface structure. This work aimed to evaluate the advantage of bioconjugation of SiO2NP with PAb against Toxoplasma lysate antigen (TLA) as an innovative diagnostic method for human toxoplasmosis. This cross-sectional study included 120 individuals, divided into Group I: 70 patients suspected of Toxoplasma gondii infection based on the presence of clinical manifestations; Group II: 30 patients harboring parasites other than T. gondii; and Group III: 20 apparently healthy individuals free from toxoplasmosis and other parasitic infections, who served as negative controls. Detection of circulating Toxoplasma antigen was performed by sandwich ELISA and nano-sandwich ELISA on sera and pooled urine of the human samples. Using sandwich ELISA, 10 out of 70 suspected Toxoplasma-infected human serum samples showed false negative results and 8 out of 30 in the other-parasites group were false positive, giving 85.7% sensitivity and 84.0% specificity, while the sensitivity and specificity were 78.6% and 70% respectively in urine samples. Using nano-sandwich ELISA, 7 out of 70 suspected Toxoplasma-infected human samples showed false negative results and the sensitivity of the assay was 90.0%, while 4 out of 30 in the other-parasites group were false positive, giving 92.0% specificity; the sensitivity and specificity were 82.6% and 80% respectively in urine samples.
In conclusion, our data demonstrated that loading SiO2 nanoparticles with PAb increased the sensitivity and specificity of the nano-sandwich ELISA for detection of T. gondii antigens in serum and urine samples; thus, active (early) and light infections could be easily detected. Copyright © 2017 Elsevier B.V. All rights reserved.
Reddy, T; McLaughlin, P D; Mallinson, P I; Reagan, A C; Munk, P L; Nicolaou, S; Ouellette, H A
2015-02-01
The purpose of this study is to describe our initial clinical experience with dual-energy computed tomography (DECT) virtual non-calcium (VNC) images for the detection of bone marrow (BM) edema in patients with suspected hip fracture following trauma. Twenty-five patients who presented to the emergency department at a level 1 trauma center between January 1, 2011 and January 1, 2013 with clinical suspicion of hip fracture and normal radiographs were included. All CT scans were performed on a dual-source, dual-energy CT system. VNC images were generated using prototype software and were compared to regular bone reconstructions by two musculoskeletal radiologists in consensus. Radiological and/or clinical diagnosis of fracture at 30-day follow-up was used as the reference standard. Twenty-one patients were found to have DECT-VNC signs of bone marrow edema. Eighteen of these 21 patients were true positive and three were false positive. A concordant fracture was clearly seen on bone reconstruction images in 15 of the 18 true positive cases. In three cases, DECT-VNC was positive for bone marrow edema where bone reconstruction CT images were negative. Four patients demonstrated no DECT-VNC signs of bone marrow edema: two cases were true negative, two cases were false negative. When compared with the gold standard of hip fracture determined at retrospective follow-up, the sensitivity of DECT-VNC images of the hip was 90 %, specificity was 40 %, positive predictive value was 86 %, and negative predictive value was 50 %. Our initial experience would suggest that DECT-VNC is highly sensitive but poorly specific in the diagnosis of hip fractures in patients with normal radiographs. The value of DECT-VNC primarily lies in its ability to help detect fractures which may be subtle or undetectable on bone reconstruction CT images.
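The reported performance figures can be reproduced from the 2×2 counts given in the abstract (18 true positives, 3 false positives, 2 false negatives, 2 true negatives); a minimal sketch of the standard diagnostic metrics:

```python
# Recomputing the DECT-VNC performance figures from the counts in the
# abstract: sensitivity 90%, specificity 40%, PPV ~86%, NPV 50%.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # fraction of fractures detected
        "specificity": tn / (tn + fp),   # fraction of non-fractures cleared
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

m = diagnostic_metrics(tp=18, fp=3, fn=2, tn=2)
print({k: round(v, 2) for k, v in m.items()})
# → {'sensitivity': 0.9, 'specificity': 0.4, 'ppv': 0.86, 'npv': 0.5}
```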
A Versatile Cell Death Screening Assay Using Dye-Stained Cells and Multivariate Image Analysis.
Collins, Tony J; Ylanko, Jarkko; Geng, Fei; Andrews, David W
2015-11-01
A novel dye-based method for measuring cell death in image-based screens is presented. Unlike conventional high- and medium-throughput cell death assays, which accurately measure only one form of cell death, multivariate analysis of micrographs of cells stained with an inexpensive mix of the red dye nonyl acridine orange and a nuclear stain made it possible to quantify cell death induced by a variety of different agonists even without a positive control. Surprisingly, using a single known cytotoxic agent as a positive control for training a multivariate classifier allowed accurate quantification of cytotoxicity for mechanistically unrelated compounds, enabling generation of dose-response curves. Comparison with low-throughput biochemical methods suggested that cell death was accurately distinguished from cell stress induced by low concentrations of the bioactive compounds Tunicamycin and Brefeldin A. High-throughput image-based analyses of more than 300 kinase inhibitors correctly identified 11 as cytotoxic with only 1 false positive. The simplicity and robustness of this dye-based assay make it particularly suited to live-cell screening for toxic compounds.
Prevalence of Brucella abortus antibodies in equines of a tropical region of Mexico
Acosta-González, Rosa I.; González-Reyes, Ismael; Flores-Gutiérrez, Gerardo H.
2006-01-01
A cross-sectional study was conducted to determine the seroprevalence rate of equine brucellosis in the state of Tamaulipas, Mexico. Serum samples from 420 equines were analyzed with the Rose Bengal test at cell concentrations of 3% (RBT-3%) and 8% (RBT-8%), and positive results were confirmed with the Rivanol test (RT). Risk factors were determined with the prevalence ratio (PR) and the use of variables generated from a questionnaire administered to the animals’ owners. Serum from 1 stallion had positive results with both the RBT-8% and the RT, for a seroprevalence rate of 0.238%. Drinking of water from a pond that was also used by cattle and dogs was the only associated risk factor for this animal (PR = 0.25). However, the results were considered false-positive, because the results for other horses in the same environmental conditions were negative. Although brucellosis is considered endemic in ruminants in the study area, the results obtained suggest that equines are not a reservoir of brucellosis and do not play an important role in the epidemiologic patterns of this disease in northeastern Mexico. PMID:17042384
Jia, Tingting; Zhang, Lei; Wang, Guojing; Zhang, Rui; Zhang, Kuo; Lin, Guigao; Xie, Jiehong; Wang, Lunan; Li, Jinming
2015-01-01
In recent years, nucleic acid tests for detection of measles virus RNA have been widely applied in laboratories belonging to the measles surveillance system of China. An external quality assessment program was established by the National Center for Clinical Laboratories to evaluate the performance of nucleic acid tests for measles virus. The external quality assessment panel, which consisted of 10 specimens, was prepared using armored RNAs (noninfectious complexes of MS2 bacteriophage coat proteins encapsulating measles virus RNA) as measles virus surrogate controls. Conserved sequences amplified from a circulating measles virus strain or from a vaccine strain were encapsulated into these armored RNAs. Forty-one participating laboratories from 15 provinces, municipalities, or autonomous regions that currently conduct molecular detection of measles virus enrolled in the external quality assessment program, including 40 measles surveillance system laboratories and one diagnostic reagent manufacturer. Forty laboratories used commercial reverse transcription-quantitative PCR kits, with only one laboratory applying a conventional PCR method developed in-house. The results indicated that most of the participants (38/41, 92.7%) were able to accurately detect the panel with 100% sensitivity and 100% specificity. Although a wide range of commercially available kits for nucleic acid extraction and reverse transcription polymerase chain reaction were used by the participants, only two false-negative results and one false-positive result were generated; these came from three separate laboratories. Both false-negative results were obtained on specimens with the lowest concentration (1.2 × 10⁴ genomic equivalents/mL). In addition, all 18 participants from Beijing achieved 100% sensitivity and 100% specificity. Overall, we conclude that the majority of the laboratories evaluated have reliable diagnostic capacities for the detection of measles virus.
PMID:26244795
Theurer, M E; White, B J; Larson, R L; Schroeder, T C
2015-03-01
Bovine respiratory disease is an economically important syndrome in the beef industry, and diagnostic accuracy is important for optimal disease management. The objective of this study was to determine whether improving diagnostic sensitivity or specificity was of greater economic value at varied levels of respiratory disease prevalence by using Monte Carlo simulation. Existing literature was used to populate model distributions of published sensitivity, specificity, and performance (ADG, carcass weight, yield grade, quality grade, and mortality risk) differences among calves based on clinical respiratory disease status. Data from multiple cattle feeding operations were used to generate true ranges of respiratory disease prevalence and associated mortality. Input variables were combined into a single model that calculated estimated net returns for animals by diagnostic category (true positive, false positive, false negative, and true negative) based on the prevalence, sensitivity, and specificity for each iteration. Net returns for each diagnostic category were multiplied by the proportion of animals in each diagnostic category to determine group profitability. Apparent prevalence was categorized into low (<15%) and high (≥15%) groups. For both apparent prevalence categories, increasing specificity created more rapid, positive change in net returns than increasing sensitivity. Improvement of diagnostic specificity, perhaps through a confirmatory test interpreted in series or pen-level diagnostics, can increase diagnostic value more than improving sensitivity. Mortality risk was the primary driver for net returns. The results from this study are important for determining future research priorities to analyze diagnostic techniques for bovine respiratory disease and provide a novel way for modeling diagnostic tests.
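The profitability calculation described above can be sketched directly: the expected net return per animal is the proportion of animals in each diagnostic category (derived from prevalence, sensitivity, and specificity) multiplied by that category's net return. The per-category dollar figures below are hypothetical placeholders, not values from the study:

```python
# Sketch of an expected-net-return model by diagnostic category, in the
# spirit of the Monte Carlo analysis described above. Dollar returns are
# hypothetical; only the structure follows the abstract.

def category_proportions(prev, sens, spec):
    return {
        "true_pos": prev * sens,
        "false_neg": prev * (1 - sens),
        "true_neg": (1 - prev) * spec,
        "false_pos": (1 - prev) * (1 - spec),
    }

def expected_net_return(prev, sens, spec, returns):
    props = category_proportions(prev, sens, spec)
    return sum(props[c] * returns[c] for c in props)

returns = {"true_pos": 40.0, "false_neg": -120.0,   # hypothetical $/head
           "true_neg": 60.0, "false_pos": 20.0}

# At 10% apparent prevalence, a 10-point gain in specificity moves net
# returns more than a 10-point gain in sensitivity (with these inputs):
base = expected_net_return(0.10, 0.60, 0.70, returns)
up_sens = expected_net_return(0.10, 0.70, 0.70, returns)
up_spec = expected_net_return(0.10, 0.60, 0.80, returns)
print(up_spec - base > up_sens - base)  # True with these placeholder values
```

The intuition matches the abstract: at low prevalence most animals are truly negative, so the terms weighted by (1 − prevalence) dominate and specificity improvements move the total more.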
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berman, Benjamin P.; Pfeiffer, Barret D.; Laverty, Todd R.
2004-08-06
The identification of sequences that control transcription in metazoans is a major goal of genome analysis. In a previous study, we demonstrated that searching for clusters of predicted transcription factor binding sites could discover active regulatory sequences, and identified 37 regions of the Drosophila melanogaster genome with high densities of predicted binding sites for five transcription factors involved in anterior-posterior embryonic patterning. Nine of these clusters overlapped known enhancers. Here, we report the results of in vivo functional analysis of 27 remaining clusters. We generated transgenic flies carrying each cluster attached to a basal promoter and reporter gene, and assayed embryos for reporter gene expression. Six clusters are enhancers of adjacent genes: giant, fushi tarazu, odd-skipped, nubbin, squeeze and pdm2; three drive expression in patterns unrelated to those of neighboring genes; the remaining 18 do not appear to have enhancer activity. We used the Drosophila pseudoobscura genome to compare patterns of evolution in and around the 15 positive and 18 false-positive predictions. Although conservation of primary sequence cannot distinguish true from false positives, conservation of binding-site clustering accurately discriminates functional binding-site clusters from those with no function. We incorporated conservation of binding-site clustering into a new genome-wide enhancer screen, and predict several hundred new regulatory sequences, including 85 adjacent to genes with embryonic patterns. Measuring conservation of sequence features closely linked to function--such as binding-site clustering--makes better use of comparative sequence data than commonly used methods that examine only sequence identity.
Chen, Qianting; Dai, Congling; Zhang, Qianjun; Du, Juan; Li, Wen
2016-10-01
To evaluate the prediction performance of five bioinformatics software tools (SIFT, PolyPhen2, MutationTaster, Provean, MutationAssessor). From our own database of genetic mutations collected over the past five years, the Chinese literature database, the Human Gene Mutation Database, and dbSNP, 121 missense mutations confirmed by functional studies and 121 missense mutations suspected to be pathogenic by pedigree analysis were used as the positive gold standard, while 242 missense mutations with minor allele frequency (MAF) >5% in dominant hereditary diseases were used as the negative gold standard. The selected mutations were predicted with the five tools. Based on the results, the performance of the five tools was evaluated in terms of sensitivity, specificity, positive predictive value, false positive rate, negative predictive value, false negative rate, false discovery rate, accuracy, and the receiver operating characteristic (ROC) curve. In terms of sensitivity, negative predictive value, and false negative rate, the rank was MutationTaster, PolyPhen2, Provean, SIFT, and MutationAssessor. For specificity and false positive rate, the rank was MutationTaster, Provean, MutationAssessor, SIFT, and PolyPhen2. For positive predictive value and false discovery rate, the rank was MutationTaster, Provean, MutationAssessor, PolyPhen2, and SIFT. For area under the ROC curve (AUC) and accuracy, the rank was MutationTaster, Provean, PolyPhen2, MutationAssessor, and SIFT. The prediction performance of software may differ when different parameters are used. Among the five tools, MutationTaster showed the best prediction performance.
Manlutac, Anna Liza M; Giesick, Jill S; McVay, Patricia A
2013-12-01
HIV screening assays have gone through several generations of development in an effort to narrow the "window period" of detection. Utilizing a fourth generation HIV screening assay has the potential to detect earlier HIV infection, thus reducing HIV-1 transmission. To identify acute infections to decrease HIV transmission in San Diego County. Serum specimens were collected from clients seen by multiple submitters in San Diego County. All acceptable specimens were screened using the 4th Gen Combo Assay. Initially reactive specimens were repeated in duplicate and if repeatedly reactive, were confirmed by HIV-1 Immunofluorescent Antibody Assay (IFA). IFA negative/inconclusive specimens were sent for HIV-1 NAT and HIV-2 antibody testing to referral laboratories. BioRad Multispot HIV-1/HIV-2 Rapid Test was also performed on a subset of specimens. Of 14,559 specimens received in 20 months, 14,517 specimens were tested. Of the 14,517 specimens that were tested, a total of 279 (1.9%) specimens were CIA repeatedly reactive and 240 of the 279 confirmed by HIV-1 IFA. Thirty-nine gave IFA negative/inconclusive result and 30 were further tested for HIV-1 NAT and 36 for HIV-2 antibody. Thirteen specimens were considered false positives by CIA and 17 specimens were classified as acute infections. Eleven of 39 IFA negative/inconclusive specimens were further tested by Multispot. Five of the 11 were positive by Multispot. The fourth generation Abbott ARCHITECT HIV Ag/Ab Combo Assay identified 17 patients who may have been missed by the prior HIV-1 screening assay used at San Diego County Public Health Laboratory. Copyright © 2013 Elsevier B.V. All rights reserved.
Sakurai, Fuminori; Narii, Nobuhiro; Tomita, Kyoko; Togo, Shinsaku; Takahashi, Kazuhisa; Machitani, Mitsuhiro; Tachibana, Masashi; Ouchi, Masaaki; Katagiri, Nobuyoshi; Urata, Yasuo; Fujiwara, Toshiyoshi; Mizuguchi, Hiroyuki
2016-01-01
Circulating tumor cells (CTCs) are promising biomarkers in several cancers, and thus methods and apparatuses for their detection and quantification in the blood have been actively pursued. A novel CTC detection system using a green fluorescence protein (GFP)–expressing conditionally replicating adenovirus (Ad) (rAd-GFP) was recently developed; however, there is concern about the production of false-positive cells (GFP-positive normal blood cells) when using rAd-GFP, particularly at high titers. In addition, CTCs lacking or expressing low levels of coxsackievirus–adenovirus receptor (CAR) cannot be detected by rAd-GFP, because rAd-GFP is constructed based on Ad serotype 5, which recognizes CAR. In order to suppress the production of false-positive cells, sequences perfectly complementary to blood cell–specific microRNA, miR-142-3p, were incorporated into the 3′-untranslated region of the E1B and GFP genes. In addition, the fiber protein was replaced with that of Ad serotype 35, which recognizes human CD46, creating rAdF35-142T-GFP. rAdF35-142T-GFP efficiently labeled not only CAR-positive tumor cells but also CAR-negative tumor cells with GFP. The numbers of false-positive cells were dramatically lower for rAdF35-142T-GFP than for rAd-GFP. CTCs in the blood of cancer patients were detected by rAdF35-142T-GFP with a large reduction in false-positive cells. PMID:26966699
Gregori, Josep; Villarreal, Laura; Sánchez, Alex; Baselga, José; Villanueva, Josep
2013-12-16
The microarray community has shown that the low reproducibility observed in gene expression-based biomarker discovery studies is partially due to relying solely on p-values to get the lists of differentially expressed genes. Their conclusions recommended complementing the p-value cutoff with the use of effect-size criteria. The aim of this work was to evaluate the influence of such an effect-size filter on spectral counting-based comparative proteomic analysis. The results proved that the filter increased the number of true positives and decreased the number of false positives and the false discovery rate of the dataset. These results were confirmed by simulation experiments where the effect size filter was used to evaluate systematically variable fractions of differentially expressed proteins. Our results suggest that relaxing the p-value cut-off followed by a post-test filter based on effect size and signal level thresholds can increase the reproducibility of statistical results obtained in comparative proteomic analysis. Based on our work, we recommend using a filter consisting of a minimum absolute log2 fold change of 0.8 and a minimum signal of 2-4 SpC on the most abundant condition for the general practice of comparative proteomics. The implementation of feature filtering approaches could improve proteomic biomarker discovery initiatives by increasing the reproducibility of the results obtained among independent laboratories and MS platforms. Quality control analysis of microarray-based gene expression studies pointed out that the low reproducibility observed in the lists of differentially expressed genes could be partially attributed to the fact that these lists are generated relying solely on p-values. Our study has established that the implementation of an effect size post-test filter improves the statistical results of spectral count-based quantitative proteomics. 
The results proved that the filter increased the number of true positives while decreasing the number of false positives and the false discovery rate of the datasets. The results presented here prove that a post-test filter applying reasonable effect-size and signal-level thresholds helps to increase the reproducibility of statistical results in comparative proteomic analysis. Furthermore, the implementation of feature-filtering approaches could improve proteomic biomarker discovery initiatives by increasing the reproducibility of results obtained among independent laboratories and MS platforms. This article is part of a Special Issue entitled: Standardization and Quality Control in Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
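The recommended filter (a p-value cutoff plus a minimum |log2 fold change| of 0.8 and a minimum of 2 spectral counts in the most abundant condition) can be sketched as a simple predicate. The pseudocount convention and the example counts below are assumptions for illustration, not details from the paper:

```python
# A sketch of the post-test filter recommended above: a protein passes only
# if it clears a (relaxed) p-value cutoff, shows a minimum signal level in
# its most abundant condition, and has |log2 fold change| >= 0.8.
import math

def passes_filter(p_value, spc_a, spc_b, p_cut=0.05,
                  min_abs_log2fc=0.8, min_spc=2):
    if p_value > p_cut:
        return False
    if max(spc_a, spc_b) < min_spc:          # signal-level threshold
        return False
    # Pseudocount of 1 avoids log of zero; an assumed convention here.
    log2fc = math.log2((spc_a + 1) / (spc_b + 1))
    return abs(log2fc) >= min_abs_log2fc

print(passes_filter(0.01, spc_a=12, spc_b=3))  # True: strong effect
print(passes_filter(0.01, spc_a=6, spc_b=5))   # False: tiny fold change
```

The second example is exactly the kind of statistically significant but small-effect hit the filter is meant to discard.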
2015-01-01
Molecular docking is a powerful tool used in drug discovery and structural biology for predicting the structures of ligand–receptor complexes. However, the accuracy of docking calculations can be limited by factors such as the neglect of protein reorganization in the scoring function; as a result, ligand screening can produce a high rate of false positive hits. Although absolute binding free energy methods still have difficulty in accurately rank-ordering binders, we believe that they can be fruitfully employed to distinguish binders from nonbinders and reduce the false positive rate. Here we study a set of ligands that dock favorably to a newly discovered, potentially allosteric site on the flap of HIV-1 protease. Fragment binding to this site stabilizes a closed form of protease, which could be exploited for the design of allosteric inhibitors. Twenty-three top-ranked protein–ligand complexes from AutoDock were subject to the free energy screening using two methods, the recently developed binding energy analysis method (BEDAM) and the standard double decoupling method (DDM). Free energy calculations correctly identified most of the false positives (≥83%) and recovered all the confirmed binders. The results show a gap averaging ≥3.7 kcal/mol, separating the binders and the false positives. We present a formula that decomposes the binding free energy into contributions from the receptor conformational macrostates, which provides insights into the roles of different binding modes. Our binding free energy component analysis further suggests that improving the treatment for the desolvation penalty associated with the unfulfilled polar groups could reduce the rate of false positive hits in docking. The current study demonstrates that the combination of docking with free energy methods can be very useful for more accurate ligand screening against valuable drug targets. PMID:25189630
Cumulative Incidence of False-Positive Results in Repeated, Multimodal Cancer Screening
Croswell, Jennifer Miller; Kramer, Barnett S.; Kreimer, Aimee R.; Prorok, Phil C.; Xu, Jian-Lun; Baker, Stuart G.; Fagerstrom, Richard; Riley, Thomas L.; Clapp, Jonathan D.; Berg, Christine D.; Gohagan, John K.; Andriole, Gerald L.; Chia, David; Church, Timothy R.; Crawford, E. David; Fouad, Mona N.; Gelmann, Edward P.; Lamerato, Lois; Reding, Douglas J.; Schoen, Robert E.
2009-01-01
PURPOSE Multiple cancer screening tests have been advocated for the general population; however, clinicians and patients are not always well-informed of screening burdens. We sought to determine the cumulative risk of a false-positive screening result and the resulting risk of a diagnostic procedure for an individual participating in a multimodal cancer screening program. METHODS Data were analyzed from the intervention arm of the ongoing Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial, a randomized controlled trial to determine the effects of prostate, lung, colorectal, and ovarian cancer screening on disease-specific mortality. The 68,436 participants, aged 55 to 74 years, were randomized to screening or usual care. Women received serial serum tests to detect cancer antigen 125 (CA-125), transvaginal sonograms, posteroanterior-view chest radiographs, and flexible sigmoidoscopies. Men received serial chest radiographs, flexible sigmoidoscopies, digital rectal examinations, and serum prostate-specific antigen tests. Fourteen screening examinations for each sex were possible during the 3-year screening period. RESULTS After 14 tests, the cumulative risk of having at least 1 false-positive screening test is 60.4% (95% CI, 59.8%–61.0%) for men, and 48.8% (95% CI, 48.1%–49.4%) for women. The cumulative risk after 14 tests of undergoing an invasive diagnostic procedure prompted by a false-positive test is 28.5% (CI, 27.8%–29.3%) for men and 22.1% (95% CI, 21.4%–22.7%) for women. CONCLUSIONS For an individual in a multimodal cancer screening trial, the risk of a false-positive finding is about 50% or greater by the 14th test. Physicians should educate patients about the likelihood of false positives and resulting diagnostic interventions when counseling about cancer screening. PMID:19433838
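Under a simplifying assumption of independent tests with a constant per-test false-positive rate p (the PLCO authors measured the cumulative risk empirically rather than assuming this), the risk of at least one false positive after n tests is 1 − (1 − p)^n. Backing out the implied per-test rate from the men's figure (60.4% after 14 tests):

```python
# Cumulative false-positive risk under an independence assumption:
# P(at least one FP in n tests) = 1 - (1 - p)^n.

def cumulative_fp_risk(p, n):
    return 1 - (1 - p) ** n

# Per-test rate implied by the PLCO men's cumulative risk of 60.4% at n=14:
per_test = 1 - (1 - 0.604) ** (1 / 14)
print(round(per_test, 3))                          # ≈ 0.064
print(round(cumulative_fp_risk(per_test, 14), 3))  # ≈ 0.604
```

That is, a seemingly modest ~6% false-positive rate per screening test compounds to a better-than-even chance of at least one false positive over a multi-year, multimodal screening program.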
Female False Positive Exercise Stress ECG Testing - Fact Versus Fiction.
Fitzgerald, Benjamin T; Scalia, William M; Scalia, Gregory M
2018-03-07
Exercise stress testing is a well validated cardiovascular investigation. Accuracy for treadmill stress electrocardiograph (ECG) testing has been documented at 60%. False positive stress ECGs (exercise ECG changes with non-obstructive disease on anatomical testing) are common, especially in women, limiting the effectiveness of the test. This study investigates the incidence and predictors of false positive stress ECG findings, referenced against stress echocardiography (SE) as a standard. Stress echocardiography was performed using the Bruce treadmill protocol. False positive stress ECG tests were defined as greater than 1 mm of ST depression on ECG during exertion, without pain, with a normal SE. Potential causes for false positive tests were recorded before the test. Three thousand consecutive negative stress echocardiograms (1036 females, 34.5%) were analysed (age 59 ± 14 years). False positive (F+) stress ECGs were documented in 565/3000 tests (18.8%). F+ stress ECGs were equally prevalent in females (194/1036, 18.7%) and males (371/1964, 18.9%, p=0.85 for the difference). Potential causes (hypertension, left ventricular hypertrophy, known coronary disease, arrhythmia, diabetes mellitus, valvular heart disease) were recorded in 36/194 (18.6%) of the female F+ ECG tests and 249/371 (68.2%) of the male F+ ECG tests (p<0.0001 for the difference). These data suggest that F+ stress ECG tests are frequent and equally common in women and men. However, most F+ stress ECGs in men can be predicted before the test, while most in women cannot. Being female may be a risk factor in itself. These data reinforce the value of stress imaging, particularly in women. Copyright © 2018 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). All rights reserved.
Wiwanitkit, Viroj; Udomsantisuk, Nibhond; Boonchalermvichian, Chaiyaporn
2005-06-01
The aim of this study was to evaluate the diagnostic properties of urine Gram stain and urine microscopic examination for screening for urinary tract infection (UTI), and to perform an additional cost utility analysis. This descriptive study was performed on 95 urine samples sent for urine culture to the Department of Microbiology, Faculty of Medicine, Chulalongkorn University. The first part of the study was to determine the diagnostic properties of two screening tests (urine Gram stain and urine microscopic examination). Urine culture was set as the gold standard and the results from both methods were compared to this. The second part of this study was to perform a cost utility analysis. The sensitivity of urine Gram stain was 96.2%, the specificity 93.0%, the positive predictive value 94.3% and the negative predictive value 95.2%. False positives occurred with a frequency of 7.0% and false negatives 3.8%. For the microscopic examination, the sensitivity was 65.4%, specificity 74.4%, positive predictive value 75.6% and negative predictive value 64.0%. False positives occurred with a frequency of 25.6% and false negatives 34.6%. Combining urine Gram stain and urine microscopic examination, the sensitivity was 98.1%, specificity 74.4%, positive predictive value 82.3% and negative predictive value 97.0%. False positives occurred with a frequency of 25.6% and false negatives 1.9%. However, the cost per utility of the combined method was higher than either urine microscopic examination or urine Gram stain alone. Urine Gram stain provided the lowest cost per utility. Economically, urine Gram stain is the proper screening tool for presumptive diagnosis of UTI.
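The combined-test figures above behave like a parallel ("positive if either test is positive") combination. A sketch under an independence assumption the authors do not state: the predicted combined sensitivity matches the observed 98.1%, while the observed combined specificity (74.4%) exceeds the independence prediction, suggesting the two methods' false positives overlapped substantially:

```python
# Parallel combination of two screening tests: a sample is called positive
# if EITHER test is positive. Under independence, sensitivity rises
# (both tests must miss) and specificity falls (both must be negative).

def parallel_combo(sens1, spec1, sens2, spec2):
    sens = 1 - (1 - sens1) * (1 - sens2)   # misses only if both miss
    spec = spec1 * spec2                   # clears only if both clear
    return sens, spec

# Reported single-test values: Gram stain 96.2%/93.0%, microscopy 65.4%/74.4%
sens, spec = parallel_combo(0.962, 0.930, 0.654, 0.744)
print(round(sens, 3))  # 0.987, close to the observed combined 98.1%
print(round(spec, 3))  # 0.692, below the observed combined 74.4%
```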
Wakefield, Jerome C; First, Michael B
2012-02-01
The Diagnostic and Statistical Manual of Mental Disorders (DSM) definition of mental disorder requires that symptoms be caused by a dysfunction in the individual; when dysfunction is absent, symptoms represent normal-range distress or eccentricity and, if diagnosed as a mental disorder, are false positives. We hypothesized that because of psychiatry's lack of direct laboratory tests to distinguish dysfunction from normal-range distress, the context in which symptoms occur (eg, lack of imminent danger in a panic attack) is often essential to determining whether symptoms are caused by a dysfunction. If this is right, then the DSM diagnostic criteria should include many contextual criteria added to symptom syndromes to prevent dysfunction false positives. Despite their potential importance, such contextual criteria have not been previously reviewed. We, thus, systematically reviewed DSM categories to establish the extent of such uses of contextual criteria and created a typology of such uses. Of 111 sampled categories, 68 (61%) used context to prevent dysfunction false positives. Contextual criteria fell into 7 types: (1) exclusion of specific false-positive scenarios; (2) requiring that patients experience preconditions for normal responses (eg, requiring that individuals experience adequate sexual stimulation before being diagnosed with sexual dysfunctions); (3) requiring that symptoms be disproportionate relative to circumstances; (4) for childhood disorders, requiring that symptoms be developmentally inappropriate; (5) requiring that symptoms occur in multiple contexts; (6) requiring a substantial discrepancy between beliefs and reality; and (7) a residual category. Most DSM categories include contextual criteria to eliminate false-positive diagnoses and increase validity of descriptive criteria. Future revisions should systematically evaluate each category's need for contextual criteria. Copyright © 2012 Elsevier Inc. All rights reserved.
Genome-wide signals of positive selection in human evolution
Enard, David; Messer, Philipp W.; Petrov, Dmitri A.
2014-01-01
The role of positive selection in human evolution remains controversial. On the one hand, scans for positive selection have identified hundreds of candidate loci, and the genome-wide patterns of polymorphism show signatures consistent with frequent positive selection. On the other hand, recent studies have argued that many of the candidate loci are false positives and that most genome-wide signatures of adaptation are in fact due to reduction of neutral diversity by linked deleterious mutations, known as background selection. Here we analyze human polymorphism data from the 1000 Genomes Project and detect signatures of positive selection once we correct for the effects of background selection. We show that levels of neutral polymorphism are lower near amino acid substitutions, with the strongest reduction observed specifically near functionally consequential amino acid substitutions. Furthermore, amino acid substitutions are associated with signatures of recent adaptation that should not be generated by background selection, such as unusually long and frequent haplotypes and specific distortions in the site frequency spectrum. We use forward simulations to argue that the observed signatures require a high rate of strongly adaptive substitutions near amino acid changes. We further demonstrate that the observed signatures of positive selection correlate better with the presence of regulatory sequences, as predicted by the ENCODE Project Consortium, than with the positions of amino acid substitutions. Our results suggest that adaptation was frequent in human evolution and provide support for the hypothesis of King and Wilson that adaptive divergence is primarily driven by regulatory changes. PMID:24619126
NASA Astrophysics Data System (ADS)
de Oliveira, Helder C. R.; Mencattini, Arianna; Casti, Paola; Martinelli, Eugenio; di Natale, Corrado; Catani, Juliana H.; de Barros, Nestor; Melo, Carlos F. E.; Gonzaga, Adilson; Vieira, Marcelo A. C.
2018-02-01
This paper proposes a method to reduce the number of false positives (FP) in a computer-aided detection (CAD) scheme for automated detection of architectural distortion (AD) in digital mammography. AD is a subtle contraction of breast parenchyma that may represent an early sign of breast cancer. Due to its subtlety and variability, AD is more difficult to detect than microcalcifications and masses, and is commonly found in retrospective evaluations of false-negative mammograms. Several computer-based systems have been proposed for automated detection of AD in breast images. The usual approach is to automatically detect possible sites of AD in a mammographic image (segmentation step) and then use a classifier to eliminate the false positives and identify the suspicious regions (classification step). This paper focuses on optimizing the segmentation step to reduce the number of FPs that are passed as input to the classifier. The proposal is to use statistical measurements to score the segmented regions and then apply a threshold to select a small number of regions to be submitted to the classification step, improving the detection performance of the CAD scheme. We evaluated 12 image features to score and select suspicious regions in 74 clinical Full-Field Digital Mammography (FFDM) images. All images in this dataset contained at least one region with AD previously marked by an expert radiologist. The results showed that the proposed method can reduce the false positives of the segmentation step of the CAD scheme from 43.4 FP per image to 34.5 FP per image, without increasing the number of false negatives.
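The score-and-threshold selection step described in this abstract can be sketched as follows. This is a minimal illustration with made-up region scores and a hypothetical threshold; the paper's 12 statistical image features are not reproduced here.

```python
# Sketch of the score-and-threshold step: keep only segmented regions whose
# statistical score passes a threshold, so fewer false positives reach the
# classification step. Scores and threshold below are invented.
def select_suspicious_regions(regions, score_fn, threshold):
    """Return the regions whose score is at least `threshold`."""
    return [region for region in regions if score_fn(region) >= threshold]

# Toy example: each "region" is reduced to a single mean-intensity value.
regions = [0.91, 0.15, 0.72, 0.08, 0.55]
kept = select_suspicious_regions(regions, score_fn=lambda r: r, threshold=0.5)
```

In a real CAD pipeline, `score_fn` would compute a statistical feature over the pixel data of each segmented region rather than pass the value through.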
Basketter, David A; Gerberick, G Frank; Kimber, Ian
2007-01-01
The local lymph node assay (LLNA) is being used increasingly in the identification of skin-sensitizing chemicals for regulatory purposes. In the context of new chemicals legislation (REACH) in Europe, it is the preferred assay. The rationale for this is that the LLNA offers a quantitative and objective approach to skin sensitization testing, allied with the important animal welfare benefits of the method. However, as with certain guinea pig sensitization tests before it, this increasing use brings experience with an increasingly wide range of industrial and other chemicals, where the outcome of the assay does not always meet the expectations of those conducting it. Sometimes the result appears to be a false negative, but rather more commonly the complaint is that the chemical represents a false positive. Against this background, we have reviewed a number of instances where false positive and false negative results have been described and have sought to reconcile science with expectation. Based on these analyses, it is our conclusion that false positives and false negatives do occur in the LLNA, as they do with any other skin sensitization assay (and indeed with all tests used for hazard identification), and that this occurs for a number of reasons. We further conclude, however, that false positive results in the LLNA, as with the guinea pig maximization test, arise most commonly via a failure to distinguish what is scientifically correct from what is unpalatable. The consequences of this confusion are discussed in the article, particularly in relation to the need to integrate both potency measurement and risk assessments into classification and labelling schemes that aim to manage potential risks to human health.
Miller, David A W; Nichols, James D; Gude, Justin A; Rich, Lindsey N; Podruzny, Kevin M; Hines, James E; Mitchell, Michael S
2013-01-01
Large-scale presence-absence monitoring programs have great promise for many conservation applications. Their value can be limited by potential incorrect inferences owing to observational errors, especially when data are collected by the public. To combat this, previous analytical methods have focused on addressing non-detection in public survey data. Misclassification errors have received less attention but are also likely to be a common component of public surveys, as well as of many other data types. We derive estimators for dynamic occupancy parameters (extinction and colonization), focusing on the case where certainty can be assumed for a subset of detections. We demonstrate how to simultaneously account for non-detection (false negatives) and misclassification (false positives) when estimating occurrence parameters for gray wolves in northern Montana from 2007 to 2010. Our primary data source for the analysis was observations by deer and elk hunters, reported as part of the state's annual hunter survey. These data were supplemented with known locations of radio-collared wolves. We found that occupancy was relatively stable during the years of the study and that wolves were largely restricted to the highest-quality habitats in the study area. Transitions in the occupancy status of sites were rare, as occupied sites almost always remained occupied and unoccupied sites remained unoccupied. Failing to account for false positives led to overestimation of both the area inhabited by wolves and the frequency of turnover. The ability to properly account for both false negatives and false positives is an important step toward improving inferences for conservation from large-scale public surveys. The approach we propose will improve our understanding of the status of wolf populations and is relevant to many other data types in which false positives are a component of observations.
McClintock, Brett T.; Bailey, Larissa L.; Pollock, Kenneth H.; Simons, Theodore R.
2010-01-01
The recent surge in the development and application of species occurrence models has been associated with an acknowledgment among ecologists that species are detected imperfectly due to observation error. Standard models now allow unbiased estimation of occupancy probability when false negative detections occur, but this is conditional on no false positive detections and sufficient incorporation of explanatory variables for the false negative detection process. These assumptions are likely reasonable in many circumstances, but there is mounting evidence that false positive errors and detection probability heterogeneity may be much more prevalent in studies relying on auditory cues for species detection (e.g., songbird or calling amphibian surveys). We used field survey data from a simulated calling anuran system of known occupancy state to investigate the biases induced by these errors in dynamic models of species occurrence. Despite the participation of expert observers in simplified field conditions, both false positive errors and site detection probability heterogeneity were extensive for most species in the survey. We found that even low levels of false positive errors, constituting as little as 1% of all detections, can cause severe overestimation of site occupancy, colonization, and local extinction probabilities. Further, unmodeled detection probability heterogeneity induced substantial underestimation of occupancy and overestimation of colonization and local extinction probabilities. Completely spurious relationships between species occurrence and explanatory variables were also found. Such misleading inferences would likely have deleterious implications for conservation and management programs. We contend that all forms of observation error, including false positive errors and heterogeneous detection probabilities, must be incorporated into the estimation framework to facilitate reliable inferences about occupancy and its associated vital rate parameters.
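The core bias mechanism described here, that even rare false positive detections severely inflate naive occupancy estimates, can be illustrated with a small simulation. This is a sketch with arbitrary, hypothetical parameter values, not the authors' field system or estimation models.

```python
import random

def simulate_naive_occupancy(n_sites=10000, psi=0.3, p_det=0.5,
                             p_fp=0.01, n_visits=5, seed=1):
    """Simulate detection histories with a small per-visit false-positive
    probability at unoccupied sites, and return the naive occupancy
    estimate: the fraction of sites with at least one detection.
    All parameter values are illustrative."""
    rng = random.Random(seed)
    detected_sites = 0
    for _ in range(n_sites):
        occupied = rng.random() < psi
        p = p_det if occupied else p_fp  # false positives at empty sites
        if any(rng.random() < p for _ in range(n_visits)):
            detected_sites += 1
    return detected_sites / n_sites

est = simulate_naive_occupancy()
```

With these parameters the naive estimate lands above the true occupancy probability of 0.3, even though false positives occur on only about 1% of visits to empty sites, while the same estimator with `p_fp=0` is pulled in the opposite direction by non-detection alone.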
Finkel, Eli J; Eastwick, Paul W; Reis, Harry T
2015-02-01
In recent years, a robust movement has emerged within psychology to increase the evidentiary value of our science. This movement, which has analogs throughout the empirical sciences, is broad and diverse, but its primary emphasis has been on the reduction of statistical false positives. The present article addresses epistemological and pragmatic issues that we, as a field, must consider as we seek to maximize the scientific value of this movement. Regarding epistemology, this article contrasts the false-positives-reduction (FPR) approach with an alternative, the error balance (EB) approach, which argues that any serious consideration of optimal scientific practice must contend simultaneously with both false-positive and false-negative errors. Regarding pragmatics, the movement has devoted a great deal of attention to issues that frequently arise in laboratory experiments and one-shot survey studies, but it has devoted less attention to issues that frequently arise in intensive and/or longitudinal studies. We illustrate these epistemological and pragmatic considerations with the case of relationship science, one of the many research domains that frequently employ intensive and/or longitudinal methods. Specifically, we examine 6 research prescriptions that can help to reduce false-positive rates: preregistration, prepublication sharing of materials, postpublication sharing of data, close replication, avoiding piecemeal publication, and increasing sample size. For each, we offer concrete guidance not only regarding how researchers can improve their research practices and balance the risk of false-positive and false-negative errors, but also how the movement can capitalize upon insights from research practices within relationship science to make the movement stronger and more inclusive. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Are the memories of older adults positively biased?
Fernandes, Myra; Ross, Michael; Wiegand, Melanie; Schryer, Emily
2008-06-01
There is disagreement in the literature about whether a "positivity effect" in memory performance exists in older adults. To assess the generalizability of the effect, the authors examined memory for autobiographical, picture, and word information in a group of younger (17-29 years old) and older (60-84 years old) adults. For the autobiographical memory task, the authors asked participants to produce 4 positive, 4 negative, and 4 neutral recent autobiographical memories and to recall these a week later. For the picture and word tasks, participants studied photos or words of different valences (positive, negative, neutral) and later remembered them on a free-recall test. The authors found significant correlations in memory performance, across task material, for recall of both positive and neutral valence autobiographical events, pictures, and words. When the authors examined accurate memories, they failed to find consistent evidence, across the different types of material, of a positivity effect in either age group. However, the false memory findings offer more consistent support for a positivity effect in older adults. During recall of all 3 types of material, older participants recalled more false positive than false negative memories.
48 CFR 53.105 - Computer generation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Computer generation. 53...) CLAUSES AND FORMS FORMS General 53.105 Computer generation. (a) Agencies may computer-generate the... be computer generated by the public. Unless prohibited by agency regulations, forms prescribed by...
Analysis of Document Authentication Technique using Soft Magnetic Fibers
NASA Astrophysics Data System (ADS)
Aoki, Ayumi; Ikeda, Takashi; Yamada, Tsutomu; Takemura, Yasushi; Matsumoto, Tsutomu
An artifact-metric system using magnetic fibers can be applied to the authentication of stock certificates, bills, passports, plastic cards and other documents. The security of the system rests on the difficulty of copying it. This authentication system is based on magnetic fibers randomly dispersed and embedded in documents. In this paper, a theoretical analysis was performed in order to evaluate this system. The positions of the magnetic fibers were determined by a conventional random number generator. From the output waveforms measured by a magnetoresistance (MR) sensor, a false match rate (FMR) could be calculated. The density of the magnetic fibers and the dimensions of the MR sensor were optimized.
Diagnostic accuracy of blood centers in the screening of blood donors for viral markers
Dogbe, Elliot Eli; Arthur, Fareed
2015-01-01
Introduction Blood transfusion remains a life-saving intervention in almost all healthcare facilities worldwide. Screening of blood donors/blood units is done in almost every blood bank facility before the blood units/blood components are transfused, to prevent transfusion-transmissible infections. The kind of testing kits or methods used by a facility and the technical expertise of its personnel greatly affect the facility's screening results. This study aimed to evaluate the diagnostic accuracy of five hospital-based blood bank testing facilities in Ghana (Komfo Anokye Teaching Hospital [KATH], Kwame Nkrumah University of Science and Technology [KNUST], Agogo, Bekwai and Sunyani) that used rapid immunochromatographic assays (RIA) to screen blood donors/blood units. Methods Blood samples (300) from the five testing facilities and their screening results for hepatitis B surface antigen (HBsAg) and antibodies to hepatitis C virus (HCV) and human immunodeficiency virus (HIV) using RIAs were obtained. All the samples were then analysed for the three viral markers using a 3rd-generation enzyme-linked immunosorbent assay (ELISA) kit as the gold standard. Results The mean false-positive rate for HBsAg was 2.2%, with the Bekwai testing facility having the highest, at 4.4%. For HCV, the mean false-positive rate was 2.8%, with the Agogo and Bekwai testing facilities having the highest, at 8.7% each. For HIV screening, the mean false-positive rate was 11.1%, with the Bekwai testing facility having the highest, at 28.0%. The mean false-negative rates for the facilities were 3.0% for HBV, 75.0% for HCV and 0.0% for HIV, with KATH having the highest for HBV at 6.3%, Bekwai having the highest for HCV at 100%, and no facility showing false negatives for HIV. Mean sensitivity of the screening procedure across the facilities was 97.0%, 25.0% and 100.0%, whilst the mean specificity was 97.8%, 97.2% and 88.9%, for HBV, HCV and HIV respectively.
Statistical comparison among the testing facilities showed no significant differences among the various testing centres for HBV screening; however, significant differences were obtained for HCV and HIV screening. Conclusion This study has shown that there is no standardised screening procedure for blood bank testing facilities in the country. There is therefore an urgent need for an internal and external control body to oversee screening procedures in blood banks across the country. PMID:26090067
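The sensitivity/specificity arithmetic behind results like these can be sketched from a 2x2 comparison against the gold-standard assay. The counts below are illustrative only, chosen to reproduce the quoted HBV means of 97.0% sensitivity and 97.8% specificity; they are not the study's data.

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and false-positive/false-negative rates
    from true/false positive and negative counts versus a gold standard."""
    sensitivity = tp / (tp + fn)   # fraction of infected donors detected
    specificity = tn / (tn + fp)   # fraction of uninfected donors cleared
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "false_positive_rate": 1 - specificity,
        "false_negative_rate": 1 - sensitivity,
    }

# Hypothetical counts: 97 true positives, 3 false negatives,
# 978 true negatives, 22 false positives.
m = screening_metrics(tp=97, fp=22, tn=978, fn=3)
```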
Serological diagnosis of bovine brucellosis using B. melitensis strain B115.
Corrente, Marialaura; Desario, Costantina; Parisi, Antonio; Grandolfo, Erika; Scaltrito, Domenico; Vesco, Gesualdo; Colao, Valeriana; Buonavoglia, Domenico
2015-12-01
Bovine brucellosis is diagnosed by official tests, such as Rose Bengal plate test (RBPT) and Complement Fixation test (CFT). Both tests detect antibodies directed against the lipolysaccharide (LPS) of Brucella cell wall. Despite their good sensitivity, those tests do not discriminate between true positive and false positive serological reactions (FPSR), the latter being generated by animals infected with other Gram negative microorganisms that share components of Brucella LPS. In this study, an antigenic extract from whole Brucella melitensis B115 strain was used to set up an ELISA assay for the serological diagnosis of bovine brucellosis. A total of 148 serum samples from five different groups of animals were tested: Group A: 28 samples from two calves experimentally infected with Yersinia enterocolitica O:9; Group B: 30 samples from bovines infected with Brucella abortus; Group C: 50 samples from brucellosis-free herds; Group D: 20 samples RBPT positive and CFT negative; Group E: 20 samples both RBPT and CFT positive. Group D and Group E serum samples were from brucellosis-free herds. Positive reactions were detected only by RBPT and CFT in calves immunized with Y. enterocolitica O:9. Sera from Group B animals tested positive also in the ELISA assay, whereas sera from the remaining groups were all negative. The results obtained encourage the use of the ELISA assay to implement the serological diagnosis of brucellosis. Copyright © 2015 Elsevier B.V. All rights reserved.
Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.
2014-01-01
Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
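The classical Benjamini and Hochberg (1995) linear step-up procedure that this article uses as its baseline comparison can be sketched in a few lines. The p-values below are illustrative; this is not the resampling-based empirical Bayes method the article proposes.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg linear step-up FDR control: sort the p-values,
    find the largest k with p_(k) <= k*q/n, and reject the hypotheses with
    the k smallest p-values. Returns the indices of rejected hypotheses."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / n:
            k_max = rank
    return sorted(order[:k_max])

rejected = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042,
                               0.06, 0.074, 0.205, 0.212, 0.216], q=0.05)
```

Note the step-up structure: a p-value can fail its own threshold yet still be rejected if some larger p-value passes its threshold further down the sorted list.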
Standoff detection of chemical and biological threats using laser-induced breakdown spectroscopy.
Gottfried, Jennifer L; De Lucia, Frank C; Munson, Chase A; Miziolek, Andrzej W
2008-04-01
Laser-induced breakdown spectroscopy (LIBS) is a promising technique for real-time chemical and biological warfare agent detection in the field. We have demonstrated the detection and discrimination of the biological warfare agent surrogates Bacillus subtilis (BG) (2% false negatives, 0% false positives) and ovalbumin (0% false negatives, 1% false positives) at 20 meters using standoff laser-induced breakdown spectroscopy (ST-LIBS) and linear correlation. Unknown interferent samples (not included in the model), samples on different substrates, and mixtures of BG and Arizona road dust have been classified with reasonable success using partial least squares discriminant analysis (PLS-DA). A few of the samples tested, such as the soot (not included in the model) and the 25% BG:75% dust mixture, resulted in a significant number of false positives or false negatives, respectively. Our preliminary results indicate that while LIBS is able to discriminate biomaterials with similar elemental compositions at standoff distances, based on differences in key intensity ratios, further work is needed to reduce the number of false positives/negatives by refining the PLS-DA model to include a sufficient range of material classes and by carefully selecting a detection threshold. In addition, we have demonstrated that LIBS can distinguish five different organophosphate nerve agent simulants at 20 meters, despite their similar stoichiometric formulas. Finally, a combined PLS-DA model for chemical, biological, and explosives detection using a single ST-LIBS sensor has been developed in order to demonstrate the potential of standoff LIBS for universal hazardous materials detection.
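The linear-correlation classification mentioned above can be sketched as follows: assign an unknown spectrum to the library entry with the highest Pearson correlation, and reject it (trading false positives for false negatives) when no correlation clears a threshold. The toy spectra and threshold are invented; this is not the authors' ST-LIBS processing.

```python
import math

def pearson(x, y):
    """Pearson linear correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def classify_spectrum(spectrum, library, threshold=0.9):
    """Return the library label with the highest correlation to `spectrum`,
    or None if no correlation exceeds `threshold` (a rejection)."""
    best_name, best_r = None, threshold
    for name, ref in library.items():
        r = pearson(spectrum, ref)
        if r > best_r:
            best_name, best_r = name, r
    return best_name

# Hypothetical 4-channel "spectra" standing in for full LIBS emission lines.
library = {"BG": [1.0, 5.0, 2.0, 8.0], "ovalbumin": [4.0, 1.0, 6.0, 2.0]}
label = classify_spectrum([1.1, 4.8, 2.2, 7.9], library)
```

The threshold is the detection-threshold knob the abstract refers to: raising it suppresses false positives from interferents at the cost of more false negatives.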
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Lawrence R.; Hoyt, David W.; Walker, S. Michael
We present a novel approach to improve the accuracy of metabolite identification by combining direct infusion ESI MS1 with 1D 1H NMR spectroscopy. The new approach first applies a standard 1D 1H NMR metabolite identification protocol by matching the chemical shift, J-coupling and intensity information of experimental NMR signals against the NMR signals of standard metabolites in a metabolomics library. This generates a list of candidate metabolites. The list contains false positive and ambiguous identifications. Next, we constrained the list with the chemical formulas derived from a high-resolution direct infusion ESI MS1 spectrum of the same sample. Detection of the signals of a metabolite both in NMR and MS significantly improves the confidence of identification and eliminates false positive identifications. 1D 1H NMR and direct infusion ESI MS1 spectra of a sample can be acquired in parallel in several minutes. This is highly beneficial for rapid and accurate screening of hundreds of samples in high-throughput metabolomics studies. In order to make this approach practical, we developed a software tool, which is integrated into Chenomx NMR Suite. The approach is demonstrated on a model mixture, tomato and Arabidopsis thaliana metabolite extracts, and human urine.
Tourassi, Georgia D; Harrawood, Brian; Singh, Swatee; Lo, Joseph Y; Floyd, Carey E
2007-01-01
The purpose of this study was to evaluate image similarity measures employed in an information-theoretic computer-assisted detection (IT-CAD) scheme. The scheme was developed for content-based retrieval and detection of masses in screening mammograms. The study is aimed toward an interactive clinical paradigm where physicians query the proposed IT-CAD scheme on mammographic locations that are either visually suspicious or indicated as suspicious by other cuing CAD systems. The IT-CAD scheme provides an evidence-based, second opinion for query mammographic locations using a knowledge database of mass and normal cases. In this study, eight entropy-based similarity measures were compared with respect to retrieval precision and detection accuracy using a database of 1820 mammographic regions of interest. The IT-CAD scheme was then validated on a separate database for false positive reduction of progressively more challenging visual cues generated by an existing, in-house mass detection system. The study showed that the image similarity measures fall into one of two categories; one category is better suited to the retrieval of semantically similar cases while the second is more effective with knowledge-based decisions regarding the presence of a true mass in the query location. In addition, the IT-CAD scheme yielded a substantial reduction in false-positive detections while maintaining high detection rate for malignant masses.
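Mutual information is a representative entropy-based similarity measure of the kind compared in this study. The sketch below computes it over quantized intensity lists standing in for mammographic regions of interest; it is an assumed illustration, not the authors' implementation or their eight specific measures.

```python
from collections import Counter
from math import log2

def mutual_information(a, b, bins=8):
    """Mutual information (in bits) between two equal-length intensity
    lists with values in [0, 1], estimated from their joint gray-level
    histogram. Higher values indicate greater statistical similarity."""
    qa = [min(int(v * bins), bins - 1) for v in a]  # quantize to bins
    qb = [min(int(v * bins), bins - 1) for v in b]
    n = len(a)
    pa, pb = Counter(qa), Counter(qb)
    pab = Counter(zip(qa, qb))
    return sum((c / n) * log2((c / n) / ((pa[i] / n) * (pb[j] / n)))
               for (i, j), c in pab.items())
```

A query region scores high against knowledge-base cases whose gray-level structure predicts its own, which is how such measures support both retrieval of similar cases and mass-versus-normal decisions.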
Wollert, Richard; Cramer, Elliot
2011-01-01
Psychiatrist and Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) text editor Michael First has criticized the addition of victim counts to criteria proposed by the Paraphilia Sub-Workgroup for inclusion in DSM-5, because they will increase false-positive diagnoses. Psychologist and Chair of the DSM-5 Paraphilia Sub-Workgroup Ray Blanchard responded by publishing a study of pedohebephiles and teleiophiles which seemed to show that victim counts could accurately identify pedohebephiles selected per self-report and phallometric testing. His analysis was flawed because it did not conform to conventional clinical practice and because he sampled groups at opposite ends of the clinical spectrum. In an analysis of his full sample, we found that the false-positive rate for pedohebephilia at the recommended victim-count selection points was indeed very large. Why? Because data analyses that eliminate intermediate data points will generate inflated estimates of correlation coefficients, base rates, and the discriminative capacity of predictor variables. This principle is also relevant for understanding the flaws in previous research that led Hanson and Bussiere to conclude that sexual recidivism was correlated with "sexual interest in children as measured by phallometric assessment." The credibility of mental health professionals rests on the reliability of their research. Conducting, publishing, and citing research that reflects … Copyright © 2011 John Wiley & Sons, Ltd.
McAuliffe, Gary N; Taylor, Susan L; Drinković, Dragana; Roberts, Sally A; Wilson, Elizabeth M; Best, Emma J
2018-01-01
In July 2014, New Zealand introduced universal infant vaccination with RotaTeq (Merck & Co.), administered as 3 doses at 6 weeks, 3 months and 5 months of age. We sought to assess the impact of rotavirus vaccination on gastroenteritis (GE) hospitalizations in the greater Auckland region and to analyze changes in rotavirus testing in the period around vaccine introduction. Hospitalizations, laboratory testing rates and methods were compared between the pre-vaccine period (2009-2013), the post-vaccine period (January 2015 to December 2015) and the year of vaccine introduction (2014). There was a 68% decline in rotavirus hospitalizations of children <5 years of age after vaccine introduction (from 258/100,000 to 83/100,000) and a 17% decline in all-cause gastroenteritis admissions (from 1815/100,000 to 1293/100,000). Reductions were also seen in pediatric groups too old to have received the vaccine. Despite these changes, rotavirus testing rates in our region remained static in the year after vaccine introduction compared with the 2 prior years, and after vaccine introduction we observed a high rate of false positives, 19/58 (33%), among patients with reactive rotavirus tests. Rotavirus vaccine has had a significant early impact on gastroenteritis hospitalizations for children in the Auckland region. However, continued rotavirus testing at pre-vaccine rates risks generating false-positive results. Laboratories and clinicians should consider reviewing their testing algorithms before vaccine introduction.
Banks, Emily; Reeves, Gillian; Beral, Valerie; Bull, Diana; Crossley, Barbara; Simmonds, Moya; Hilton, Elizabeth; Bailey, Stephen; Barrett, Nigel; Briers, Peter; English, Ruth; Jackson, Alan; Kutt, Elizabeth; Lavelle, Janet; Rockall, Linda; Wallis, Matthew G; Wilson, Mary; Patnick, Julietta
2006-01-01
Introduction Current and recent users of hormone replacement therapy (HRT) have an increased risk of being recalled to assessment at mammography without breast cancer being diagnosed ('false positive recall'), but there is limited information on the effects of different patterns of HRT use on this. The aim of this study is to investigate in detail the relationship between patterns of use of HRT and false positive recall. Methods A total of 87,967 postmenopausal women aged 50 to 64 years attending routine breast cancer screening at 10 UK National Health Service Breast Screening Units from 1996 to 1998 joined the Million Women Study by completing a questionnaire before screening and were followed for their screening outcome. Results Overall, 399 (0.5%) participants were diagnosed with breast cancer and 2,629 (3.0%) had false positive recall. Compared to never users of HRT, the adjusted relative risk (95% CI) of false positive recall was: 1.62 (1.43–1.83), 1.80 (1.62–2.01) and 0.76 (0.52–1.10) in current users of oestrogen-only HRT, oestrogen-progestagen HRT and tibolone, respectively (p (heterogeneity) < 0.0001); 1.65 (1.43–1.91), 1.49 (1.22–1.81) and 2.11 (1.45–3.07) for current HRT used orally, transdermally or via an implant, respectively (p (heterogeneity) = 0.2); and 1.84 (1.67–2.04) and 1.75 (1.49–2.06) for sequential and continuous oestrogen-progestagen HRT, respectively (p (heterogeneity) = 0.6). The relative risk of false positive recall among current users appeared to increase with increasing time since menopause, but did not vary significantly according to any other factors examined, including duration of use, hormonal constituents, dose, whether single- or two-view screening was used, or the woman's personal characteristics. Conclusion Current use of oestrogen-only and oestrogen-progestagen HRT, but not tibolone, increases the risk of false positive recall at screening. PMID:16417651
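For readers less familiar with the relative-risk figures quoted, an unadjusted relative risk with a 95% confidence interval can be computed from 2x2 counts as follows (the Katz log-normal approximation). The counts are invented; the study's estimates were adjusted through statistical modeling, which this sketch does not attempt.

```python
from math import exp, log, sqrt

def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Unadjusted relative risk and its 95% CI via the log-normal
    approximation: SE of log(RR) = sqrt(1/a - 1/n1 + 1/c - 1/n0)."""
    r1 = events_exposed / n_exposed        # risk in exposed group
    r0 = events_unexposed / n_unexposed    # risk in unexposed group
    rr = r1 / r0
    se = sqrt(1 / events_exposed - 1 / n_exposed
              + 1 / events_unexposed - 1 / n_unexposed)
    return rr, exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se)

# Hypothetical counts: 80 false-positive recalls among 2000 HRT users,
# 400 among 20000 never users.
rr, lo, hi = relative_risk(80, 2000, 400, 20000)
```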
False-positive cerebrospinal fluid cryptococcus antigen in Libman-Sacks endocarditis.
Isseh, Iyad N; Bourgi, Kassem; Nakhle, Asaad; Ali, Mahmoud; Zervos, Marcus J
2016-12-01
Cryptococcus meningoencephalitis is a serious opportunistic infection associated with high morbidity and mortality in immunocompromised hosts, particularly patients with advanced AIDS. The diagnosis is established through cerebrospinal fluid (CSF) cryptococcus antigen detection and cultures. Cryptococcus antigen testing is usually the initial test of choice due to its high sensitivity and specificity along with the quick availability of the results. We report a case of a false-positive CSF cryptococcus antigen assay in a patient with systemic lupus erythematosus presenting with acute confusion. While the initial CSF evaluation revealed a positive cryptococcus antigen assay, the patient's symptoms were inconsistent with cryptococcus meningoencephalitis. A repeat CSF evaluation, done 3 days later, revealed a negative CSF cryptococcus antigen assay. Given the patient's active lupus disease and the elevated antinuclear antibody titers, we believe that the initial positive result was a false positive caused by interference from autoantibodies.
Mahfouz, Ayman; Naji, Meeran; Mok, Wing Yan; Taghi, Ali S; Win, Zarni
2015-09-01
A false-positive uptake of F18-fluorodeoxyglucose (FDG) on positron-emission tomography/computed tomography (PET/CT) can result in confusion and misinterpretation of scans. Such uptakes have been previously described after injection of polytetrafluoroethylene (Teflon) into the vocal folds. Similarly, vocal fold injection of silicone elastomer (Silastic) can result not only in a false-positive FDG uptake on PET/CT, but also in chronic inflammation. We report a case of increased FDG uptake in a vocal fold after Silastic injection that was misinterpreted as a malignancy in a 70-year-old woman who had metastatic carcinoma of the stomach.
HIV misdiagnosis in sub-Saharan Africa: performance of diagnostic algorithms at six testing sites
Kosack, Cara S.; Shanks, Leslie; Beelaert, Greet; Benson, Tumwesigye; Savane, Aboubacar; Ng’ang’a, Anne; Andre, Bita; Zahinda, Jean-Paul BN; Fransen, Katrien; Page, Anne-Laure
2017-01-01
Introduction: We evaluated the diagnostic accuracy of HIV testing algorithms at six programmes in five sub-Saharan African countries. Methods: In this prospective multisite diagnostic evaluation study (Conakry, Guinea; Kitgum, Uganda; Arua, Uganda; Homa Bay, Kenya; Douala, Cameroon; and Baraka, Democratic Republic of Congo), samples from clients (five years of age or older) testing for HIV were collected and compared to a state-of-the-art algorithm from the AIDS reference laboratory at the Institute of Tropical Medicine, Belgium. The reference algorithm consisted of an enzyme-linked immunosorbent assay, a line immunoassay, a single-antigen enzyme immunoassay and a DNA polymerase chain reaction test. Results: Between August 2011 and January 2015, over 14,000 clients were tested for HIV at the six HIV counselling and testing sites. Of those, 2786 (median age: 30; 38.1% males) were included in the study. Sensitivity of the testing algorithms ranged from 89.5% in Arua to 100% in Douala and Conakry, while specificity ranged from 98.3% in Douala to 100% in Conakry. Overall, 24 (0.9%) clients, and as many as 8 per site (1.7%), were misdiagnosed, with 16 false-positive and 8 false-negative results. Six false-negative specimens were retested with the on-site algorithm on the same sample and were found to be positive. Conversely, 13 false-positive specimens were retested: 8 remained false-positive with the on-site algorithm. Conclusions: The performance of algorithms at several sites failed to meet expectations and thresholds set by the World Health Organization, with unacceptably high rates of false results. Alongside the careful selection of rapid diagnostic tests and the validation of algorithms, strictly observing correct procedures can reduce the risk of false results. In the meantime, to identify false-positive diagnoses at initial testing, patients should be retested upon initiating antiretroviral therapy. PMID:28691437
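Accuracy figures like the sensitivity, specificity, and misdiagnosis rates reported above follow from a standard confusion-matrix calculation; a minimal sketch with hypothetical counts (illustrative only, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Return (sensitivity, specificity, PPV) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true positives among all truly infected
    specificity = tn / (tn + fp)   # true negatives among all truly uninfected
    ppv = tp / (tp + fp)           # true infections among all test positives
    return sensitivity, specificity, ppv

# Hypothetical counts for a single testing site:
sens, spec, ppv = diagnostic_metrics(tp=85, fp=17, tn=980, fn=10)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} PPV={ppv:.3f}")
# → sensitivity=0.895 specificity=0.983 PPV=0.833
```

Note how a specificity near 98% still drags the PPV down to roughly 83% here: false positives are drawn from the much larger uninfected pool, which is the pattern the study reports.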
How does negative emotion cause false memories?
Brainerd, C J; Stein, L M; Silveira, R A; Rohenkohl, G; Reyna, V F
2008-09-01
Remembering negative events can stimulate high levels of false memory, relative to remembering neutral events. In experiments in which the emotional valence of encoded materials was manipulated with their arousal levels controlled, valence produced a continuum of memory falsification. Falsification was highest for negative materials, intermediate for neutral materials, and lowest for positive materials. Conjoint-recognition analysis produced a simple process-level explanation: As one progresses from positive to neutral to negative valence, false memory increases because (a) the perceived meaning resemblance between false and true items increases and (b) subjects are less able to use verbatim memories of true items to suppress errors.
A Statistical Method to Distinguish Functional Brain Networks
Fujita, André; Vidal, Maciel C.; Takahashi, Daniel Y.
2017-01-01
One major problem in neuroscience is the comparison of functional brain networks of different populations, e.g., distinguishing the networks of controls and patients. Traditional algorithms are based on searching for isomorphism between networks, assuming that the networks are deterministic. However, biological networks present randomness that cannot be well modeled by those algorithms. For instance, functional brain networks of distinct subjects of the same population can differ due to individual characteristics. Moreover, networks of subjects from different populations can be generated through the same stochastic process. Thus, a better hypothesis is that networks are generated by random processes. In this case, subjects from the same group are samples from the same random process, whereas subjects from different groups are generated by distinct processes. Using this idea, we developed a statistical test called ANOGVA to test whether two or more populations of graphs are generated by the same random graph model. Our simulation results demonstrate that we can precisely control the rate of false positives and that the test is powerful enough to discriminate random graphs generated by different models and parameters. The method was also shown to be robust to unbalanced data. As an example, we applied ANOGVA to an fMRI dataset composed of controls and patients diagnosed with autism or Asperger's syndrome. ANOGVA identified the cerebellar functional sub-network as statistically different between controls and autism (p < 0.001). PMID:28261045
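ANOGVA itself is not reproduced here, but the underlying idea, testing whether two groups of graphs could have come from the same generative process while controlling false positives, can be illustrated with a generic two-sample permutation test on a per-subject graph summary (e.g., mean degree or mean clustering coefficient). The summary statistic and smoothing constant are illustrative choices, not the authors' method:

```python
import random

def permutation_test(group_a, group_b, n_perm=2000, seed=0):
    """Two-sample permutation test on per-subject graph summaries.
    Null hypothesis: both groups are samples from the same generative
    model, in which case the group labels are exchangeable."""
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # relabel subjects at random
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)             # add-one keeps p strictly > 0

# e.g., per-subject mean clustering coefficients for two hypothetical groups:
print(permutation_test([0.31, 0.28, 0.33, 0.30], [0.45, 0.47, 0.44, 0.46]))
```

Because the null distribution is built by relabeling, the false-positive rate is controlled exactly at the nominal level, which is the property the abstract emphasizes.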
DOT National Transportation Integrated Search
1974-05-01
A resting 'normal' ECG can coexist with known angina pectoris, positive angiocardiography and previous myocardial infarction. In contemporary exercise ECG tests, a false positive/false negative total error of 10% is not unusual. Research aimed at imp...
Finkelstein's test: a descriptive error that can produce a false positive.
Elliott, B G
1992-08-01
Over the last three decades an error in performing Finkelstein's test has crept into the English literature in both text books and journals. This error can produce a false-positive, and if relied upon, a wrong diagnosis can be made, leading to inappropriate surgery.
A Demonstration of Regression False Positive Selection in Data Mining
ERIC Educational Resources Information Center
Pinder, Jonathan P.
2014-01-01
Business analytics courses, such as marketing research, data mining, forecasting, and advanced financial modeling, have substantial predictive modeling components. The predictive modeling in these courses requires students to estimate and test many linear regressions. As a result, false positive variable selection ("type I errors") is…
Nigro, Olivia D; Steward, Grieg F
2015-04-01
Plating environmental samples on vibrio-selective chromogenic media is a commonly used technique that allows one to quickly estimate concentrations of putative vibrio pathogens or to isolate them for further study. Although this approach is convenient, its usefulness depends directly on how well the procedure selects against false positives. We tested whether a chromogenic medium, CHROMagar Vibrio (CaV), used alone (single-plating) or in combination (double-plating) with a traditional medium, thiosulfate-citrate-bile-salts (TCBS), could improve the discrimination among three pathogenic vibrio species (Vibrio cholerae, Vibrio parahaemolyticus, and Vibrio vulnificus) and thereby decrease the number of false-positive colonies that must be screened by molecular methods. Assays were conducted on water samples from two estuarine environments (one subtropical, one tropical) in a variety of seasonal conditions. The results of the double-plating method were confirmed by PCR and 16S rRNA sequencing. Our data indicate that there is no significant difference in the false-positive rate between CaV and TCBS when using a single-plating technique, but determining color changes on the two media sequentially (double-plating) reduced the rate of false-positive identification in most cases. The improvement achieved was about two-fold on average, but varied greatly (from 0- to 5-fold) and depended on the sampling time and location. The double-plating method was most effective for V. vulnificus in warm months, when overall V. vulnificus abundance is high (false-positive rates as low as 2%, n=178). Similar results were obtained for V. cholerae (minimum false-positive rate of 16%, n=146). In contrast, the false-positive rate for V. parahaemolyticus was always high (minimum of 59%, n=109). Sequence analysis of false-positive isolates indicated that the majority of confounding isolates are from the Vibrionaceae family; however, members of distantly related bacterial groups were also able to grow on vibrio-selective media, even when using the double-plating method. In conclusion, the double-plating assay is a simple means to increase the efficiency of identifying pathogenic vibrios in aquatic environments and to reduce the number of molecular assays required for identity confirmation. However, the high spatial and temporal variability in the performance of the media means that molecular approaches are still essential to obtain the most accurate vibrio abundance estimates from environmental samples. Copyright © 2015 Elsevier B.V. All rights reserved.
The effect of mood on false memory for emotional DRM word lists.
Zhang, Weiwei; Gross, Julien; Hayne, Harlene
2017-04-01
In the present study, we investigated the effect of participants' mood on true and false memories of emotional word lists in the Deese-Roediger-McDermott (DRM) paradigm. In Experiment 1, we constructed DRM word lists in which all the studied words and corresponding critical lures reflected a specified emotional valence. In Experiment 2, we used these lists to assess mood-congruent true and false memory. Participants were randomly assigned to one of three induced-mood conditions (positive, negative, or neutral) and were presented with word lists comprised of positive, negative, or neutral words. For both true and false memory, there was a mood-congruent effect in the negative mood condition; this effect was due to a decrease in true and false recognition of the positive and neutral words. These findings are consistent with both spreading-activation and fuzzy-trace theories of DRM performance and have practical implications for our understanding of the effect of mood on memory.
Use of the false discovery rate for evaluating clinical safety data.
Mehrotra, Devan V; Heyse, Joseph F
2004-06-01
Clinical adverse experience (AE) data are routinely evaluated using between group P values for every AE encountered within each of several body systems. If the P values are reported and interpreted without multiplicity considerations, there is a potential for an excess of false positive findings. Procedures based on confidence interval estimates of treatment effects have the same potential for false positive findings as P value methods. Excess false positive findings can needlessly complicate the safety profile of a safe drug or vaccine. Accordingly, we propose a novel method for addressing multiplicity in the evaluation of adverse experience data arising in clinical trial settings. The method involves a two-step application of adjusted P values based on the Benjamini and Hochberg false discovery rate (FDR). Data from three moderate to large vaccine trials are used to illustrate our proposed 'Double FDR' approach, and to reinforce the potential impact of failing to account for multiplicity. This work was in collaboration with the late Professor John W. Tukey who coined the term 'Double FDR'.
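The Benjamini-Hochberg adjustment underlying the 'Double FDR' approach is straightforward to compute. A minimal single-pass sketch of the standard BH step-up adjustment (the two-step 'Double FDR' layering across body systems is not shown):

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up FDR adjustment.
    Returns adjusted p-values aligned with the input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])   # indices, smallest p first
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):                       # walk from largest p down
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = min(running_min, 1.0)            # enforce monotonicity, cap at 1
    return adjusted

# Four adverse-experience p-values (hypothetical):
print(benjamini_hochberg([0.01, 0.04, 0.03, 0.5]))
```

Declaring AEs with an adjusted p-value below 0.05 then controls the expected proportion of false-positive safety signals, rather than the per-comparison error rate.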
Rahal, M; Kervaire, B; Villard, J; Tiercy, J-M
2008-03-01
Human leukocyte antigen (HLA) typing by polymerase chain reaction-sequence-specific oligonucleotide (PCR-SSO) hybridization on solid phase (microbead assay) or polymerase chain reaction-sequence-specific primers (PCR-SSP) requires interpretation software to detect all possible allele combinations. These programs propose allele calls by taking into account false-positive or false-negative signal(s). The laboratory has the option to validate typing results in the presence of strongly cross-reacting or apparently false-negative signals. Alternatively, these seemingly aberrant signals may disclose novel variants. We report here four new HLA-B (B*5620 and B*5716) and HLA-DRB1 alleles (DRB1*110107 and DRB1*1474) that were detected through apparent false-negative or false-positive hybridization or amplification patterns, and ultimately resolved by sequencing. To avoid allele misassignments, a comprehensive evaluation of acquired data, as documented in a quality assurance system, is therefore required to confirm unambiguous typing interpretation.
Are false-positive rates leading to an overestimation of noise-induced hearing loss?
Schlauch, Robert S; Carney, Edward
2011-04-01
To estimate false-positive rates for rules proposed to identify early noise-induced hearing loss (NIHL) using the presence of notches in audiograms. Audiograms collected from school-age children in a national survey of health and nutrition (the Third National Health and Nutrition Examination Survey [NHANES III]; National Center for Health Statistics, 1994) were examined using published rules for identifying noise notches at various pass-fail criteria. These results were compared with computer-simulated "flat" audiograms. The proportion of these identified as having a noise notch is an estimate of the false-positive rate for a particular rule. Audiograms from the NHANES III for children 6-11 years of age yielded notched audiograms at rates consistent with simulations, suggesting that this group does not have significant NIHL. Further, pass-fail criteria for rules suggested by expert clinicians, applied to NHANES III audiometric data, yielded unacceptably high false-positive rates. Computer simulations provide an effective method for estimating false-positive rates for protocols used to identify notched audiograms. Audiometric precision could possibly be improved by (a) eliminating systematic calibration errors, including a possible problem with reference levels for TDH-style earphones; (b) repeating and averaging threshold measurements; and (c) using earphones that yield lower variability for 6.0 and 8.0 kHz, two frequencies critical for identifying noise notches.
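The simulation idea, generating "flat" audiograms whose only structure is test-retest noise and counting how often a notch rule fires, can be sketched as follows. The specific rule (4 kHz at least 10 dB worse than both 1 kHz and 8 kHz) and the 5 dB noise level are illustrative assumptions, not the rules or parameters evaluated in the paper:

```python
import random

def simulate_notch_fp_rate(n=10000, sd=5.0, notch_db=10.0, seed=1):
    """Estimate the false-positive rate of a notch rule on simulated flat audiograms.
    Every simulated ear has a true threshold of 0 dB HL at all frequencies; measured
    thresholds add Gaussian test-retest noise (sd, in dB). The illustrative rule flags
    a notch when 4 kHz is at least notch_db worse than both 1 kHz and 8 kHz."""
    rng = random.Random(seed)
    flagged = 0
    for _ in range(n):
        t1k, t4k, t8k = (rng.gauss(0.0, sd) for _ in range(3))
        if t4k - t1k >= notch_db and t4k - t8k >= notch_db:
            flagged += 1
    return flagged / n

print(simulate_notch_fp_rate())   # a few percent of flat audiograms get flagged
```

Even with a 10 dB notch criterion, measurement noise alone flags a nontrivial fraction of perfectly flat audiograms, which is exactly why the paper treats notch prevalence in children as an estimate of the false-positive rate.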
False-positive cancer screens and health-related quality of life.
McGovern, Patricia M; Gross, Cynthia R; Krueger, Richard A; Engelhard, Deborah A; Cordes, Jill E; Church, Timothy R
2004-01-01
By design, screening tests are imperfect: unresponsive to some cancers (false negatives) while occasionally raising suspicion of cancer where none exists (false positives). This pilot study describes patients' responses to having a false-positive screening test for cancer, and identifies screening effects on health-related quality of life (HRQoL). The pilot findings suggest issues important for incorporation in future evaluations of the impact of screening for prostate, lung, colon, or ovarian (PLCO) cancers. Seven focus groups were conducted to identify the nature and meaning of all phases of PLCO screening. Minnesota participants in the Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial who had completed screening, with at least 1 false-positive screen, participated (N = 47). Participants' reactions to abnormal screens and diagnostic work-ups were primarily emotional (e.g., anxiety and distress), not physical, and ultimately positive for the majority. Health distress and fear of cancer and death were the major negative aspects of HRQoL identified. These concepts are not typically included in generic HRQoL questionnaires like the SF-36, but are highly relevant to PLCO screening. Clinicians were regarded as underestimating the discomfort of follow-up diagnostic testing. However, relief and assurance appeared to eventually outweigh the negative emotions for most participants. Implications for oncology nurses include the need to consider the emotional consequences of screening in association with screen reliability and validity.
Comparison of scanty AFB smears against culture in an area with high HIV prevalence.
Lawson, L; Yassin, M A; Ramsay, A; Emenyonu, N E; Squire, S B; Cuevas, L E
2005-08-01
To verify, among tuberculosis (TB) suspects attending hospitals in Abuja, Nigeria, whether sputum smears graded as scanty are false-positive, sputum smears from 1068 patients were graded with the International Union Against Tuberculosis and Lung Disease classification. One specimen was cultured. Eight hundred and twenty-four (26%) smears were positive, 137 (4%) were scanty and 2243 were negative. Of 1068 cultures, 680 (64%) were positive. One hundred and thirty (95%) scanty and 809 (98%) positive smears were culture-positive. Twelve of 18 patients with a single scanty smear and 51 of 52 with ≥2 scanty smears were culture-positive. Fewer than 5% of scanty results, representing <1% of the patients treated for TB, are false-positive.
Patriquin, Glenn; LeBlanc, Jason; Heinstein, Charles; Roberts, Catherine; Lindsay, Robbin; Hatchette, Todd F
2016-03-01
Increased rates of Lyme disease and syphilis in the same geographic area prompted an assessment of screening test cross-reactivity. This study supports the previously described cross-reactivity of Lyme screening among syphilis-positive sera and reports evidence against the possibility of false-positive syphilis screening tests resulting from previous Borrelia burgdorferi infection. Copyright © 2016 Elsevier Inc. All rights reserved.
Lee, Jae Seok; Kim, Eui-Chong; Joo, Sei Ick; Lee, Sang-Min; Yoo, Chul-Gyu; Kim, Young Whan; Han, Sung Koo; Shim, Young-Soo; Yim, Jae-Joon
2008-10-01
Although it is not rare in clinical practice to find sputum that is acid-fast bacilli (AFB) smear-positive but from which subsequent culture fails to isolate mycobacteria, the incidence and clinical implications of such sputa from new patients have not been clearly elucidated. The aim of this study was to determine the incidence and clinical implications of sputum with a positive AFB smear but a negative mycobacterial culture. All AFB smear-positive sputa requested during diagnostic work-up for new patients visiting Seoul National University Hospital from 1 January 2005 through 31 December 2006 were included. Sputa producing a positive AFB smear but negative mycobacterial culture were classified into one of four categories: laboratory failure to isolate mycobacteria, false-positive AFB smear, pathogens other than mycobacteria that may yield a positive AFB smear, and indeterminate results. Out of 447 sputa with a positive AFB smear, 29 (6.5%) failed to culture any organism. Among these 29 sputa, 18 were attributed to laboratory failure to isolate mycobacteria, six were false-positive smears, and five were indeterminate. Although most sputa with a positive AFB smear but negative culture could be classified as laboratory failures, clinicians should consider the possibility of a false-positive AFB smear.
Potential for false positive HIV test results with the serial rapid HIV testing algorithm.
Baveewo, Steven; Kamya, Moses R; Mayanja-Kizza, Harriet; Fatch, Robin; Bangsberg, David R; Coates, Thomas; Hahn, Judith A; Wanyenze, Rhoda K
2012-03-19
Rapid HIV tests provide same-day results and are widely used in HIV testing programs in areas with limited personnel and laboratory infrastructure. The Uganda Ministry of Health currently recommends the serial rapid testing algorithm with Determine, STAT-PAK, and Uni-Gold for diagnosis of HIV infection. Using this algorithm, individuals who test positive on Determine, negative to STAT-PAK and positive to Uni-Gold are reported as HIV positive. We conducted further testing on this subgroup of samples using qualitative DNA PCR to assess the potential for false positive tests in this situation. Of the 3388 individuals who were tested, 984 were HIV positive on two consecutive tests, and 29 were considered positive by a tiebreaker (positive on Determine, negative on STAT-PAK, and positive on Uni-Gold). However, when the 29 samples were further tested using qualitative DNA PCR, 14 (48.2%) were HIV negative. Although this study was not primarily designed to assess the validity of rapid HIV tests and thus only a subset of the samples were retested, the findings show a potential for false positive HIV results in the subset of individuals who test positive when a tiebreaker test is used in serial testing. These findings highlight a need for confirmatory testing for this category of individuals.
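The serial algorithm described above reduces to a short decision procedure; a sketch of the decision logic only (test names as in the abstract; the return labels are illustrative):

```python
def serial_tiebreaker(determine, stat_pak, uni_gold):
    """Serial rapid-test algorithm as described in the abstract
    (booleans: True = reactive). The Determine+/STAT-PAK-/Uni-Gold+ path
    is the tiebreaker subgroup in which the study found 48.2% of
    results to be false positives."""
    if not determine:
        return "negative"                    # screening test non-reactive: stop
    if stat_pak:
        return "positive"                    # two concordant reactive results
    if uni_gold:
        return "positive (tiebreaker)"       # discordant pair resolved by third test
    return "negative"

print(serial_tiebreaker(True, False, True))  # → positive (tiebreaker)
```

Laid out this way, the weakness is visible: the tiebreaker branch reports "positive" on the strength of two agreeing tests out of three, with no confirmatory assay, which is the subgroup the authors recommend retesting.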
Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.
Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack
2017-06-01
In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to conflate the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
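PROMISE itself combines stability selection with cross-validation around penalized regression; as a rough illustration of the stability-selection half only, here is a generic subsample-and-vote feature selector using absolute correlation as a stand-in for the lasso. All parameter names and the scoring rule are illustrative assumptions, not the paper's algorithm:

```python
import random

def _abs_corr(xs, ys):
    """Absolute Pearson correlation (0.0 if either input is constant)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return abs(sxy / ((sxx * syy) ** 0.5)) if sxx and syy else 0.0

def stability_select(x_rows, y, n_subsamples=100, keep_frac=0.5, top_k=2,
                     stability=0.6, seed=0):
    """Subsample-and-vote feature selection: on each subsample, keep the top_k
    features by |correlation| with y; report only features selected in at least
    a `stability` fraction of subsamples. Rarely-selected (likely false-positive)
    features are culled, at the cost of some true positives."""
    rng = random.Random(seed)
    n, p = len(x_rows), len(x_rows[0])
    counts = [0] * p
    for _ in range(n_subsamples):
        idx = rng.sample(range(n), max(2, int(keep_frac * n)))
        scores = [(_abs_corr([x_rows[i][j] for i in idx],
                             [y[i] for i in idx]), j) for j in range(p)]
        for _, j in sorted(scores, reverse=True)[:top_k]:
            counts[j] += 1
    return [j for j in range(p) if counts[j] / n_subsamples >= stability]
```

Raising the `stability` threshold plays the role described in the abstract: fewer false positives at the cost of dropping weakly informative true positives, which is the trade-off PROMISE balances against cross-validated prediction accuracy.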
miRCat2: accurate prediction of plant and animal microRNAs from next-generation sequencing datasets
Paicu, Claudia; Mohorianu, Irina; Stocks, Matthew; Xu, Ping; Coince, Aurore; Billmeier, Martina; Dalmay, Tamas; Moulton, Vincent; Moxon, Simon
2017-01-01
Motivation MicroRNAs are a class of ∼21–22 nt small RNAs which are excised from a stable hairpin-like secondary structure. They have important gene regulatory functions and are involved in many pathways including developmental timing, organogenesis and development in eukaryotes. There are several computational tools for miRNA detection from next-generation sequencing datasets. However, many of these tools suffer from high false positive and false negative rates. Here we present a novel miRNA prediction algorithm, miRCat2. miRCat2 incorporates a new entropy-based approach to detect miRNA loci, which is designed to cope with the high sequencing depth of current next-generation sequencing datasets. It has a user-friendly interface and produces graphical representations of the hairpin structure and plots depicting the alignment of sequences on the secondary structure. Results We test miRCat2 on a number of animal and plant datasets and present a comparative analysis with miRCat, miRDeep2, miRPlant and miReap. We also use mutants in the miRNA biogenesis pathway to evaluate the predictions of these tools. Results indicate that miRCat2 has an improved accuracy compared with other methods tested. Moreover, miRCat2 predicts several new miRNAs that are differentially expressed in wild-type versus mutants in the miRNA biogenesis pathway. Availability and Implementation miRCat2 is part of the UEA small RNA Workbench and is freely available from http://srna-workbench.cmp.uea.ac.uk/. Contact v.moulton@uea.ac.uk or s.moxon@uea.ac.uk Supplementary information Supplementary data are available at Bioinformatics online. PMID:28407097
False positive computed tomographic angiography for Stanford type A aortic dissection.
Bandali, Murad F; Hatem, Muhammed A; Appoo, Jehangir J; Hutchison, Stuart J; Wong, Jason K
2015-12-01
Computed tomographic angiography (CTA) has emerged as the de facto imaging test to rule out acute aortic dissection; however, it is not without flaws. We report a case of a false-positive CTA with respect to Stanford Type A aortic dissection. A 52-year-old man presented with sudden-onset shortness of breath. He denied chest pain. Due to severe hypertension and an Emergency Department bedside ultrasound suggesting an intimal flap in the aorta, CTA was requested to better assess the ascending aorta and was interpreted as consistent with Stanford Type A aortic dissection with thrombosis of the false lumen in the ascending aorta. However, intra-operative imaging (TEE and epi-aortic scanning) did not identify an intimal flap or dissection, and neither did definitive surgical inspection of the aorta. The suspected aortic dissection and thrombosed false lumen were not visualized on repeat CTA two days later. A false-positive diagnosis of Stanford Type A aortic dissection on CTA can result from technical factors, streak artifacts, motion artifacts, and periaortic structures. In this case, non-uniform arterial contrast enhancement secondary to unrecognized biventricular dysfunction resulted in the false-positive CTA appearance of an intimal flap and mural thrombus. Intra-operative TEE and epi-aortic scanning were proven correct in excluding aortic dissection by the standard of definitive surgical inspection of the aorta.
Shanks, Leslie; Siddiqui, M Ruby; Kliescikova, Jarmila; Pearce, Neil; Ariti, Cono; Muluneh, Libsework; Pirou, Erwan; Ritmeijer, Koert; Masiga, Johnson; Abebe, Almaz
2015-02-03
In Ethiopia a tiebreaker algorithm using 3 rapid diagnostic tests (RDTs) in series is used to diagnose HIV. Discordant results between the first 2 RDTs are resolved by a third 'tiebreaker' RDT. Médecins Sans Frontières uses an alternate serial algorithm of 2 RDTs followed by a confirmation test for all double-positive RDT results. The primary objective was to compare the performance of the tiebreaker algorithm with a serial algorithm, and to evaluate the addition of a confirmation test to both algorithms. A secondary objective looked at the positive predictive value (PPV) of weakly reactive test lines. The study was conducted in two HIV testing sites in Ethiopia. Study participants were recruited sequentially until 200 positive samples were reached. Each sample was re-tested in the laboratory on the 3 RDTs and on a simple-to-use confirmation test, the Orgenics Immunocomb Combfirm® (OIC). The gold standard test was the Western blot, with indeterminate results resolved by PCR testing. 2620 subjects were included, with an HIV prevalence of 7.7%. Each of the 3 RDTs had an individual specificity of at least 99%. The serial algorithm with 2 RDTs had a single false positive result (1 out of 204) to give a PPV of 99.5% (95% CI 97.3%-100%). The tiebreaker algorithm resulted in 16 false positive results (PPV 92.7%, 95% CI: 88.4%-95.8%). Adding the OIC confirmation test to either algorithm eliminated the false positives. All the false positives had at least one weakly reactive test line in the algorithm. The PPV of weakly reactive RDTs was significantly lower than that of strongly positive test lines. The risk of false positive HIV diagnosis in a tiebreaker algorithm is significant. 
We recommend abandoning the tie-breaker algorithm in favour of WHO recommended serial or parallel algorithms, interpreting weakly reactive test lines as indeterminate results requiring further testing except in the setting of blood transfusion, and most importantly, adding a confirmation test to the RDT algorithm. It is now time to focus research efforts on how best to translate this knowledge into practice at the field level. Clinical Trial registration #: NCT01716299.
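The PPV figures above follow directly from the true-positive and false-positive counts. A quick check (the 203 true-positive count is inferred from the reported totals, so treat it as an assumption):

```python
def ppv(true_pos, false_pos):
    """Positive predictive value: the fraction of positive results
    that are truly positive."""
    return true_pos / (true_pos + false_pos)

# Counts from the study: the serial algorithm gave 1 false positive
# among 204 positives; the tiebreaker algorithm gave 16 false
# positives (203 true positives assumed in both cases).
assert round(100 * ppv(203, 1), 1) == 99.5    # serial algorithm
assert round(100 * ppv(203, 16), 1) == 92.7   # tiebreaker algorithm
```

Even a handful of extra false positives at this prevalence (7.7%) visibly erodes the PPV, which is why the confirmation step matters.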
Min-max hyperellipsoidal clustering for anomaly detection in network security.
Sarasamma, Suseela T; Zhu, Qiuming A
2006-08-01
A novel hyperellipsoidal clustering technique is presented for an intrusion-detection system in network security. Hyperellipsoidal clusters that maximize intracluster similarity and minimize intercluster similarity are generated from training data sets. The novelty of the technique lies in the fact that the parameters needed to construct higher order data models in general multivariate Gaussian functions are incrementally derived from the data sets using accretive processes. The technique is implemented in a feedforward neural network that uses a Gaussian radial basis function as the model generator. An evaluation based on the inclusiveness and exclusiveness of samples with respect to specific criteria is applied to accretively learn the output clusters of the neural network. One significant advantage of this is its ability to detect individual anomaly types that are hard to detect with other anomaly-detection schemes. Applying this technique, several feature subsets of the tcptrace network-connection records were identified that give above 95% detection at false-positive rates below 5%.
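The paper's accretive, RBF-network formulation is not reproduced here, but the membership test at the core of hyperellipsoidal clustering is the Mahalanobis distance under a fitted multivariate Gaussian: points far from every cluster, in covariance-scaled units, are anomalies. A minimal 2-D sketch with made-up connection features:

```python
from math import sqrt

def fit_ellipsoid(points):
    """Fit a 2-D ellipsoidal cluster: mean vector and covariance terms."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    return (mx, my), (sxx, sxy, syy)

def mahalanobis(point, mean, cov):
    """Covariance-scaled distance of a point from the cluster centre."""
    (mx, my), (sxx, sxy, syy) = mean, cov
    dx, dy = point[0] - mx, point[1] - my
    det = sxx * syy - sxy * sxy
    # quadratic form using the explicit 2x2 covariance inverse
    q = (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
    return sqrt(q)

# Normal traffic clusters tightly; a scan-like record sits far outside.
normal = [(1.0, 2.0), (1.2, 2.1), (0.9, 2.2), (1.1, 1.9), (0.8, 1.8)]
mean, cov = fit_ellipsoid(normal)
assert mahalanobis((1.0, 2.0), mean, cov) < 1.0   # inside the cluster
assert mahalanobis((5.0, 9.0), mean, cov) > 3.0   # flagged as anomalous
```

In an intrusion-detection setting the same test runs against every learned cluster, and a record is flagged only if no cluster accepts it.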
DOE Office of Scientific and Technical Information (OSTI.GOV)
The invention improves the accuracy of metabolite identification by combining direct-infusion ESI-MS with one-dimensional 1H-NMR spectroscopy. First, we apply a standard 1H-NMR metabolite identification protocol by matching the chemical shift, J-coupling, and intensity information of experimental NMR signals against the NMR signals of standard metabolites in metabolomics reference libraries. This generates a list of candidate metabolites. The list contains both false positive and ambiguous identifications. The software tool (the invention) takes the list of candidate metabolites generated from NMR-based metabolite identification and then calculates, for each of the candidate metabolites, the monoisotopic mass-to-charge (m/z) ratios for each commonly observed ion, fragment, and adduct feature. These are then used to assign m/z ratios in experimental ESI-MS spectra of the same sample. Detection of the signals of a given metabolite in both NMR and MS spectra resolves the ambiguities and therefore significantly improves the confidence of the identification.
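As an illustration of the m/z calculation step, the sketch below derives candidate ion m/z values from a metabolite's monoisotopic mass. The ion list and the rounded mass constants are illustrative, not the invention's actual tables:

```python
# Approximate monoisotopic masses (Da), rounded to 4 decimal places.
PROTON = 1.0073      # mass of H+
SODIUM_ION = 22.9892 # mass of Na+

def candidate_mz(monoisotopic_mass):
    """m/z ratios of commonly observed singly charged ions for a
    neutral metabolite of the given monoisotopic mass (Da)."""
    return {
        "[M+H]+": monoisotopic_mass + PROTON,
        "[M+Na]+": monoisotopic_mass + SODIUM_ION,
        "[M-H]-": monoisotopic_mass - PROTON,
    }

# Glucose (C6H12O6), monoisotopic mass ~180.0634 Da
mz = candidate_mz(180.0634)
assert abs(mz["[M+H]+"] - 181.0707) < 0.001
assert abs(mz["[M+Na]+"] - 203.0526) < 0.001
```

Matching these computed values against peaks in the ESI-MS spectrum, within an instrument-dependent tolerance, is what confirms or rejects each NMR candidate.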
Shadow-Based Vehicle Detection in Urban Traffic
Ibarra-Arenado, Manuel; Tjahjadi, Tardi; Pérez-Oria, Juan; Robla-Gómez, Sandra; Jiménez-Avello, Agustín
2017-01-01
Vehicle detection is a fundamental task in Forward Collision Avoiding Systems (FACS). Generally, vision-based vehicle detection methods consist of two stages: hypotheses generation and hypotheses verification. In this paper, we focus on the former, presenting a feature-based method for on-road vehicle detection in urban traffic. Hypotheses for vehicle candidates are generated according to the shadow under the vehicles by comparing pixel properties across the vertical intensity gradients caused by shadows on the road, followed by intensity thresholding and morphological discrimination. Unlike methods that identify the shadow under a vehicle as a road region with intensity smaller than a coarse lower bound of the intensity for road, the thresholding strategy we propose determines a coarse upper bound of the intensity for shadow, which reduces the false positive rate. The experimental results are promising in terms of detection performance and robustness in daytime under different weather conditions and cluttered scenarios to enable validation for the first stage of a complete FACS. PMID:28448465
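A minimal sketch of the upper-bound thresholding idea, assuming road intensity statistics are sampled from a known free-road region and using a hypothetical k-sigma margin; the paper's actual bound derivation also involves the vertical gradient comparison and morphological filtering described above:

```python
from statistics import mean, stdev

def shadow_upper_bound(road_pixels, k=3.0):
    """Coarse upper bound on shadow intensity, derived from a sampled
    free-road region: under-vehicle shadows are darker than the road
    by a margin of k standard deviations (k is an assumed tuning
    parameter, not a value from the paper)."""
    return mean(road_pixels) - k * stdev(road_pixels)

def shadow_mask(row, bound):
    """Mark pixels darker than the bound as shadow candidates."""
    return [intensity < bound for intensity in row]

road_sample = [120, 125, 118, 122, 119, 121, 124, 117]
bound = shadow_upper_bound(road_sample)
row = [121, 119, 45, 40, 118, 122]  # two dark under-vehicle pixels
assert shadow_mask(row, bound) == [False, False, True, True, False, False]
```

Bounding the shadow from above, rather than the road from below, keeps ordinary dark road texture from being flagged, which is where the false-positive reduction comes from.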
Fujimori, Shigeo; Hirai, Naoya; Ohashi, Hiroyuki; Masuoka, Kazuyo; Nishikimi, Akihiko; Fukui, Yoshinori; Washio, Takanori; Oshikubo, Tomohiro; Yamashita, Tatsuhiro; Miyamoto-Sato, Etsuko
2012-01-01
Next-generation sequencing (NGS) has been applied to various kinds of omics studies, resulting in many biological and medical discoveries. However, high-throughput protein-protein interactome datasets derived from detection by sequencing are scarce, because protein-protein interaction analysis requires many cell manipulations to examine the interactions. The low reliability of the high-throughput data is also a problem. Here, we describe a cell-free display technology combined with NGS that can improve both the coverage and reliability of interactome datasets. The completely cell-free method gives a high-throughput and a large detection space, testing the interactions without using clones. The quantitative information provided by NGS reduces the number of false positives. The method is suitable for the in vitro detection of proteins that interact not only with the bait protein, but also with DNA, RNA and chemical compounds. Thus, it could become a universal approach for exploring the large space of protein sequences and interactome networks. PMID:23056904
Agnelli, Luca; Tassone, Pierfrancesco; Neri, Antonino
2013-06-01
Multiple myeloma is a fatal malignant proliferation of clonal bone marrow Ig-secreting plasma cells, characterized by wide clinical, biological, and molecular heterogeneity. Herein, global gene and microRNA expression, genome-wide DNA profiling, and next-generation sequencing technology used to investigate the genomic alterations underlying the bio-clinical heterogeneity in multiple myeloma are discussed. High-throughput technologies have undoubtedly allowed a better comprehension of the molecular basis of the disease, a fine stratification and early identification of high-risk patients, and have provided insights toward targeted therapy studies. However, such technologies are at risk of being affected by laboratory- or cohort-specific biases, and are moreover influenced by the high number of expected false positives. This aspect carries major weight in myeloma, which is characterized by large molecular heterogeneity. Therefore, meta-analyses as well as multiple approaches are desirable, if not mandatory, to validate the results obtained, in line with commonly accepted recommendations for tumor diagnostic/prognostic biomarker studies.
Looking for Childhood Schizophrenia: Case Series of False Positives.
ERIC Educational Resources Information Center
Stayer, Catherine; Sporn, Alexandra; Gogtay, Nitin; Tossell, Julia; Lenane, Marge; Gochman, Peter; Rapoport, Judith L.
2004-01-01
Extensive experience with the diagnosis of childhood-onset schizophrenia indicates a high rate of false positives. Most mislabeled patients have chronic disabling, affective, or behavioral disorders. The authors report the cases of three children who passed stringent initial childhood-onset schizophrenia "screens" but had no chronic psychotic…
Noh, Jaekwang; Ko, Hak Hyun; Yun, Yeomin; Choi, Young Sook; Lee, Sang Gon; Shin, Sue; Han, Kyou Sup; Song, Eun Young
2008-08-01
We evaluated the performance and false positive rate of the Mediace RPR test (Sekisui, Japan), a newly introduced nontreponemal test using a chemistry autoanalyzer. The sensitivity of the Mediace RPR test was analyzed using sera from 50 patients with syphilis in different stages (8 primary, 7 secondary, and 35 latent), 14 sera positive with fluorescent treponemal antibody absorption (FTA-ABS) IgM, and 74 sera positive with the conventional rapid plasma reagin (RPR) card test (Asan, Korea) and also positive with the Treponema pallidum hemagglutination (TPHA) test or FTA-ABS IgG test. The specificity was analyzed on 108 healthy blood donors. We also performed the RPR card test on 302 sera that had tested positive with the Mediace RPR test, and performed TPHA or FTA-ABS IgG testing to analyze the false positive rate of the Mediace RPR test. A cutoff value of 0.5 R.U. (RPR unit) was used for the Mediace RPR test. The Mediace RPR test on syphilitic sera of different stages (primary, secondary, and latent) and FTA-ABS IgM positive sera showed sensitivities of 100%, 100%, 82.9%, and 100%, respectively. Among the 74 sera positive with the conventional RPR card test and TPHA or FTA-ABS IgG test, 55 were positive with the Mediace test. The specificity of the Mediace RPR test on blood donors was 97.2%. Among the 302 sera positive with the Mediace RPR test, 137 sera (45.4%) were negative by the RPR card and TPHA/FTA-ABS IgG tests. Although the sensitivities of the Mediace RPR test were good for primary and secondary syphilis, given its high negative rate on conventional RPR-positive samples, further studies are needed to determine whether it can replace the conventional nontreponemal test for screening purposes. Moreover, in view of the high false positive rate, positive results by the Mediace RPR test should be confirmed with treponemal tests.
The EB factory project. II. Validation with the Kepler field in preparation for K2 and TESS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parvizi, Mahmoud; Paegert, Martin; Stassun, Keivan G., E-mail: mahmoud.parvizi@vanderbilt.edu
Large repositories of high precision light curve data, such as the Kepler data set, provide the opportunity to identify astrophysically important eclipsing binary (EB) systems in large quantities. However, the rate of classical “by eye” human analysis restricts complete and efficient mining of EBs from these data using classical techniques. To prepare for mining EBs from the upcoming K2 mission as well as other current missions, we developed an automated end-to-end computational pipeline—the Eclipsing Binary Factory (EBF)—that automatically identifies EBs and classifies them into morphological types. The EBF has been previously tested on ground-based light curves. To assess the performance of the EBF in the context of space-based data, we apply the EBF to the full set of light curves in the Kepler “Q3” Data Release. We compare the EBs identified from this automated approach against the human generated Kepler EB Catalog of ∼2600 EBs. When we require EB classification with ⩾90% confidence, we find that the EBF correctly identifies and classifies eclipsing contact (EC), eclipsing semi-detached (ESD), and eclipsing detached (ED) systems with a false positive rate of only 4%, 4%, and 8%, while complete to 64%, 46%, and 32%, respectively. When classification confidence is relaxed, the EBF identifies and classifies ECs, ESDs, and EDs with a slightly higher false positive rate of 6%, 16%, and 8%, while much more complete to 86%, 74%, and 62%, respectively. Through our processing of the entire Kepler “Q3” data set, we also identify 68 new candidate EBs that may have been missed by the human generated Kepler EB Catalog. We discuss the EBF's potential application to light curve classification for periodic variable stars more generally for current and upcoming surveys like K2 and the Transiting Exoplanet Survey Satellite.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marques da Silva, A; Narciso, L
Purpose: Commercial workstations usually have their own software to calculate dynamic renal functions. However, they usually have low flexibility and subjectivity in delimiting kidney and background areas. The aim of this paper is to present a public domain software tool, called RenalQuant, capable of semi-automatically drawing regions of interest on dynamic renal scintigraphies, extracting data, and generating renal function quantification parameters. Methods: The software was developed in Java and written as an ImageJ-based plugin. The preprocessing and segmentation steps include the user's selection of one time frame with higher activity in the kidneys' region, compared with background, and low activity in the liver. Next, the chosen time frame is smoothed using a Gaussian low pass spatial filter (σ = 3) for noise reduction and better delimitation of the kidneys. The maximum entropy thresholding method is used for segmentation. A background area is automatically placed below each kidney, and the user confirms whether these regions are correctly segmented and positioned. Quantitative data are extracted, and each renogram and relative renal function (RRF) value is calculated and displayed. Results: The RenalQuant plugin was validated using 20 patients' retrospective 99mTc-DTPA exams and compared with results produced by commercial workstation software, referred to as the reference. The renograms' intraclass correlation coefficients (ICC) were calculated, and false-negative and false-positive RRF values were analyzed. The results showed that ICC values between the RenalQuant plugin and the reference software for both kidneys' renograms were higher than 0.75, showing excellent reliability. Conclusion: Our results indicated the RenalQuant plugin can be reliably used to generate renograms, using DICOM dynamic renal scintigraphy exams as input. It is user-friendly, and user interaction is kept to a minimum. 
Further studies have to investigate how to increase RRF accuracy and how to address limitations in the segmentation step, mainly when the background region has higher activity than the kidneys. Financial support by CAPES.
The EB Factory Project. II. Validation with the Kepler Field in Preparation for K2 and TESS
NASA Astrophysics Data System (ADS)
Parvizi, Mahmoud; Paegert, Martin; Stassun, Keivan G.
2014-12-01
Large repositories of high precision light curve data, such as the Kepler data set, provide the opportunity to identify astrophysically important eclipsing binary (EB) systems in large quantities. However, the rate of classical “by eye” human analysis restricts complete and efficient mining of EBs from these data using classical techniques. To prepare for mining EBs from the upcoming K2 mission as well as other current missions, we developed an automated end-to-end computational pipeline—the Eclipsing Binary Factory (EBF)—that automatically identifies EBs and classifies them into morphological types. The EBF has been previously tested on ground-based light curves. To assess the performance of the EBF in the context of space-based data, we apply the EBF to the full set of light curves in the Kepler “Q3” Data Release. We compare the EBs identified from this automated approach against the human generated Kepler EB Catalog of ∼2600 EBs. When we require EB classification with ≥90% confidence, we find that the EBF correctly identifies and classifies eclipsing contact (EC), eclipsing semi-detached (ESD), and eclipsing detached (ED) systems with a false positive rate of only 4%, 4%, and 8%, while complete to 64%, 46%, and 32%, respectively. When classification confidence is relaxed, the EBF identifies and classifies ECs, ESDs, and EDs with a slightly higher false positive rate of 6%, 16%, and 8%, while much more complete to 86%, 74%, and 62%, respectively. Through our processing of the entire Kepler “Q3” data set, we also identify 68 new candidate EBs that may have been missed by the human generated Kepler EB Catalog. We discuss the EBF's potential application to light curve classification for periodic variable stars more generally for current and upcoming surveys like K2 and the Transiting Exoplanet Survey Satellite.
49 CFR 173.168 - Chemical oxygen generators.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 2 2011-10-01 2011-10-01 false Chemical oxygen generators. 173.168 Section 173... Class 7 § 173.168 Chemical oxygen generators. An oxygen generator, chemical (defined in § 171.8 of this subchapter) may be transported only under the following conditions: (a) Approval. A chemical oxygen generator...
40 CFR 271.10 - Requirements for generators of hazardous wastes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 26 2010-07-01 2010-07-01 false Requirements for generators of... for Final Authorization § 271.10 Requirements for generators of hazardous wastes. (a) The State program must cover all generators covered by 40 CFR part 262. States must require new generators to...
14 CFR 25.1450 - Chemical oxygen generators.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Chemical oxygen generators. 25.1450 Section... oxygen generators. (a) For the purpose of this section, a chemical oxygen generator is defined as a device which produces oxygen by chemical reaction. (b) Each chemical oxygen generator must be designed...
21 CFR 870.3610 - Implantable pacemaker pulse generator.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Implantable pacemaker pulse generator. 870.3610... pacemaker pulse generator. (a) Identification. An implantable pacemaker pulse generator is a device that has... implantable pacemaker pulse generator device that was in commercial distribution before May 28, 1976, or that...
Kwon, Yong Hwan; Kim, Nayoung; Lee, Ju Yup; Choi, Yoon Jin; Yoon, Kichul; Yoon, Hyuk; Shin, Cheol Min; Park, Young Soo; Lee, Dong Ho
2014-01-01
Background: This study was conducted to evaluate the diagnostic validity of the 13C-urea breath test (13C-UBT) in the remnant stomach after partial gastrectomy for gastric cancer. Methods: The 13C-UBT results after Helicobacter pylori eradication therapy were compared with the results of endoscopic biopsy-based methods in patients who had undergone partial gastrectomy for gastric cancer. Results: Among the gastrectomized patients who showed positive 13C-UBT results (≥ 2.5‰, n = 47) and negative 13C-UBT results (< 2.5‰, n = 114) after H. pylori eradication, 26 patients (16.1%) and 4 patients (2.5%) were found to show false positive and false negative results based on biopsy-based methods, respectively. The sensitivity, specificity, false positive rate, and false negative rate for the cut-off value of 2.5‰ were 84.0%, 80.9%, 19.1%, and 16.0%, respectively. The positive and negative predictive values were 44.7% and 96.5%, respectively. In the multivariate analysis, two or more H. pylori eradication therapies (odds ratio = 3.248, 95% confidence interval = 1.088–9.695, P = 0.035) were associated with a false positive result of the 13C-UBT. Conclusions: After partial gastrectomy, positive 13C-UBT results were frequently discordant with endoscopic biopsy-based methods for confirming H. pylori status after eradication. Additional endoscopic biopsy-based H. pylori tests would be helpful to avoid unnecessary treatment for H. pylori eradication in these cases. PMID:25574466
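The reported metrics all follow from the 2x2 table implied by the abstract (47 UBT-positive patients of whom 26 were false positives, 114 UBT-negative of whom 4 were false negatives):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening-test metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts implied by the study: 47 positive 13C-UBT results (26 false
# positives -> 21 true positives), 114 negative results (4 false
# negatives -> 110 true negatives).
m = diagnostic_metrics(tp=21, fp=26, tn=110, fn=4)
assert round(100 * m["sensitivity"], 1) == 84.0
assert round(100 * m["specificity"], 1) == 80.9
assert round(100 * m["ppv"], 1) == 44.7
assert round(100 * m["npv"], 1) == 96.5
```

The low PPV (44.7%) against the high NPV (96.5%) is why the authors trust negative UBT results but want biopsy confirmation of positive ones.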
46 CFR 111.12-11 - Generator protection.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 4 2013-10-01 2013-10-01 false Generator protection. 111.12-11 Section 111.12-11 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Generator Construction and Circuits § 111.12-11 Generator protection. (a...
21 CFR 882.4400 - Radiofrequency lesion generator.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Radiofrequency lesion generator. 882.4400 Section... (CONTINUED) MEDICAL DEVICES NEUROLOGICAL DEVICES Neurological Surgical Devices § 882.4400 Radiofrequency lesion generator. (a) Identification. A radiofrequency lesion generator is a device used to produce...
49 CFR 229.105 - Steam generator number.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 4 2013-10-01 2013-10-01 false Steam generator number. 229.105 Section 229.105....105 Steam generator number. An identification number shall be marked on the steam generator's separator and that number entered on FRA Form F 6180-49A. ...
Haranosono, Yu; Kurata, Masaaki; Sakaki, Hideyuki
2014-08-01
One of the mechanisms of phototoxicity is photo-reaction, such as reactive oxygen species (ROS) generation following photo-absorption. We focused on ROS generation and photo-absorption as key steps, because these key steps can be described by photochemical properties, and these properties depend on chemical structure. The photo-reactivity of a compound is generally described by its HOMO-LUMO gap (HLG). Herein, we showed that the HLG can be used as a descriptor of reactive oxygen species generation. Moreover, the maximum-conjugated π electron number (PENMC), which we identified as a descriptor of photo-absorption, could also predict in vitro phototoxicity. Each descriptor alone predicted in vitro phototoxicity with 70.0% concordance, but left an unpredicted area (gray zone). Interestingly, the compounds in the two gray zones did not fully overlap, indicating that combining the two descriptors could improve predictive power. We reset the cut-off lines to define a positive zone, a negative zone, and a gray zone for each descriptor. We then overlaid HLG and PENMC in a graph and divided the total area into nine zones using the cut-off lines of each descriptor. The prediction rules were chosen to achieve the best concordance, and concordance improved to 82.8% for self-validation and 81.6% for cross-validation. We found properties common to the false positive and false negative compounds: photo-reactive structures and photo-allergenicity, respectively. In addition, our method can be applied to structurally diverse compounds using only their chemical structure, without any statistical analysis or complicated calculation.
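A sketch of the two-descriptor, nine-zone combination described above. The cut-off values and the exact combination rules are hypothetical, since the abstract does not state them; only the structure (three zones per descriptor, nine zones combined) comes from the source:

```python
def zone(value, neg_cut, pos_cut, positive_when_high=True):
    """Classify a descriptor value as 'pos', 'neg' or 'gray' using two
    cut-off lines (the cut-off values used below are hypothetical)."""
    lo, hi = sorted((neg_cut, pos_cut))
    if value <= lo:
        return "neg" if positive_when_high else "pos"
    if value >= hi:
        return "pos" if positive_when_high else "neg"
    return "gray"

def predict_phototoxicity(hlg, penmc):
    """Combine the two descriptors over nine zones: a confident call
    by either descriptor decides; a double-gray stays unresolved.
    A small HLG means high photo-reactivity, a large PENMC means
    strong photo-absorption."""
    z1 = zone(hlg, neg_cut=9.0, pos_cut=7.0, positive_when_high=False)
    z2 = zone(penmc, neg_cut=6, pos_cut=10)
    if "pos" in (z1, z2):
        return "positive"
    if z1 == z2 == "gray":
        return "unresolved"
    return "negative"

assert predict_phototoxicity(hlg=6.5, penmc=5) == "positive"   # reactive HLG
assert predict_phototoxicity(hlg=9.5, penmc=4) == "negative"
assert predict_phototoxicity(hlg=8.0, penmc=8) == "unresolved"
```

The gain reported in the paper comes from the two gray zones not coinciding: a compound unresolved by one descriptor is often decided by the other.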
Development and Implementation of Metrics for Identifying Military Impulse Noise
2010-09-01
False Negative Rate; FP False Positive; FPR False Positive Rate; FtC Fort Carson, CO; GIS Geographic Information System; GMM Gaussian mixture model; Hz … [Figure 8: plot of typical neuron activation (axes: bin number; color scale: number of data points mapped to bin).] … The signal metrics and the waveform itself were saved and transmitted to the home base. There is also a provision to download the entire recorded waveform.
An attentive multi-camera system
NASA Astrophysics Data System (ADS)
Napoletano, Paolo; Tisato, Francesco
2014-03-01
Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be reviewed by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying videos from each camera in the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of one video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view of a multi-camera system. The proposed method has been evaluated in a given scenario and has demonstrated its effectiveness with respect to the other methods and manually generated ground truth. The effectiveness has been evaluated in terms of the number of correct best-views generated by the method with respect to the camera views manually selected by a human operator.
Hoskin, Robert; Hunter, Mike D; Woodruff, Peter W R
2014-11-01
Both psychological stress and predictive signals relating to expected sensory input are believed to influence perception, an influence which, when disrupted, may contribute to the generation of auditory hallucinations. The effect of stress and semantic expectation on auditory perception was therefore examined in healthy participants using an auditory signal detection task requiring the detection of speech from within white noise. Trait anxiety was found to predict the extent to which stress influenced response bias, resulting in more anxious participants adopting a more liberal criterion, and therefore experiencing more false positives, when under stress. While semantic expectation was found to increase sensitivity, its presence also generated a shift in response bias towards reporting a signal, suggesting that the erroneous perception of speech became more likely. These findings provide a potential cognitive mechanism that may explain the impact of stress on hallucination-proneness, by suggesting that stress has the tendency to alter response bias in highly anxious individuals. These results also provide support for the idea that top-down processes such as those relating to semantic expectation may contribute to the generation of auditory hallucinations. © 2013 The British Psychological Society.
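In a signal detection analysis like the one above, sensitivity (d′) and response bias (criterion c) are computed from hit and false-alarm rates; a more liberal criterion shows up as a lower (more negative) c, i.e. more "yes" responses and more false positives. A sketch with illustrative rates (not the study's data):

```python
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    """Signal-detection-theory sensitivity (d') and response bias (c).
    Negative c = liberal criterion: more 'yes' responses, hence more
    false positives."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Illustrative: under stress an anxious participant says 'yes' more
# often, so hit and false-alarm rates both rise; c becomes more
# liberal while d' stays roughly unchanged.
d_calm, c_calm = sdt_indices(hit_rate=0.70, fa_rate=0.20)
d_stress, c_stress = sdt_indices(hit_rate=0.85, fa_rate=0.40)
assert c_stress < c_calm             # more liberal criterion under stress
assert abs(d_stress - d_calm) < 0.3  # sensitivity roughly unchanged
```

This separation is exactly what lets the study attribute the stress effect to bias (c) rather than perceptual sensitivity, and the semantic-expectation effect to both.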
McDougall, I R
1995-10-01
Whole-body scintigraphy with radioiodine-131 is an important diagnostic test in the management of patients with differentiated thyroid cancer who have undergone surgical treatment. The scan can demonstrate the presence of residual thyroid or functioning metastases in lymph nodes or distant sites. However, there are a number of potential pitfalls in the interpretation of this scan that could lead to a false-positive diagnosis of cancer. The scintiscans are presented for five patients in whom uptake outside the thyroid was not due to functioning metastases. Some of these abnormalities are physiologic, such as uptake of iodine in the gastrointestinal tract. A comprehensive list of false-positive results is tabulated.
Chung, Shimin J; Krishnan, Prabha U; Leo, Yee Sin
2015-02-01
Early diagnosis of dengue has been made easier in recent years owing to the advancement in diagnostic technologies. The rapid non-structural protein 1 (NS1) test strip is widely used in many developed and developing regions at risk of dengue. Despite the relatively high specificity of this test, we recently encountered two cases of false-positive dengue NS1 antigen in patients with underlying hematological malignancies. We reviewed the literature for causes of false-positive dengue NS1. © The American Society of Tropical Medicine and Hygiene.
Rapid automated method for screening of enteric pathogens from stool specimens.
Villasante, P A; Agulla, A; Merino, F J; Pérez, T; Ladrón de Guevara, C; Velasco, A C
1987-01-01
A total of 800 colonies suggestive of Salmonella, Shigella, or Yersinia species isolated on stool differential agar media were inoculated onto both conventional biochemical test media (triple sugar iron agar, urea agar, and phenylalanine agar) and Entero Pathogen Screen cards of the AutoMicrobic system (Vitek Systems, Inc., Hazelwood, Mo.). Based on the conventional tests, the AutoMicrobic system method yielded the following results: 587 true-negatives, 185 true-positives, 2 false-negatives, and 26 false-positives (sensitivity, 99%; specificity, 96%). Both true-positive and true-negative results were achieved considerably earlier than false results (P < .001). The Entero Pathogen Screen card method is a fast, easy, and sensitive method for screening for Salmonella, Shigella, or Yersinia species. The impossibility of screening for oxidase-positive pathogens is a minor disadvantage of this method. PMID:3553230
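The reported sensitivity and specificity follow directly from the four counts in the abstract; a quick recomputation (a reader's check, not the authors' code):

```python
def sensitivity_specificity(tp, tn, fp, fn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts reported for the Entero Pathogen Screen cards vs conventional media
sens, spec = sensitivity_specificity(tp=185, tn=587, fp=26, fn=2)
# sens ≈ 0.989 and spec ≈ 0.958, rounding to the reported 99% and 96%
```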
Sundar, Kaushik; Venkatasubramanian, Shankar; Shanmugam, Sundar; Arthur, Preetam; Subbaraya, Ramakrishnan; Hazeena, Philo
2017-10-15
Acute flaccid paralysis is a neuromuscular emergency characterized by rapidly worsening weakness that evolves quickly to cause diaphragmatic failure. The challenge for the treating physician is to stabilize the patient, generate the differential diagnosis and determine the management; all in quick time. Neurotoxic snake bites have inadequate signs of inflammation and are easily missed. Myasthenic crisis, on the other hand, could be the first sign of myasthenia gravis in up to 20% of patients. Both present with acute respiratory failure and inadequate history. Two of our patients presented with a similar clinical picture, and received polyvalent anti-snake venom obtained from hyperimmunised horses (Equus caballus). Both tested positive for anti-acetylcholine receptor (AChR) antibody. After recovery, both patients narrated a history suggestive of neurotoxic envenomation. We later discovered that patients exposed to polyvalent anti-snake venom (Equus caballus) prior to radioimmunoassay may erroneously demonstrate high titers of anti-AChR antibody in their serum. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lu, Weihua; Chen, Xinjian; Zhu, Weifang; Yang, Lei; Cao, Zhaoyuan; Chen, Haoyu
2015-03-01
In this paper, we proposed a method based on the Freeman chain code to segment and count rhesus choroid-retinal vascular endothelial cells (RF/6A) automatically in fluorescence microscopy images. The proposed method consists of four main steps. First, a threshold filter and morphological transform were applied to reduce the noise. Second, the boundary information was used to generate the Freeman chain codes. Third, the concave points were found based on the relationship between the difference of the chain code and the curvature. Finally, cell segmentation and counting were completed based on the characteristics of the number of the concave points, the area and shape of the cells. The proposed method was tested on 100 fluorescence microscopic cell images; the average true positive rate (TPR) was 98.13% and the average false positive rate (FPR) was 4.47%. The preliminary results showed the feasibility and efficiency of the proposed method.
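For readers unfamiliar with Freeman chain codes, the idea is to encode a traced boundary as a sequence of 8-connected step directions; sharp turns (candidate concave points) then stand out in the first difference of the code. A minimal sketch in image (y-down) coordinates; the paper's actual curvature-based concave-point criterion is more involved and is not reproduced here:

```python
# 8-direction Freeman codes in image coordinates (x right, y down):
# 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
        (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def freeman_chain_code(boundary):
    """Encode an ordered 8-connected boundary pixel list as direction codes."""
    return [DIRS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(boundary, boundary[1:])]

def code_first_difference(codes):
    """First difference of the chain code, mod 8; large turn values flag
    candidate concave points along the cell boundary."""
    return [(b - a) % 8 for a, b in zip(codes, codes[1:])]
```

For a unit square traced clockwise in image coordinates, `freeman_chain_code([(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)])` yields `[0, 6, 4, 2]`.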
Risk management and precaution: insights on the cautious use of evidence.
Hrudey, Steve E; Leiss, William
2003-01-01
Risk management, done well, should be inherently precautionary. Adopting an appropriate degree of precaution with respect to feared health and environmental hazards is fundamental to risk management. The real problem is in deciding how precautionary to be in the face of inevitable uncertainties, demanding that we understand the equally inevitable false positives and false negatives from screening evidence. We consider a framework for detection and judgment of evidence of well-characterized hazards, using the concepts of sensitivity, specificity, positive predictive value, and negative predictive value that are well established for medical diagnosis. Our confidence in predicting the likelihood of a true danger inevitably will be poor for rare hazards because of the predominance of false positives; failing to detect a true danger is less likely because false negatives must be rarer than the danger itself. Because most controversial environmental hazards arise infrequently, this truth poses a dilemma for risk management. PMID:14527835
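The dilemma the authors describe follows directly from Bayes' rule: even a highly sensitive and specific screen yields mostly false positives when the hazard is rare, while false negatives remain rarer than the hazard itself. A worked sketch with illustrative numbers (not taken from the paper):

```python
def ppv_npv(sens, spec, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    tp = sens * prevalence                # true positives
    fp = (1 - spec) * (1 - prevalence)    # false positives
    tn = spec * (1 - prevalence)          # true negatives
    fn = (1 - sens) * prevalence          # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# A good screen (99% sensitive, 95% specific) applied to a rare hazard (0.1%)
ppv, npv = ppv_npv(0.99, 0.95, 0.001)
# ppv ≈ 0.02: roughly 98% of positive calls are false at this prevalence
# npv ≈ 0.99999: missing a true danger is far rarer than the danger itself
```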
Emotion and false memory: How goal-irrelevance can be relevant for what people remember.
Van Damme, Ilse; Kaplan, Robin L; Levine, Linda J; Loftus, Elizabeth F
2017-02-01
Elaborating on misleading information concerning emotional events can lead people to form false memories. The present experiment compared participants' susceptibility to false memories when they elaborated on information associated with positive versus negative emotion and pregoal versus postgoal emotion. Pregoal emotion reflects appraisals that goal attainment or failure is anticipated but has not yet occurred (e.g., hope and fear). Postgoal emotion reflects appraisals that goal attainment or failure has already occurred (e.g., happiness and devastation). Participants watched a slideshow depicting an interaction between a couple and were asked to empathise with the protagonist's feelings of hope (positive pregoal), happiness (positive postgoal), fear (negative pregoal), or devastation (negative postgoal); in control conditions, no emotion was mentioned. Participants were then asked to reflect on details of the interaction that had occurred (true) or had not occurred (false), and that were relevant or irrelevant to the protagonist's goal. Irrespective of emotional valence, participants in the pregoal conditions were more susceptible to false memories concerning goal-irrelevant details than were participants in the other conditions. These findings support the view that pregoal emotions narrow attention to information relevant to goal pursuit, increasing susceptibility to false memories for irrelevant information.
Predicting the carcinogenicity of chemicals with alternative approaches: recent advances.
Benigni, Romualdo
2014-09-01
Alternative approaches to the rodent bioassay are necessary for early identification of problematic drugs and biocides during the development process, and are the only practicable tool for assessing environmental chemicals with no or inadequate safety documentation. This review informs on: i) the traditional prescreening through genotoxicity testing; ii) an integrative approach that assesses DNA-reactivity and ability to disorganize tissues; iii) new applications of omics technologies (ToxCast/Tox21 project); iv) a pragmatic approach aimed at filling data gaps by intrapolating/extrapolating from similar chemicals (read-across, category formation). The review also approaches the issue of the concerns about false-positive and false-negative results that prevent a wider acceptance and use of alternatives. The review addresses strengths and limitations of various proposals, and concludes on the need for differential approaches to the issue of false negatives and false positives. False negatives can be eliminated or reduced below the variability of the animal assay with conservative quantitative structure-activity relationships or in vitro tests; false positives can be cleared with ad hoc mechanistically based follow-ups. This framework can permit a reduction of animal testing and a better protection of human health.
The local lymph node assay being too sensitive?
Vohr, Hans-Werner; Ahr, Hans-Jürgen
2005-12-01
The local lymph node assay (LLNA) and modifications thereof were recently recognized by the OECD as stand-alone methods for the detection of skin-sensitizing potential. However, although the validity of the LLNA was acknowledged by the ICCVAM, attention was drawn to one major problem, i.e., the possibility of false-positive results caused by non-specific cell activation as a result of inflammatory processes in the skin (irritation). This is based on the fact that inflammatory processes in the skin may lead to non-specific activation of dendritic cells, cell migration and non-specific proliferation of lymph node cells. Measuring cell proliferation by radioactive or non-radioactive methods, without taking the irritating properties of test items into account, thus leads to false-positive reactions. In this paper, we have compared both endpoints: (1) cell proliferation alone and (2) cell proliferation in combination with inflammatory (irritating) processes. It turned out that a considerable number of tests were "false positive" according to the definition mentioned above. By excluding such false-positive results, the LLNA seems no more sensitive than relevant guinea pig assays. These various methods and results are described here.
Analyzing false positives of four questions in the Force Concept Inventory
NASA Astrophysics Data System (ADS)
Yasuda, Jun-ichiro; Mae, Naohiro; Hull, Michael M.; Taniguchi, Masa-aki
2018-06-01
In this study, we analyze the systematic error from false positives of the Force Concept Inventory (FCI). We compare the systematic errors of question 6 (Q.6), Q.7, and Q.16, for which clearly erroneous reasoning has been found, with Q.5, for which clearly erroneous reasoning has not been found. We determine whether or not a correct response to a given FCI question is a false positive using subquestions. In addition to the 30 original questions, subquestions were introduced for Q.5, Q.6, Q.7, and Q.16. This modified version of the FCI was administered to 1145 university students in Japan from 2015 to 2017. In this paper, we discuss our finding that the systematic errors of Q.6, Q.7, and Q.16 are much larger than that of Q.5 for students with mid-level FCI scores. Furthermore, we find that, averaged over the data sample, the sum of the false positives from Q.5, Q.6, Q.7, and Q.16 is about 10% of the FCI score of a mid-level student.
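The notion of a false positive here is operational: a correct answer to the main question paired with incorrect reasoning on its subquestion. A hypothetical sketch of how such a rate could be tallied; the data layout and function are illustrative, not the authors' analysis:

```python
def false_positive_rate(responses):
    """responses: (answer_correct, reasoning_correct) pairs for one question,
    one per student. A false positive is a correct answer given with incorrect
    subquestion reasoning; the rate is a fraction of all respondents."""
    fp = sum(1 for answer, reasoning in responses if answer and not reasoning)
    return fp / len(responses)
```

For example, with four students of whom one answered correctly for the wrong reason, `false_positive_rate([(True, True), (True, False), (False, False), (True, True)])` returns 0.25.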
46 CFR 111.12-13 - Propulsion generator protection.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 4 2011-10-01 2011-10-01 false Propulsion generator protection. 111.12-13 Section 111.12-13 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Generator Construction and Circuits § 111.12-13 Propulsion generator...
46 CFR 129.320 - Generators and motors.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 4 2012-10-01 2012-10-01 false Generators and motors. 129.320 Section 129.320 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) OFFSHORE SUPPLY VESSELS ELECTRICAL INSTALLATIONS Power Sources and Distribution Systems § 129.320 Generators and motors. (a) Each generator and...
21 CFR 870.3640 - Indirect pacemaker generator function analyzer.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Indirect pacemaker generator function analyzer... Indirect pacemaker generator function analyzer. (a) Identification. An indirect pacemaker generator function analyzer is an electrically powered device that is used to determine pacemaker function or...
21 CFR 870.3630 - Pacemaker generator function analyzer.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Pacemaker generator function analyzer. 870.3630... (CONTINUED) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Prosthetic Devices § 870.3630 Pacemaker generator function analyzer. (a) Identification. A pacemaker generator function analyzer is a device that is...
Walitt, Brian; Mackey, Rachel; Kuller, Lewis; Deane, Kevin D; Robinson, William; Holers, V Michael; Chang, Yue-Fang; Moreland, Larry
2013-05-01
Rheumatoid arthritis (RA) research using large databases is limited by insufficient case validity. Of 161,808 postmenopausal women in the Women's Health Initiative, 15,691 (10.2%) reported having RA, far higher than the expected 1% population prevalence. Since chart review for confirmation of an RA diagnosis is impractical in large cohort studies, the current study (2009-2011) tested the ability of baseline serum measurements of rheumatoid factor and anti-cyclic citrullinated peptide antibodies, second-generation assay (anti-CCP2), to identify physician-validated RA among the chart-review study participants with self-reported RA (n = 286). Anti-CCP2 positivity had the highest positive predictive value (PPV) (80.0%), and rheumatoid factor positivity the lowest (44.6%). Together, use of disease-modifying antirheumatic drugs and anti-CCP2 positivity increased PPV to 100% but excluded all seronegative cases (approximately 15% of all RA cases). Case definitions inclusive of seronegative cases had PPVs between 59.6% and 63.6%. False-negative results were minimized in these test definitions, as evidenced by negative predictive values of approximately 90%. Serological measurements, particularly measurement of anti-CCP2, improved the test characteristics of RA case definitions in the Women's Health Initiative.
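The trade-off this study reports — stricter serological case definitions raise PPV but exclude seronegative cases, lowering sensitivity — can be made concrete with a toy cohort. The records and counts below are invented for illustration; only the pattern, not the numbers, mirrors the study:

```python
# Hypothetical self-reported RA cohort: serology/medication flags plus
# chart-validated diagnosis (the gold standard in the study)
cohort = [
    {"ccp2": True,  "dmard": True,  "validated": True},
    {"ccp2": True,  "dmard": False, "validated": True},
    {"ccp2": False, "dmard": True,  "validated": True},   # seronegative true case
    {"ccp2": False, "dmard": False, "validated": False},
    {"ccp2": True,  "dmard": False, "validated": False},
    {"ccp2": False, "dmard": True,  "validated": False},
]

def evaluate(definition, cohort):
    """Return (PPV, sensitivity) of a case definition vs validated status."""
    tp = sum(definition(r) and r["validated"] for r in cohort)
    fp = sum(definition(r) and not r["validated"] for r in cohort)
    fn = sum(not definition(r) and r["validated"] for r in cohort)
    ppv = tp / (tp + fp) if tp + fp else None
    sens = tp / (tp + fn) if tp + fn else None
    return ppv, sens

loose = lambda r: r["ccp2"] or r["dmard"]    # inclusive: keeps seronegative cases
strict = lambda r: r["ccp2"] and r["dmard"]  # exclusive: higher PPV, misses them
```

On this toy cohort the strict definition reaches PPV 1.0 but captures only one of three validated cases, while the loose definition captures all three at a lower PPV, the same direction of trade-off the abstract describes.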
Variable Cycle Intake for Reverse Core Engine
NASA Technical Reports Server (NTRS)
Chandler, Jesse M (Inventor); Staubach, Joseph B (Inventor); Suciu, Gabriel L (Inventor)
2016-01-01
A gas generator for a reverse core engine propulsion system has a variable cycle intake for the gas generator, which variable cycle intake includes a duct system. The duct system is configured for being selectively disposed in a first position and a second position, wherein free stream air is fed to the gas generator when in the first position, and fan stream air is fed to the gas generator when in the second position.
Robles, José A; Qureshi, Sumaira E; Stephen, Stuart J; Wilson, Susan R; Burden, Conrad J; Taylor, Jennifer M
2012-09-17
RNA sequencing (RNA-Seq) has emerged as a powerful approach for the detection of differential gene expression with both high-throughput and high resolution capabilities possible depending upon the experimental design chosen. Multiplex experimental designs are now readily available; these can be utilised to increase the numbers of samples or replicates profiled at the cost of decreased sequencing depth generated per sample. These strategies impact on the power of the approach to accurately identify differential expression. This study presents a detailed analysis of the power to detect differential expression in a range of scenarios including simulated null and differential expression distributions with varying numbers of biological or technical replicates, sequencing depths and analysis methods. Differential and non-differential expression datasets were simulated using a combination of negative binomial and exponential distributions derived from real RNA-Seq data. These datasets were used to evaluate the performance of three commonly used differential expression analysis algorithms and to quantify the changes in power with respect to true and false positive rates when simulating variations in sequencing depth, biological replication and multiplex experimental design choices. This work quantitatively explores comparisons between contemporary analysis tools and experimental design choices for the detection of differential expression using RNA-Seq. We found that the DESeq algorithm performs more conservatively than edgeR and NBPSeq. With regard to testing of various experimental designs, this work strongly suggests that greater power is gained through the use of biological replicates relative to library (technical) replicates and sequencing depth. Strikingly, sequencing depth could be reduced to as low as 15% without substantial impacts on false positive or true positive rates.
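The negative binomial counts used in simulations like this are conveniently generated as a Gamma-mixed Poisson, the parameterisation assumed by DESeq and edgeR (mean mu, dispersion phi, variance mu + phi*mu²). A minimal sketch with illustrative parameters; this is not the authors' simulation code, and the depth-scaling knob is our simplification of their depth-reduction scenarios:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_counts(mean, dispersion, n, depth_factor=1.0):
    """Draw n negative-binomial read counts as a Gamma-Poisson mixture.
    depth_factor scales the expected count to mimic reduced sequencing depth."""
    mu = mean * depth_factor
    shape = 1.0 / dispersion               # Gamma shape parameter
    lam = rng.gamma(shape, mu / shape, n)  # per-replicate expression rates
    return rng.poisson(lam)                # overdispersed counts: var = mu + dispersion*mu**2
```

Setting `depth_factor=0.15` shrinks counts proportionally while leaving the dispersion model, and hence the relative power comparisons across replicate numbers, unchanged, which is the kind of scenario the study evaluates.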
Ireno, Ivanildce C; Baumann, Cindy; Stöber, Regina; Hengstler, Jan G; Wiesmüller, Lisa
2014-05-01
In vitro genotoxicity tests are known to suffer from several shortcomings, mammalian cell-based assays, in particular, from low specificities. Following a novel concept of genotoxicity detection, we developed a fluorescence-based method in living human cells. The assay quantifies DNA recombination events triggered by DNA double-strand breaks and damage-induced replication fork stalling predicted to detect a broad spectrum of genotoxic modes of action. To maximize sensitivities, we engineered a DNA substrate encompassing a chemoresponsive element from the human genome. Using this substrate, we screened various human tumor and non-transformed cell types differing in the DNA damage response, which revealed that detection of genotoxic carcinogens was independent of the p53 status but abrogated by apoptosis. Cell types enabling robust and sensitive genotoxicity detection were selected for the generation of reporter clones with chromosomally integrated DNA recombination substrate. Reporter cell lines were scrutinized with 21 compounds, stratified into five sets according to the established categories for identification of carcinogenic compounds: genotoxic carcinogens ("true positives"), non-genotoxic carcinogens, compounds without genotoxic or carcinogenic effect ("true negatives") and non-carcinogenic compounds, which have been reported to induce chromosomal aberrations or mutations in mammalian cell-based assays ("false positives"). Our results document detection of genotoxic carcinogens in independent cell clones and at levels of cellular toxicities <60 % with a sensitivity of >85 %, specificity of ≥90 % and detection of false-positive compounds <17 %. Importantly, through testing cyclophosphamide in combination with primary hepatocyte cultures, we additionally provide proof-of-concept for the identification of carcinogens requiring metabolic activation using this novel assay system.
Meinhardt, Sarah; Swint-Kruse, Liskin
2008-12-01
In protein families, conserved residues often contribute to a common general function, such as DNA-binding. However, unique attributes for each homolog (e.g. recognition of alternative DNA sequences) must arise from variation in other functionally-important positions. The locations of these "specificity determinant" positions are obscured amongst the background of varied residues that do not make significant contributions to either structure or function. To isolate specificity determinants, a number of bioinformatics algorithms have been developed. When applied to the LacI/GalR family of transcription regulators, several specificity determinants are predicted in the 18 amino acids that link the DNA-binding and regulatory domains. However, results from alternative algorithms are only in partial agreement with each other. Here, we experimentally evaluate these predictions using an engineered repressor comprising the LacI DNA-binding domain, the LacI linker, and the GalR regulatory domain (LLhG). "Wild-type" LLhG has altered DNA specificity and weaker lacO(1) repression compared to LacI or a similar LacI:PurR chimera. Next, predictions of linker specificity determinants were tested, using amino acid substitution and in vivo repression assays to assess functional change. In LLhG, all predicted sites are specificity determinants, as well as three sites not predicted by any algorithm. Strategies are suggested for diminishing the number of false negative predictions. Finally, individual substitutions at LLhG specificity determinants exhibited a broad range of functional changes that are not predicted by bioinformatics algorithms. Results suggest that some variants have altered affinity for DNA, some have altered allosteric response, and some appear to have changed specificity for alternative DNA ligands.
Koch, Hèlen; van Bokhoven, Marloes A; ter Riet, Gerben; van Alphen-Jager, Jm Tineke; van der Weijden, Trudy; Dinant, Geert-Jan; Bindels, Patrick J E
2009-04-01
Unexplained fatigue is frequently encountered in general practice. Because of the low prior probability of underlying somatic pathology, the positive predictive value of abnormal (blood) test results is limited in such patients. The study objectives were to investigate the relationship between established diagnoses and the occurrence of abnormal blood test results among patients with unexplained fatigue; to survey the effects of the postponement of test ordering on this relationship; and to explore consultation-related determinants of abnormal test results. Cluster randomised trial. General practices of 91 GPs in the Netherlands. GPs were randomised to immediate or postponed blood-test ordering. Patients with new unexplained fatigue were included. Limited and expanded sets of blood tests were ordered either immediately or after 4 weeks. Diagnoses during the 1-year follow-up period were extracted from medical records. Two-by-two tables were generated. To establish independent determinants of abnormal test results, a multivariate logistic regression model was used. Data of 325 patients were analysed (71% women; mean age 41 years). Eight per cent of patients had a somatic illness that was detectable by blood-test ordering. The number of false-positive test results increased in particular in the expanded test set. Patients rarely re-consulted after 4 weeks. Test postponement did not affect the distribution of patients over the two-by-two tables. No independent consultation-related determinants of abnormal test results were found. Results support restricting the number of tests ordered because of the increased risk of false-positive test results from expanded test sets. Although the number of re-consulting patients was small, the data do not refute the advice to postpone blood-test ordering for medical reasons in patients with unexplained fatigue in general practice.