Generalized site occupancy models allowing for false positive and false negative errors
Royle, J. Andrew; Link, W.A.
2006-01-01
Site occupancy models have been developed that allow for imperfect species detection or "false negative" observations. Such models have become widely adopted in surveys of many taxa. The most fundamental assumption underlying these models is that "false positive" errors are not possible. That is, one cannot detect a species where it does not occur. However, such errors are possible in many sampling situations for a number of reasons, and even low false positive error rates can induce extreme bias in estimates of site occupancy when they are not accounted for. In this paper, we develop a model for site occupancy that allows for both false negative and false positive error rates. This model can be represented as a two-component finite mixture model and can be easily fitted using freely available software. We provide an analysis of avian survey data using the proposed model and present results of a brief simulation study evaluating the performance of the maximum-likelihood estimator and the naive estimator in the presence of false positive errors.
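To make the mixture structure concrete, the following is a minimal sketch (our reconstruction, not the paper's code) of fitting such a model by maximum likelihood: detections at a site follow one binomial rate if the site is occupied and a second, false-positive rate if it is not. The names psi/p1/p0 and all numeric values are illustrative assumptions.

```python
# Minimal sketch, assuming simulated data, of a two-component finite
# mixture: y_i detections in J surveys arise with probability p1 at
# occupied sites (probability psi) and with the false-positive
# probability p0 at unoccupied sites.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

rng = np.random.default_rng(1)
J, n_sites = 5, 200
psi_t, p1_t, p0_t = 0.6, 0.7, 0.05                   # assumed true values
z = rng.random(n_sites) < psi_t                      # latent occupancy
y = binom.rvs(J, np.where(z, p1_t, p0_t), random_state=rng)

def nll(theta):
    psi, p1, p0 = 1.0 / (1.0 + np.exp(-theta))       # logit -> (0, 1)
    lik = psi * binom.pmf(y, J, p1) + (1.0 - psi) * binom.pmf(y, J, p0)
    return -np.sum(np.log(lik))

fit = minimize(nll, x0=np.array([0.0, 1.0, -1.0]), method="Nelder-Mead")
psi_h, p1_h, p0_h = 1.0 / (1.0 + np.exp(-fit.x))
# The ordering p1 > p0 needed for identifiability is not enforced here,
# so check the fitted components; the naive estimator below treats any
# detection as true presence.
naive = np.mean(y > 0)
print(f"psi={psi_h:.2f} p1={p1_h:.2f} p0={p0_h:.2f} naive psi={naive:.2f}")
```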
Experimental investigation of false positive errors in auditory species occurrence surveys
Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.
2012-01-01
False positive errors are a significant component of many ecological data sets, which, in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in ability from novice to expert, who recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors, and on average 8.1% of recorded detections in the experiment were false positives. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls, with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work demonstrating that false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to report only detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor of among-observer variation in observation error rates.
Statistical approaches to account for false-positive errors in environmental DNA samples.
Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid
2016-05-01
Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies.
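The scale of the bias described here is easy to verify with closed-form arithmetic. The sketch below (our construction; occupancy, per-replicate detection probability and replicate count are assumed values) computes what a naive estimator that scores a site as occupied whenever any PCR replicate is positive would converge to.

```python
# Back-of-envelope check: large-sample value of a naive occupancy
# estimator under a small per-replicate false-positive rate p_fp.
psi, p, J = 0.2, 0.8, 6                              # assumed values
for p_fp in (0.0, 0.01, 0.03, 0.05):
    det_occ = 1 - (1 - p) ** J                       # >=1 hit if occupied
    det_unocc = 1 - (1 - p_fp) ** J                  # >=1 false hit otherwise
    naive = psi * det_occ + (1 - psi) * det_unocc
    print(f"p_fp={p_fp:.2f}: naive estimate -> {naive:.3f} (truth {psi})")
```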
Linguistic Determinants of the Difficulty of True-False Test Items
Peterson, Candida C.; Peterson, James L.
1976-01-01
Adults read a prose passage and responded to test items based on it that were either true or false and phrased either affirmatively or negatively. True negatives yielded the most errors, followed in order by false negatives, true affirmatives, and false affirmatives.
Fairfield, Beth; Mammarella, Nicola; Di Domenico, Alberto; D'Aurora, Marco; Stuppia, Liborio; Gatta, Valentina
2017-08-30
False memories are common memory distortions in everyday life and seem to increase with affectively connoted complex information. In line with recent studies showing a significant interaction between the noradrenergic system and emotional memory, we investigated whether healthy volunteer carriers of the deletion variant of the ADRA2B gene that codes for the α2b-adrenergic receptor are more prone to false memories than non-carriers. In this study, we collected genotype data from 212 healthy female volunteers; 91 ADRA2B carriers and 121 non-carriers. To assess gene effects on false memories for affective information, factorial mixed model analysis of variances (ANOVAs) were conducted with genotype as the between-subjects factor and type of memory error as the within-subjects factor. We found that although carriers and non-carriers made comparable numbers of false memory errors, they showed differences in the direction of valence biases, especially for inferential causal errors. Specifically, carriers produced fewer causal false memory errors for scripts with a negative outcome, whereas non-carriers showed a more general emotional effect and made fewer causal errors with both positive and negative outcomes. These findings suggest that putatively higher levels of noradrenaline in deletion carriers may enhance short-term consolidation of negative information and lead to fewer memory distortions when facing negative events.
Trinh, Tony W; Glazer, Daniel I; Sadow, Cheryl A; Sahni, V Anik; Geller, Nina L; Silverman, Stuart G
2018-03-01
To determine test characteristics of CT urography for detecting bladder cancer in patients with hematuria and those undergoing surveillance, and to analyze reasons for false-positive and false-negative results. A HIPAA-compliant, IRB-approved retrospective review of reports from 1623 CT urograms between 10/2010 and 12/31/2013 was performed. 710 examinations for hematuria or bladder cancer history were compared to cystoscopy performed within 6 months. Reference standard was surgical pathology or 1-year minimum clinical follow-up. False-positive and false-negative examinations were reviewed to determine reasons for errors. Ninety-five bladder cancers were detected. CT urography accuracy was 91.5% (650/710), sensitivity 86.3% (82/95), specificity 92.4% (568/615), positive predictive value 63.6% (82/129), and negative predictive value 97.8% (568/581). Of 43 false positives, the majority of interpretation errors were due to benign prostatic hyperplasia (n = 12), trabeculated bladder (n = 9), and treatment changes (n = 8). Other causes included blood clots, mistaken normal anatomy, and infectious/inflammatory changes; some findings had no cystoscopic correlate. Of 13 false negatives, 11 were due to technique, one to a large urinary residual, one to artifact. There were no errors in perception. CT urography is an accurate test for diagnosing bladder cancer; however, in protocols relying predominantly on excretory phase images, overall sensitivity remains insufficient to obviate cystoscopy. Awareness of bladder cancer mimics may reduce false-positive results. Improvements in CTU technique may reduce false-negative results.
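The reported test characteristics follow directly from the four confusion-matrix counts given in the abstract (82 true positives, 13 false negatives, 568 true negatives, 47 false positives); the short check below reproduces them.

```python
# Reproducing the reported CT urography test characteristics from the
# confusion-matrix counts stated in the abstract.
tp, fn, tn, fp = 82, 13, 568, 47
print("sensitivity", tp / (tp + fn))                   # 82/95   = 86.3%
print("specificity", tn / (tn + fp))                   # 568/615 = 92.4%
print("PPV        ", tp / (tp + fp))                   # 82/129  = 63.6%
print("NPV        ", tn / (tn + fn))                   # 568/581 = 97.8%
print("accuracy   ", (tp + tn) / (tp + tn + fp + fn))  # 650/710 = 91.5%
```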
Paige F.B. Ferguson; Michael J. Conroy; Jeffrey Hepinstall-Cymerman; Nigel Yoccoz
2015-01-01
False positive detections, such as species misidentifications, occur in ecological data, although many models do not account for them. Consequently, these models are expected to generate biased inference. The main challenge in an analysis of data with false positives is to distinguish false positive and false negative...
Influence of ECG measurement accuracy on ECG diagnostic statements.
Zywietz, C; Celikag, D; Joseph, G
1996-01-01
Computer analysis of electrocardiograms (ECGs) provides a large amount of ECG measurement data, which may be used for diagnostic classification and storage in ECG databases. Until now, neither error limits for ECG measurements have been specified nor has their influence on diagnostic statements been systematically investigated. An analytical method is presented to estimate the influence of measurement errors on the accuracy of diagnostic ECG statements. Systematic (offset) errors will usually result in an increase of false positive or false negative statements, since they cause a shift of the working point on the receiver operating characteristics curve. Measurement error dispersion broadens the distribution function of discriminative measurement parameters and, therefore, usually increases the overlap between discriminative parameters. This results in a flattening of the receiver operating characteristics curve and an increase of false positive and false negative classifications. The method developed has been applied to ECG conduction defect diagnoses by using the proposed International Electrotechnical Commission's interval measurement tolerance limits. These limits appear too large because more than 30% of false positive atrial conduction defect statements and 10-18% of false intraventricular conduction defect statements could be expected due to tolerated measurement errors. To assure long-term usability of ECG measurement databases, it is recommended that systems provide their error tolerance limits obtained on a defined test set.
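The two effects described here can be illustrated with a toy Gaussian model of a single discriminative interval measurement; everything in the sketch below, including the millisecond values, is our illustrative assumption rather than the paper's method.

```python
# Toy model: with a fixed decision threshold, a systematic offset shifts
# the ROC working point (FPR up, FNR down here), while extra measurement
# dispersion inflates both error rates.
from scipy.stats import norm

mu_neg, mu_pos, sd_bio, thr = 100.0, 120.0, 8.0, 110.0   # assumed ms values
for offset, sd_meas in ((0.0, 0.0), (5.0, 0.0), (0.0, 6.0)):
    sd = (sd_bio ** 2 + sd_meas ** 2) ** 0.5
    fpr = norm.sf(thr, loc=mu_neg + offset, scale=sd)    # false positives
    fnr = norm.cdf(thr, loc=mu_pos + offset, scale=sd)   # false negatives
    print(f"offset={offset:+.0f} ms, extra sd={sd_meas:.0f} ms: "
          f"FPR={fpr:.3f}, FNR={fnr:.3f}")
```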
Accuracy and reliability of forensic latent fingerprint decisions
Ulery, Bradford T.; Hicklin, R. Austin; Buscaglia, JoAnn; Roberts, Maria Antonia
2011-01-01
The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. The National Research Council of the National Academies and the legal and forensic sciences communities have called for research to measure the accuracy and reliability of latent print examiners' decisions, a challenging and complex problem in need of systematic analysis. Our research is focused on the development of empirical approaches to studying this problem. Here, we report on the first large-scale study of the accuracy and reliability of latent print examiners' decisions, in which 169 latent print examiners each compared approximately 100 pairs of latent and exemplar fingerprints from a pool of 744 pairs. The fingerprints were selected to include a range of attributes and quality encountered in forensic casework, and to be comparable to searches of an automated fingerprint identification system containing more than 58 million subjects. This study evaluated examiners on key decision points in the fingerprint examination process; procedures used operationally include additional safeguards designed to minimize errors. Five examiners made false positive errors for an overall false positive rate of 0.1%. Eighty-five percent of examiners made at least one false negative error for an overall false negative rate of 7.5%. Independent examination of the same comparisons by different participants (analogous to blind verification) was found to detect all false positive errors and the majority of false negative errors in this study. Examiners frequently differed on whether fingerprints were suitable for reaching a conclusion.
False Memories for Affective Information in Schizophrenia.
Fairfield, Beth; Altamura, Mario; Padalino, Flavia A; Balzotti, Angela; Di Domenico, Alberto; Mammarella, Nicola
2016-01-01
Studies have shown a direct link between memory for emotionally salient experiences and false memories. In particular, emotionally arousing material of negative and positive valence enhanced reality monitoring compared to neutral material, since emotional stimuli can be encoded with more contextual details, thereby facilitating the distinction between presented and imagined stimuli. Individuals with schizophrenia appear to be impaired in both reality monitoring and memory for emotional experiences. However, the relationship between the emotionality of the to-be-remembered material and false memory occurrence has not yet been studied. In this study, 24 patients and 24 healthy adults completed a false memory task with everyday episodes composed of 12 photographs that depicted positive, negative, or neutral outcomes. Results showed that patients with schizophrenia made a higher number of false memories than normal controls (p < 0.05) when remembering episodes with positive or negative outcomes. The effect of valence was apparent in the patient group. For example, it did not affect the production of causal false memories (p > 0.05) resulting from erroneous inferences, but did interact with plausible, script-consistent errors in patients (i.e., neutral episodes yielded a higher degree of errors than positive and negative episodes). Affective information reduces the probability of generating causal errors in healthy adults but not in patients, suggesting that emotional memory impairments may contribute to deficits in reality monitoring in schizophrenia when affective information is involved.
McClintock, Brett T.; Bailey, Larissa L.; Pollock, Kenneth H.; Simons, Theodore R.
2010-01-01
The recent surge in the development and application of species occurrence models has been associated with an acknowledgment among ecologists that species are detected imperfectly due to observation error. Standard models now allow unbiased estimation of occupancy probability when false negative detections occur, but this is conditional on no false positive detections and sufficient incorporation of explanatory variables for the false negative detection process. These assumptions are likely reasonable in many circumstances, but there is mounting evidence that false positive errors and detection probability heterogeneity may be much more prevalent in studies relying on auditory cues for species detection (e.g., songbird or calling amphibian surveys). We used field survey data from a simulated calling anuran system of known occupancy state to investigate the biases induced by these errors in dynamic models of species occurrence. Despite the participation of expert observers in simplified field conditions, both false positive errors and site detection probability heterogeneity were extensive for most species in the survey. We found that even low levels of false positive errors, constituting as little as 1% of all detections, can cause severe overestimation of site occupancy, colonization, and local extinction probabilities. Further, unmodeled detection probability heterogeneity induced substantial underestimation of occupancy and overestimation of colonization and local extinction probabilities. Completely spurious relationships between species occurrence and explanatory variables were also found. Such misleading inferences would likely have deleterious implications for conservation and management programs. We contend that all forms of observation error, including false positive errors and heterogeneous detection probabilities, must be incorporated into the estimation framework to facilitate reliable inferences about occupancy and its associated vital rate parameters.
Mirandola, Chiara; Toffalini, Enrico; Grassano, Massimo; Cornoldi, Cesare; Melinder, Annika
2014-01-01
The present experiment was conducted to investigate whether negative emotionally charged and arousing content of to-be-remembered scripted material would affect propensity towards memory distortions. We further investigated whether elaboration of the studied material through free recall would affect the magnitude of memory errors. In this study participants saw eight scripts. Each of the scripts included an effect of an action, the cause of which was not presented. Effects were either negatively emotional or neutral. Participants were assigned to either a yes/no recognition test group (recognition), or to a recall and yes/no recognition test group (elaboration + recognition). Results showed that participants in the recognition group produced fewer memory errors in the emotional condition. Conversely, elaboration + recognition participants had lower accuracy and produced more emotional memory errors than the other group, suggesting a mediating role of semantic elaboration on the generation of false memories. The role of emotions and semantic elaboration on the generation of false memories is discussed.
Cognitive errors: thinking clearly when it could be child maltreatment.
Laskey, Antoinette L
2014-10-01
Cognitive errors have been studied in a broad array of fields, including medicine. The more that is understood about how the human mind processes complex information, the more it becomes clear that certain situations are particularly susceptible to less than optimal outcomes because of these errors. This article explores how some of the known cognitive errors may influence the diagnosis of child abuse, resulting in both false-negative and false-positive diagnoses. Suggested remedies for these errors are offered.
Franson, J.C.; Hohman, W.L.; Moore, J.L.; Smith, M.R.
1996-01-01
We used 363 blood samples collected from wild canvasback ducks (Aythya valisineria) at Catahoula Lake, Louisiana, U.S.A. to evaluate the effect of sample storage time on the efficacy of erythrocytic protoporphyrin as an indicator of lead exposure. The protoporphyrin concentration of each sample was determined by hematofluorometry within 5 min of blood collection and after refrigeration at 4 °C for 24 and 48 h. All samples were analyzed for lead by atomic absorption spectrophotometry. Based on a blood lead concentration of ≥0.2 ppm wet weight as positive evidence for lead exposure, the protoporphyrin technique resulted in overall error rates of 29%, 20%, and 19% and false negative error rates of 47%, 29% and 25% when hematofluorometric determinations were made on blood at 5 min, 24 h, and 48 h, respectively. False positive error rates were less than 10% for all three measurement times. The accuracy of the 24-h erythrocytic protoporphyrin classification of blood samples as positive or negative for lead exposure was significantly greater than the 5-min classification, but no improvement in accuracy was gained when samples were tested at 48 h. The false negative errors were probably due, at least in part, to the lag time between lead exposure and the increase of blood protoporphyrin concentrations. False negatives resulted in an underestimation of the true number of canvasbacks exposed to lead, indicating that hematofluorometry provides a conservative estimate of lead exposure.
Ruiz-Gutierrez, Viviana; Hooten, Melvin B.; Campbell Grant, Evan H.
2016-01-01
Biological monitoring programmes are increasingly relying upon large volumes of citizen-science data to improve the scope and spatial coverage of information, challenging the scientific community to develop design and model-based approaches to improve inference. Recent statistical models in ecology have been developed to accommodate false-negative errors, although current work points to false-positive errors as equally important sources of bias. This is of particular concern for the success of any monitoring programme given that rates as small as 3% could lead to the overestimation of the occurrence of rare events by as much as 50%, and even small false-positive rates can severely bias estimates of occurrence dynamics. We present an integrated, computationally efficient Bayesian hierarchical model to correct for false-positive and false-negative errors in detection/non-detection data. Our model combines independent, auxiliary data sources with field observations to improve the estimation of false-positive rates, when a subset of field observations cannot be validated a posteriori or assumed as perfect. We evaluated the performance of the model across a range of occurrence rates, false-positive and false-negative errors, and quantity of auxiliary data. The model performed well under all simulated scenarios, and we were able to identify critical auxiliary data characteristics which resulted in improved inference. We applied our false-positive model to a large-scale, citizen-science monitoring programme for anurans in the north-eastern United States, using auxiliary data from an experiment designed to estimate false-positive error rates. Not correcting for false-positive rates resulted in biased estimates of occupancy in 4 of the 10 anuran species we analysed, leading to an overestimation of the average number of occupied survey routes by as much as 70%. The framework we present for data collection and analysis is able to efficiently provide reliable inference for occurrence patterns using data from a citizen-science monitoring programme. However, our approach is applicable to data generated by any type of research and monitoring programme, independent of skill level or scale, when effort is placed on obtaining auxiliary information on false-positive rates.
How does negative emotion cause false memories?
Brainerd, C J; Stein, L M; Silveira, R A; Rohenkohl, G; Reyna, V F
2008-09-01
Remembering negative events can stimulate high levels of false memory, relative to remembering neutral events. In experiments in which the emotional valence of encoded materials was manipulated with their arousal levels controlled, valence produced a continuum of memory falsification. Falsification was highest for negative materials, intermediate for neutral materials, and lowest for positive materials. Conjoint-recognition analysis produced a simple process-level explanation: As one progresses from positive to neutral to negative valence, false memory increases because (a) the perceived meaning resemblance between false and true items increases and (b) subjects are less able to use verbatim memories of true items to suppress errors.
Positive events protect children from causal false memories for scripted events.
Melinder, Annika; Toffalini, Enrico; Geccherle, Eleonora; Cornoldi, Cesare
2017-11-01
Adults produce fewer inferential false memories for scripted events when their conclusions are emotionally charged than when they are neutral, but it is not clear whether the same effect is also found in children. In the present study, we examined this issue in a sample of 132 children aged 6-12 years (mean 9 years, 3 months). Participants encoded photographs depicting six script-like events that had a positively, negatively, or neutrally valenced ending. Subsequently, true and false recognition memory of photographs related to the observed scripts was tested as a function of emotionality. Causal errors, a type of false memory thought to stem from inferential processes, were found to be affected by valence: children made fewer causal errors for positive than for neutral or negative events. Hypotheses are proposed to explain why adults, when administered similar versions of the same paradigm, were protected against inferential false memories not only by positive endings (as children were) but also by negative endings.
Experimental investigation of observation error in anuran call surveys
McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.
2010-01-01
Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than are sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys.
Characterisation of false-positive observations in botanical surveys
2017-01-01
Errors in botanical surveying are a common problem. The presence of a species is easily overlooked, leading to false absences, while misidentifications and other mistakes lead to false-positive observations. While it is common knowledge that these errors occur, there are few data that can be used to quantify and describe them. Here we characterise false-positive errors for a controlled set of surveys conducted as part of a field identification test of botanical skill. Surveys were conducted at sites with a verified list of vascular plant species. The candidates were asked to list all the species they could identify in a defined botanically rich area. They were told beforehand that their final score would be the sum of the correct species they listed, but that false-positive errors would count against their overall grade. The number of errors varied considerably between people: some produced a high proportion of false-positive errors, and such people were scattered across all skill levels. Therefore, a person's ability to correctly identify a large number of species is not a safeguard against the generation of false-positive errors. There was no phylogenetic pattern to falsely observed species; however, rare species were more likely to be falsely recorded, as were species from species-rich genera. Raising the threshold for the acceptance of an observation reduced false-positive observations dramatically, but at the expense of more false-negative errors. False-positive errors are more common in field surveying of plants than many people may appreciate. Greater stringency is required before accepting species as present at a site, particularly for rare species. Combining multiple surveys resolves the problem, but requires a considerable increase in effort to achieve the same sensitivity as a single survey. Therefore, other methods should be used to raise the threshold for the acceptance of a species. For example, digital data input systems that can verify, feed back and inform the user are likely to reduce false-positive errors significantly.
A comparison of acoustic monitoring methods for common anurans of the northeastern United States
Brauer, Corinne; Donovan, Therese; Mickey, Ruth M.; Katz, Jonathan; Mitchell, Brian R.
2016-01-01
Many anuran monitoring programs now include autonomous recording units (ARUs). These devices collect audio data for extended periods of time with little maintenance and at sites where traditional call surveys might be difficult. Additionally, computer software programs have grown increasingly accurate at automatically identifying the calls of species. However, increased automation may cause increased error. We collected 435 min of audio data with 2 types of ARUs at 10 wetland sites in Vermont and New York, USA, from 1 May to 1 July 2010. For each minute, we determined presence or absence of 4 anuran species (Hyla versicolor, Pseudacris crucifer, Anaxyrus americanus, and Lithobates clamitans) using 1) traditional human identification versus 2) computer-mediated identification with software package, Song Scope® (Wildlife Acoustics, Concord, MA). Detections were compared with a data set consisting of verified calls in order to quantify false positive, false negative, true positive, and true negative rates. Multinomial logistic regression analysis revealed a strong (P < 0.001) 3-way interaction between the ARU recorder type, identification method, and focal species, as well as a trend in the main effect of rain (P = 0.059). Overall, human surveyors had the lowest total error rate (<2%) compared with 18–31% total errors with automated methods. Total error rates varied by species, ranging from 4% for A. americanus to 26% for L. clamitans. The presence of rain may reduce false negative rates. For survey minutes where anurans were known to be calling, the odds of a false negative were increased when fewer individuals of the same species were calling.
Reducing false negatives in clinical practice: the role of neural network technology.
Mango, L J
1996-10-01
The fact that some cervical smears result in false-negative findings is an unavoidable and unpredictable consequence of the conventional (manual microscopic) method of screening. Errors in the detection and interpretation of abnormality are cited as leading causes of false-negative cytology findings; these are random errors that are not known to correlate with any patient risk factor, which makes the false-negative findings a "silent" threat that is difficult to prevent. Described by many as a labor-intensive procedure, the microscopic evaluation of a cervical smear involves a detailed search among hundreds of thousands of cells on each smear for a possible few that may indicate abnormality. Investigations into causes of false-negative findings preceding the discovery of high-grade lesions found that many smears had very few diagnostic cells that were often very small in size. These small cells were initially overlooked or misinterpreted and repeatedly missed on rescreening. PAPNET testing is designed to supplement conventional screening by detecting abnormal cells that initially may have been missed by microscopic examination. This interactive system uses neural networks, a type of artificial intelligence well suited for pattern recognition, to automate the arduous search for abnormality. The instrument focuses the review of suspicious cells by a trained cytologist. Clinical studies indicate that PAPNET testing is sensitive to abnormality typically missed by conventional screening and that its use as a supplemental test improves the accuracy of screening.
Pietrzyk, Mariusz W.; Donovan, Tim; Brennan, Patrick C.; Dix, Alan; Manning, David J.
2011-03-01
Aim: To optimize automated classification of radiological errors during lung nodule detection from chest radiographs (CxR) using a support vector machine (SVM) run on the spatial frequency features extracted from the local background of selected regions. Background: The majority of unreported pulmonary nodules are visually detected but not recognized, as shown by the prolonged dwell time values at false-negative regions. Similarly, overestimated nodule locations capture substantial amounts of foveal attention. Spatial frequency properties of selected local backgrounds are correlated with human observer responses, either in terms of accuracy in indicating abnormality position or in the precision of visual sampling of the medical images. Methods: Seven radiologists participated in eye tracking experiments conducted under conditions of pulmonary nodule detection from a set of 20 postero-anterior CxR. The most dwelled locations were identified and subjected to spatial frequency (SF) analysis. The image-based features of selected ROIs were extracted with an un-decimated Wavelet Packet Transform. An analysis of variance was run to select SF features, and an SVM schema was implemented to classify False-Negative and False-Positive from all ROIs. Results: A relatively high overall accuracy was obtained for each individually developed Wavelet-SVM algorithm, with over 90% average correct ratio for error recognition from all prolonged dwell locations. Conclusion: The preliminary results show that combined eye-tracking and image-based features can be used for automated detection of radiological errors with SVM. The work is still in progress, and not all analytical procedures have been completed, which might have an effect on the specificity of the algorithm.
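A schematic reconstruction of this pipeline is sketched below under stated assumptions: wavelet-packet energy features computed from dwelled regions of interest feed an SVM that separates false-negative from false-positive locations. pywt's decimated WaveletPacket2D stands in for the paper's un-decimated transform, and the data, labels and parameter choices are placeholders, not the study's.

```python
# Sketch: wavelet-packet node energies as spatial-frequency features,
# classified with an RBF SVM. All inputs are synthetic placeholders.
import numpy as np
import pywt
from sklearn.svm import SVC

def sf_features(roi, wavelet="db4", level=2):
    """Energy of each 2-D wavelet-packet node as a spatial-frequency feature."""
    wp = pywt.WaveletPacket2D(data=roi, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(node.data ** 2) for node in wp.get_level(level)])

rng = np.random.default_rng(0)
rois = rng.normal(size=(40, 64, 64))            # placeholder ROI patches
X = np.stack([sf_features(r) for r in rois])
y = rng.integers(0, 2, size=40)                 # 0 = FN region, 1 = FP region
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```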
Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations.
Zala, Sarah M; Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J
2017-01-01
House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4-12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a 'gold standard' reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community.
Wu, Zhijin; Liu, Dongmei; Sui, Yunxia
2008-02-01
The process of identifying active targets (hits) in high-throughput screening (HTS) usually involves 2 steps: first, removing or adjusting for systematic variation in the measurement process so that extreme values represent strong biological activity instead of systematic biases such as plate effect or edge effect and, second, choosing a meaningful cutoff on the calculated statistic to declare positive compounds. Both false-positive and false-negative errors are inevitable in this process. Common control or estimation of error rates is often based on an assumption of normal distribution of the noise. The error rates in hit detection, especially false-negative rates, are hard to verify because in most assays, only compounds selected in primary screening are followed up in confirmation experiments. In this article, the authors take advantage of a quantitative HTS experiment in which all compounds are tested 42 times over a wide range of 14 concentrations so true positives can be found through a dose-response curve. Using the activity status defined by dose curve, the authors analyzed the effect of various data-processing procedures on the sensitivity and specificity of hit detection, the control of error rate, and hit confirmation. A new summary score is proposed and demonstrated to perform well in hit detection and useful in confirmation rate estimation. In general, adjusting for positional effects is beneficial, but a robust test can prevent overadjustment. Error rates estimated based on the normal assumption do not agree with actual error rates, because the tails of the noise distribution deviate from normal. However, false discovery rate based on an empirically estimated null distribution is very close to the observed false discovery proportion.
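A hedged sketch of the kind of empirical false-discovery-rate estimate this abstract argues for follows: the null distribution of the hit score is estimated from pseudo-replicates (e.g., permutations or negative-control wells) rather than assumed normal. This is our construction, and all sizes and thresholds are illustrative.

```python
# Plug-in empirical FDR at threshold t: expected null exceedances per
# pseudo-replicate, divided by the number of observed discoveries.
import numpy as np

def empirical_fdr(scores, null_scores, B, t):
    expected_fp = np.sum(null_scores >= t) / B    # null exceedances / replicate
    discoveries = max(int(np.sum(scores >= t)), 1)
    return min(expected_fp / discoveries, 1.0)

rng = np.random.default_rng(0)
scores = np.concatenate([rng.standard_t(df=5, size=9800),   # inactive, heavy-tailed
                         rng.normal(4.0, 1.0, size=200)])   # assumed true hits
null_scores = rng.standard_t(df=5, size=9800 * 10)          # B = 10 replicates
print(empirical_fdr(scores, null_scores, B=10, t=3.5))
```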
Ly, Thomas; Pamer, Carol; Dang, Oanh; Brajovic, Sonja; Haider, Shahrukh; Botsis, Taxiarchis; Milward, David; Winter, Andrew; Lu, Susan; Ball, Robert
2018-05-31
The FDA Adverse Event Reporting System (FAERS) is a primary data source for identifying unlabeled adverse events (AEs) in a drug or biologic drug product's postmarketing phase. Many AE reports must be reviewed by drug safety experts to identify unlabeled AEs, even if the reported AEs are previously identified, labeled AEs. Integrating the labeling status of drug product AEs into FAERS could increase report triage and review efficiency. Medical Dictionary for Regulatory Activities (MedDRA) is the standard for coding AE terms in FAERS cases. However, drug manufacturers are not required to use MedDRA to describe AEs in product labels. We hypothesized that natural language processing (NLP) tools could assist in automating the extraction and MedDRA mapping of AE terms in drug product labels. We evaluated the performance of three NLP systems, (ETHER, I2E, MetaMap) for their ability to extract AE terms from drug labels and translate the terms to MedDRA Preferred Terms (PTs). Pharmacovigilance-based annotation guidelines for extracting AE terms from drug labels were developed for this study. We compared each system's output to MedDRA PT AE lists, manually mapped by FDA pharmacovigilance experts using the guidelines, for ten drug product labels known as the "gold standard AE list" (GSL) dataset. Strict time and configuration conditions were imposed in order to test each system's capabilities under conditions of no human intervention and minimal system configuration. Each NLP system's output was evaluated for precision, recall and F measure in comparison to the GSL. A qualitative error analysis (QEA) was conducted to categorize a random sample of each NLP system's false positive and false negative errors. A total of 417, 278, and 250 false positive errors occurred in the ETHER, I2E, and MetaMap outputs, respectively. A total of 100, 80, and 187 false negative errors occurred in ETHER, I2E, and MetaMap outputs, respectively. Precision ranged from 64% to 77%, recall from 64% to 83% and F measure from 67% to 79%. I2E had the highest precision (77%), recall (83%) and F measure (79%). ETHER had the lowest precision (64%). MetaMap had the lowest recall (64%). The QEA found that the most prevalent false positive errors were context errors such as "Context error/General term", "Context error/Instructions or monitoring parameters", "Context error/Medical history preexisting condition underlying condition risk factor or contraindication", and "Context error/AE manifestations or secondary complication". The most prevalent false negative errors were in the "Incomplete or missed extraction" error category. Missing AE terms were typically due to long terms, or terms containing non-contiguous words which do not correspond exactly to MedDRA synonyms. MedDRA mapping errors were a minority of errors for ETHER and I2E but were the most prevalent false positive errors for MetaMap. The results demonstrate that it may be feasible to use NLP tools to extract and map AE terms to MedDRA PTs. However, the NLP tools we tested would need to be modified or reconfigured to lower the error rates to support their use in a regulatory setting. Tools specific for extracting AE terms from drug labels and mapping the terms to MedDRA PTs may need to be developed to support pharmacovigilance. Conducting research using additional NLP systems on a larger, diverse GSL would also be informative.
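For reference, the evaluation metrics quoted above combine true-positive, false-positive and false-negative counts as follows; the true-positive count in this sketch is a placeholder, since the abstract reports only FP and FN totals.

```python
# Precision/recall/F-measure arithmetic used in the evaluation above.
def prf(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

print(prf(tp=400, fp=278, fn=80))   # I2E error counts; tp=400 is assumed
```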
Controlling false-negative errors in microarray differential expression analysis: a PRIM approach.
Cole, Steve W; Galic, Zoran; Zack, Jerome A
2003-09-22
Theoretical considerations suggest that current microarray screening algorithms may fail to detect many true differences in gene expression (Type II analytic errors). We assessed 'false negative' error rates in differential expression analyses by conventional linear statistical models (e.g. t-test), microarray-adapted variants (e.g. SAM, Cyber-T), and a novel strategy based on hold-out cross-validation. The latter approach employs the machine-learning algorithm Patient Rule Induction Method (PRIM) to infer minimum thresholds for reliable change in gene expression from Boolean conjunctions of fold-induction and raw fluorescence measurements. Monte Carlo analyses based on four empirical data sets show that conventional statistical models and their microarray-adapted variants overlook more than 50% of genes showing significant up-regulation. Conjoint PRIM prediction rules recover approximately twice as many differentially expressed transcripts while maintaining strong control over false-positive (Type I) errors. As a result, experimental replication rates increase and total analytic error rates decline. RT-PCR studies confirm that gene inductions detected by PRIM but overlooked by other methods represent true changes in mRNA levels. PRIM-based conjoint inference rules thus represent an improved strategy for high-sensitivity screening of DNA microarrays. Freestanding JAVA application at http://microarray.crump.ucla.edu/focus
Follow-up of negative MRI-targeted prostate biopsies: when are we missing cancer?
Gold, Samuel A; Hale, Graham R; Bloom, Jonathan B; Smith, Clayton P; Rayn, Kareem N; Valera, Vladimir; Wood, Bradford J; Choyke, Peter L; Turkbey, Baris; Pinto, Peter A
2018-05-21
Multiparametric magnetic resonance imaging (mpMRI) has improved clinicians' ability to detect clinically significant prostate cancer (csPCa). Combining or fusing these images with the real-time imaging of transrectal ultrasound (TRUS) allows urologists to better sample lesions with a targeted biopsy (Tbx), leading to the detection of greater rates of csPCa and decreased rates of low-risk PCa. In this review, we evaluate the technical aspects of the mpMRI-guided Tbx procedure to identify possible sources of error and provide clinical context to a negative Tbx. A literature search was conducted on possible reasons for false-negative Tbx. This includes discussion of false-positive mpMRI findings, termed "PCa mimics," that may incorrectly suggest high likelihood of csPCa, as well as errors during Tbx resulting in inexact image fusion or biopsy needle placement. Despite the strong negative predictive value associated with Tbx, concerns of missed disease often remain, especially with MR-visible lesions. This raises questions about what to do next after a negative Tbx result. Potential sources of error can arise from each step in the targeted biopsy process, ranging from "PCa mimics" or technical errors during mpMRI acquisition to failure to properly register MRI and TRUS images on a fusion biopsy platform to technical or anatomic limits on needle placement accuracy. A better understanding of these potential pitfalls in the mpMRI-guided Tbx procedure will aid interpretation of a negative Tbx, identify areas for improving technical proficiency, and improve both physician understanding of negative Tbx and patient-management options.
Comparing diagnostic tests on benefit-risk.
Pennello, Gene; Pantoja-Galicia, Norberto; Evans, Scott
2016-01-01
Comparing diagnostic tests on accuracy alone can be inconclusive. For example, a test may have better sensitivity than another test yet worse specificity. Comparing tests on benefit-risk may be more conclusive because the clinical consequences of diagnostic error are considered. For benefit-risk evaluation, we propose diagnostic yield, the expected distribution of subjects with true positive, false positive, true negative, and false negative test results in a hypothetical population. We construct a table of diagnostic yield that includes the number of false positive subjects experiencing adverse consequences from unnecessary work-up. We then develop a decision theory for evaluating tests. The theory provides additional interpretation to quantities in the diagnostic yield table. It also indicates that the expected utility of a test relative to a perfect test is a weighted accuracy measure: the average of sensitivity and specificity weighted for prevalence and the relative importance of false positive and false negative testing errors, also interpretable as the cost-benefit ratio of treating non-diseased and diseased subjects. We propose plots of diagnostic yield, weighted accuracy, and relative net benefit of tests as functions of prevalence or cost-benefit ratio. Concepts are illustrated with hypothetical screening tests for colorectal cancer, with test-positive subjects being referred to colonoscopy.
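Following the abstract's description, a minimal sketch of weighted accuracy is given below: sensitivity and specificity averaged with weights reflecting prevalence and r, the importance of a false positive relative to a false negative. The formulation and all numeric inputs are our illustrative assumptions, not the paper's exact quantities.

```python
# Weighted accuracy: prevalence- and cost-weighted average of Se and Sp.
def weighted_accuracy(se, sp, prev, r):
    w_pos = prev                 # weight on detecting diseased subjects
    w_neg = (1.0 - prev) * r     # weight on sparing non-diseased subjects
    return (w_pos * se + w_neg * sp) / (w_pos + w_neg)

# Two hypothetical colorectal-cancer screening tests at 2% prevalence:
for name, se, sp in (("test A", 0.90, 0.80), ("test B", 0.75, 0.95)):
    print(name, round(weighted_accuracy(se, sp, prev=0.02, r=0.1), 3))
```

With these assumed weights, the more specific test B scores higher even though test A is more sensitive, illustrating how prevalence and error costs can reverse a comparison made on sensitivity alone.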
An investigation into false-negative transthoracic fine needle aspiration and core biopsy specimens.
Minot, Douglas M; Gilman, Elizabeth A; Aubry, Marie-Christine; Voss, Jesse S; Van Epps, Sarah G; Tuve, Delores J; Sciallis, Andrew P; Henry, Michael R; Salomao, Diva R; Lee, Peter; Carlson, Stephanie K; Clayton, Amy C
2014-12-01
Transthoracic fine needle aspiration (TFNA)/core needle biopsy (CNB) under computed tomography (CT) guidance has proved useful in the assessment of pulmonary nodules. We sought to determine the TFNA false-negative (FN) rate at our institution and identify potential causes of FN diagnoses. Medical records were reviewed from 1,043 consecutive patients who underwent CT-guided TFNA with or without CNB of lung nodules over a 5-year period (2003-2007). Thirty-seven FN cases of "negative" TFNA/CNB with malignant outcome were identified, with 36 cases available for review, of which 35 had a corresponding CNB. Cases were reviewed independently (blinded to original diagnosis) by three pathologists, with 15 age- and sex-matched positive and negative controls. Diagnosis (i.e., nondiagnostic, negative or positive for malignancy, atypical or suspicious) and qualitative assessments were recorded. Consensus diagnosis was suspicious or positive in 10 (28%) of 36 TFNA cases and suspicious in 1 (3%) of 35 CNB cases, indicating potential interpretive errors. Of the 11 interpretive errors (including both suspicious and positive cases), 8 were adenocarcinomas, 1 squamous cell carcinoma, 1 metastatic renal cell carcinoma, and 1 lymphoma. The remaining 25 FN cases (69.4%) were considered sampling errors and consisted of 7 adenocarcinomas, 3 nonsmall cell carcinomas, 3 lymphomas, 2 squamous cell carcinomas, and 2 renal cell carcinomas. Interpretive and sampling error cases were more likely to abut the pleura; histopathologically, they tended to be necrotic and air-dried. The overall FN rate in this patient cohort is 3.5% (1.1% interpretive and 2.4% sampling errors).
Graff, L; Russell, J; Seashore, J; Tate, J; Elwell, A; Prete, M; Werdmann, M; Maag, R; Krivenko, C; Radford, M
2000-11-01
To test the hypothesis that physician errors (failure to diagnose appendicitis at initial evaluation) correlate with adverse outcome. The authors also postulated that physician errors would correlate with delays in surgery, delays in surgery would correlate with adverse outcomes, and physician errors would occur on patients with atypical presentations. This was a retrospective two-arm observational cohort study at 12 acute care hospitals: 1) consecutive patients who had an appendectomy for appendicitis and 2) consecutive emergency department abdominal pain patients. Outcome measures were adverse events (perforation, abscess) and physician diagnostic performance (false-positive decisions, false-negative decisions). The appendectomy arm of the study included 1,026 patients with 110 (10.5%) false-positive decisions (range by hospital 4.7% to 19.5%). Of the 916 patients with appendicitis, 170 (18.6%) false-negative decisions were made (range by hospital 10.6% to 27.8%). Patients who had false-negative decisions had increased risks of perforation (r = 0.59, p = 0.058) and of abscess formation (r = 0.81, p = 0.002). For admitted patients, when the inhospital delay before surgery was >20 hours, the risk of perforation was increased [2.9 odds ratio (OR) 95% CI = 1.8 to 4.8]. The amount of delay from initial physician evaluation until surgery varied with physician diagnostic performance: 7.0 hours (95% CI = 6.7 to 7.4) if the initial physician made the diagnosis, 72.4 hours (95% CI = 51.2 to 93.7) if the initial office physician missed the diagnosis, and 63.1 hours (95% CI = 47.9 to 78.4) if the initial emergency physician missed the diagnosis. Patients whose diagnosis was initially missed by the physician had fewer signs and symptoms of appendicitis than patients whose diagnosis was made initially [appendicitis score 2.0 (95% CI = 1.6 to 2.3) vs 6.5 (95% CI = 6.4 to 6.7)]. Older patients (>41 years old) had more false-negative decisions and a higher risk of perforation or abscess (3.5 OR 95% CI = 2.4 to 5.1). False-positive decisions were made for patients who had signs and symptoms similar to those of appendicitis patients [appendicitis score 5.7 (95% CI = 5.2 to 6.1) vs 6.5 (95% CI = 6.4 to 6.7)]. Female patients had an increased risk of false-positive surgery (2.3 OR 95% CI = 1.5 to 3.4). The abdominal pain arm of the study included 1,118 consecutive patients submitted by eight hospitals, with 44 patients having appendicitis. Hospitals with observation units compared with hospitals without observation units had a higher "rule out appendicitis" evaluation rate [33.7% (95% CI = 27 to 38) vs 24.7% (95% CI = 23 to 27)] and a similar hospital admission rate (27.6% vs 24.7%, p = NS). There was a lower missed-diagnosis rate (15.1% vs 19.4%, p = NS power 0.02), lower perforation rate (19.0% vs 20.6%, p = NS power 0.05), and lower abscess rate (5.6% vs 6.9%, p = NS power 0.06), but these did not reach statistical significance. Errors in physician diagnostic decisions correlated with patient clinical findings, i.e., the missed diagnoses were on appendicitis patients with few clinical findings and unnecessary surgeries were on non-appendicitis patients with clinical findings similar to those of patients with appendicitis. Adverse events (perforation, abscess formation) correlated with physician false-negative decisions.
Severo, Milton; Silva-Pereira, Fernanda; Ferreira, Maria Amelia
2013-01-01
Several studies have shown that the standard error of measurement (SEM) can be used as an additional “safety net” to reduce the frequency of false-positive or false-negative student grading classifications. Practical examinations in clinical anatomy are often used as diagnostic tests to admit students to course final examinations. The aim of this…
1974-05-01
A resting 'normal' ECG can coexist with known angina pectoris, positive angiocardiography and previous myocardial infarction. In contemporary exercise ECG tests, a false positive/false negative total error of 10% is not unusual. Research aimed at imp...
Negative feedback from maternal signals reduces false alarms by collectively signalling offspring.
Hamel, Jennifer A; Cocroft, Reginald B
2012-09-22
Within animal groups, individuals can learn of a predator's approach by attending to the behaviour of others. This use of social information increases an individual's perceptual range, but can also lead to the propagation of false alarms. Error copying is especially likely in species that signal collectively, because the coordination required for collective displays relies heavily on social information. Recent evidence suggests that collective behaviour in animals is, in part, regulated by negative feedback. Negative feedback may reduce false alarms by collectively signalling animals, but this possibility has not yet been tested. We tested the hypothesis that negative feedback increases the accuracy of collective signalling by reducing the production of false alarms. In the treehopper Umbonia crassicornis, clustered offspring produce collective signals during predator attacks, advertising the predator's location to the defending mother. Mothers signal after evicting the predator, and we show that this maternal communication reduces false alarms by offspring. We suggest that maternal signals elevate offspring signalling thresholds. This is, to our knowledge, the first study to show that negative feedback can reduce false alarms by collectively behaving groups.
Leung, Brian; Chau, Tom
2014-03-08
The combination of single-switch access technology and scanning is the most promising means of augmentative and alternative communication for many children with severe physical disabilities. However, the physical impairment of the child and the technology's limited ability to interpret the child's intentions often lead to false positives and negatives (corresponding to accidental and missed selections, respectively) occurring at rates that frustrate the user and preclude functional communication. Multiple psychophysiological studies have associated cardiac deceleration and increased phasic electrodermal activity with self-realization of errors among able-bodied individuals. Thus, physiological measurements have potential utility for enhancing single-switch access, provided that such prototypical autonomic responses exist in persons with profound disabilities. The present case series investigated the autonomic responses of three pediatric single-switch users with severe spastic quadriplegic cerebral palsy, in the context of a single-switch letter matching activity. Each participant exhibited distinct autonomic responses to activity engagement. Our analysis confirmed the presence of the autonomic response pattern of cardiac deceleration and increased phasic electrodermal activity following true positives, false positives, and false negatives, but not subsequent to true negative outcomes. These findings suggest that there may be merit in complementing single-switch input with autonomic measurements to improve augmentative and alternative communications for pediatric access technology users.
Xue, Jiao-Mei; Lin, Ping-Zhen; Sun, Ji-Wei; Cao, Feng-Lin
2017-12-01
Here, we explored the functional and neural mechanisms underlying aggression related to adverse childhood experiences. We assessed behavioral performance and event-related potentials during a go/no-go and N-back paradigm. The participants were 15 individuals with adverse childhood experiences and high aggression (ACE + HA), 13 individuals with high aggression (HA), and 14 individuals with low aggression and no adverse childhood experiences (control group). The P2 latency (initial perceptual processing) was longer in the ACE + HA group for the go trials. The HA group had a larger N2 (response inhibition) than controls for the no-go trials. Error-related negativity (error processing) in the ACE + HA and HA groups was smaller than that of controls for false alarm go trials. Lastly, the ACE + HA group had shorter error-related negativity latencies than controls for false alarm trials. Overall, our results reveal the neural correlates of executive function in aggressive individuals with ACEs.
Johnson, Cheryl C; Fonner, Virginia; Sands, Anita; Ford, Nathan; Obermeyer, Carla Mahklouf; Tsui, Sharon; Wong, Vincent; Baggaley, Rachel
2017-08-29
In accordance with global testing and treatment targets, many countries are seeking ways to reach the "90-90-90" goals, starting with diagnosing 90% of all people with HIV. Quality HIV testing services are needed to enable people with HIV to be diagnosed and linked to treatment as early as possible. It is essential that opportunities to reach people with undiagnosed HIV are not missed, diagnoses are correct and HIV-negative individuals are not inadvertently initiated on life-long treatment. We conducted this systematic review to assess the magnitude of misdiagnosis and to describe poor HIV testing practices using rapid diagnostic tests. We systematically searched peer-reviewed articles, abstracts and grey literature published from 1 January 1990 to 19 April 2017. Studies were included if they used at least two rapid diagnostic tests and reported on HIV misdiagnosis, factors related to potential misdiagnosis or described quality issues and errors related to HIV testing. Sixty-four studies were included in this review. A small proportion of false positive (median 3.1%, interquartile range (IQR): 0.4-5.2%) and false negative (median: 0.4%, IQR: 0-3.9%) diagnoses were identified. Suboptimal testing strategies were the most common factor in studies reporting misdiagnoses, particularly false positive diagnoses due to using a "tiebreaker" test to resolve discrepant test results. A substantial proportion of false negative diagnoses were related to retesting among people on antiretroviral therapy. HIV testing errors and poor practices, particularly those resulting in false positive or false negative diagnoses, do occur but are preventable. Efforts to accelerate HIV diagnosis and linkage to treatment should be complemented by efforts to improve the quality of HIV testing services and strengthen the quality management systems, particularly the use of validated testing algorithms and strategies, retesting people diagnosed with HIV before initiating treatment and providing clear messages to people with HIV on treatment on the risk of a "false negative" test result.
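The "tiebreaker" mechanism flagged by the review can be illustrated with a small simulation. This is a sketch under an assumed per-test specificity of 0.98 (an invented value, not data from any included study): among HIV-negative subjects, letting a third rapid test resolve discrepant results roughly triples the false positive rate relative to requiring two concordant reactive tests.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000          # simulated HIV-negative subjects
spec = 0.98            # assumed specificity of each rapid test

# Independent false-reactive results on three rapid tests.
t1 = rng.random(n) > spec
t2 = rng.random(n) > spec
t3 = rng.random(n) > spec

# Tiebreaker strategy: positive if T1 and T2 both react, or if they
# disagree and the third ("tiebreaker") test reacts.
tiebreaker_pos = (t1 & t2) | ((t1 ^ t2) & t3)
# Stricter strategy: require two concordant reactive tests.
concordant_pos = t1 & t2

print(f"tiebreaker false positive rate:     {tiebreaker_pos.mean():.5f}")
print(f"two-concordant false positive rate: {concordant_pos.mean():.5f}")
```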
Finkel, Eli J; Eastwick, Paul W; Reis, Harry T
2015-02-01
In recent years, a robust movement has emerged within psychology to increase the evidentiary value of our science. This movement, which has analogs throughout the empirical sciences, is broad and diverse, but its primary emphasis has been on the reduction of statistical false positives. The present article addresses epistemological and pragmatic issues that we, as a field, must consider as we seek to maximize the scientific value of this movement. Regarding epistemology, this article contrasts the false-positives-reduction (FPR) approach with an alternative, the error balance (EB) approach, which argues that any serious consideration of optimal scientific practice must contend simultaneously with both false-positive and false-negative errors. Regarding pragmatics, the movement has devoted a great deal of attention to issues that frequently arise in laboratory experiments and one-shot survey studies, but it has devoted less attention to issues that frequently arise in intensive and/or longitudinal studies. We illustrate these epistemological and pragmatic considerations with the case of relationship science, one of the many research domains that frequently employ intensive and/or longitudinal methods. Specifically, we examine 6 research prescriptions that can help to reduce false-positive rates: preregistration, prepublication sharing of materials, postpublication sharing of data, close replication, avoiding piecemeal publication, and increasing sample size. For each, we offer concrete guidance not only regarding how researchers can improve their research practices and balance the risk of false-positive and false-negative errors, but also how the movement can capitalize upon insights from research practices within relationship science to make the movement stronger and more inclusive.
Gupta, Nalini; Banik, Tarak; Rajwanshi, Arvind; Radotra, Bishan D; Panda, Naresh; Dey, Pranab; Srinivasan, Radhika; Nijhawan, Raje
2012-01-01
This study was undertaken to evaluate the diagnostic utility and pitfalls of fine needle aspiration cytology (FNAC) in oral and oropharyngeal lesions. This was a retrospective audit of oral and oropharyngeal lesions diagnosed with FNAC over a period of six years (2005-2010). Oral/oropharyngeal lesions [n=157] comprised 0.35% of the total FNAC load. Ages ranged from 1 to 80 years, with a male:female ratio of 1.4:1. Aspirates were inadequate in 7% of cases. Histopathology was available in 73/157 (46.5%) cases. The palate was the most common site of involvement [n=66], followed by tongue [n=35], buccal mucosa [n=18], floor of the mouth [n=17], tonsil [n=10], alveolus [n=5], retromolar trigone [n=3], and posterior pharyngeal wall [n=3]. Cytodiagnoses were categorized into infective/inflammatory lesions and benign cysts, and benign and malignant tumours. Uncommon lesions included ectopic lingual thyroid and adult rhabdomyoma of the tongue, and solitary fibrous tumor (SFT) and leiomyosarcoma in the buccal mucosa. A single false-positive case was dense inflammation with squamous cells misinterpreted as squamous cell carcinoma (SCC) on cytology. There were eight false-negative cases, mainly due to sampling error. One false-negative case due to interpretation error was in a salivary gland tumor. The sensitivity of FNAC in diagnosing oral/oropharyngeal lesions was 71.4%; specificity was 97.8%, with a diagnostic accuracy of 87.7%. Salivary gland tumors and squamous cell carcinoma (SCC) are the most common lesions seen in the oral cavity. FNAC proves to be highly effective in diagnosing the spectrum of different lesions in this region. Sampling error is the main cause of false-negative cases in this region.
Kermani, Bahram G
2016-07-01
Crystal Genetics, Inc. is an early-stage genetic test company, focused on achieving the highest possible clinical-grade accuracy and comprehensiveness for detecting germline (e.g., in hereditary cancer) and somatic (e.g., in early cancer detection) mutations. Crystal's mission is to significantly improve the health status of the population, by providing high accuracy, comprehensive, flexible and affordable genetic tests, primarily in cancer. Crystal's philosophy is that when it comes to detecting mutations that are strongly correlated with life-threatening diseases, the detection accuracy of every single mutation counts: a single false-positive error could cause severe anxiety for the patient. And, more importantly, a single false-negative error could potentially cost the patient's life. Crystal's objective is to eliminate both of these error types.
Introducing Bayesian thinking to high-throughput screening for false-negative rate estimation.
Wei, Xin; Gao, Lin; Zhang, Xiaolei; Qian, Hong; Rowan, Karen; Mark, David; Peng, Zhengwei; Huang, Kuo-Sen
2013-10-01
High-throughput screening (HTS) has been widely used to identify active compounds (hits) that bind to biological targets. Because of cost concerns, the comprehensive screening of millions of compounds is typically conducted without replication. Real hits that fail to exhibit measurable activity in the primary screen due to random experimental errors will be lost as false-negatives. Conceivably, the projected false-negative rate is a parameter that reflects screening quality. Furthermore, it can be used to guide the selection of optimal numbers of compounds for hit confirmation. Therefore, a method that predicts false-negative rates from the primary screening data is extremely valuable. In this article, we describe the implementation of a pilot screen on a representative fraction (1%) of the screening library in order to obtain information about assay variability as well as a preliminary hit activity distribution profile. Using this training data set, we then developed an algorithm based on Bayesian logic and Monte Carlo simulation to estimate the number of true active compounds and potential missed hits from the full library screen. We have applied this strategy to five screening projects. The results demonstrate that this method produces useful predictions on the numbers of false negatives.
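The article's estimator is Bayesian and is calibrated on the pilot data; as a rough illustration of the underlying logic, here is a forward Monte Carlo sketch that propagates an assumed hit-activity distribution and assay noise through a single unreplicated screen and counts the expected misses. Every parameter value below (library size, hit fraction, noise SD, threshold, log-normal shape) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_library = 1_000_000   # compounds in the full screen
hit_fraction = 0.005    # assumed fraction of true actives
noise_sd = 8.0          # assay noise (% inhibition), as if from pilot replicates
hit_threshold = 30.0    # % inhibition cut-off used to call a hit

def simulate_missed_hits(n_sims: int = 1000):
    """Monte Carlo estimate of how many true actives fall below the hit
    threshold in a single unreplicated screen."""
    n_hits = int(n_library * hit_fraction)
    missed = np.empty(n_sims)
    for i in range(n_sims):
        true_activity = rng.lognormal(mean=3.6, sigma=0.35, size=n_hits)
        observed = true_activity + rng.normal(0.0, noise_sd, size=n_hits)
        missed[i] = np.sum(observed < hit_threshold)
    return missed.mean(), missed.std()

mean_missed, sd_missed = simulate_missed_hits()
print(f"expected false negatives: {mean_missed:.0f} +/- {sd_missed:.0f}")
```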
Piketty, Marie-Liesse; Polak, Michel; Flechtner, Isabelle; Gonzales-Briceño, Laura; Souberbielle, Jean-Claude
2017-05-01
Immunoassays are now commonly used for hormone measurement on high-throughput analytical platforms. Immunoassays are generally robust to interference. However, endogenous analytical interference may occur in some patients: it may be encountered with biotin supplementation or in the presence of anti-streptavidin antibodies in immunoassays involving streptavidin-biotin interaction. In these cases, the interference may induce both false positive and false negative results, and simulate a seemingly coherent hormonal profile. It is to be feared that this type of error will be observed more frequently. This review underlines the importance of keeping close interactions between biologists and clinicians to be able to correlate hormone assay results with the clinical picture.
1999-01-01
twenty-first century. These papers illustrate topics such as the development of virtual environment applications, different uses of VRML in information system... interfaces, an examination of research in virtual reality environment interfaces, and five approaches to supporting changes in virtual environments... we get false negatives that contribute to the probability of false rejection P(rej). Taking these error probabilities into account, we define a
Intra-operative Localization of Brachytherapy Implants Using Intensity-based Registration
KarimAghaloo, Z.; Abolmaesumi, P.; Ahmidi, N.; Chen, T.K.; Gobbi, D. G.; Fichtinger, G.
2010-01-01
In prostate brachytherapy, a transrectal ultrasound (TRUS) will show the prostate boundary but not all the implanted seeds, while fluoroscopy will show all the seeds clearly but not the boundary. We propose an intensity-based registration between TRUS images and the implant reconstructed from fluoroscopy as a means of achieving accurate intra-operative dosimetry. The TRUS images are first filtered and compounded, and then registered to the fluoroscopy model via mutual information. A training phantom was implanted with 48 seeds and imaged. Various ultrasound filtering techniques were analyzed, and the best results were achieved with the Bayesian combination of adaptive thresholding, phase congruency, and compensation for the non-uniform ultrasound beam profile in the elevation and lateral directions. The average registration error between corresponding seeds relative to the ground truth was 0.78 mm. The effects of false positives and false negatives in ultrasound were investigated by masking true seeds in the fluoroscopy volume or adding false seeds. The registration error remained below 1.01 mm when the false positive rate was 31%, and 0.96 mm when the false negative rate was 31%. This fully automated method delivers excellent registration accuracy and robustness in phantom studies, and promises to demonstrate clinically adequate performance on human data as well.
Evaluation of Second-Level Inference in fMRI Analysis
Roels, Sanne P.; Loeys, Tom; Moerkerke, Beatrijs
2016-01-01
We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and on (2) the data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of 3 phases. First, one proceeds via a general linear model for a test image that consists of pooled information from different subjects. We evaluate models that take into account first-level (within-subjects) variability and models that do not take into account this variability. Second, one proceeds via inference based on parametrical assumptions or via permutation-based inference. Third, we evaluate 3 commonly used procedures to address the multiple testing problem: familywise error rate correction, False Discovery Rate (FDR) correction, and a two-step procedure with minimal cluster size. Based on a simulation study and real data we find that the two-step procedure with minimal cluster size results in most stable results, followed by the familywise error rate correction. The FDR results in most variable results, for both permutation-based inference and parametrical inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference.
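The difference between familywise error control and FDR control that underlies these stability findings is easy to demonstrate on synthetic data. The sketch below is illustrative only (dimensions, effect size, and alpha are arbitrary), and it does not reproduce the study's permutation inference or its two-step cluster-size procedure.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
m, m1, alpha = 10_000, 500, 0.05   # tests, truly active tests, level

z = rng.normal(0.0, 1.0, m)
z[:m1] += 3.0                      # shift the truly active "voxels"
p = norm.sf(z)                     # one-sided p-values

# Familywise error rate control: Bonferroni.
bonferroni = p < alpha / m

# False Discovery Rate control: Benjamini-Hochberg step-up.
order = np.argsort(p)
below = np.sort(p) <= alpha * np.arange(1, m + 1) / m
k = np.max(np.nonzero(below)[0]) if below.any() else -1
bh = np.zeros(m, dtype=bool)
bh[order[:k + 1]] = True

print(f"Bonferroni discoveries: {bonferroni.sum()}, BH discoveries: {bh.sum()}")
```

BH typically flags many more of the truly active tests (fewer false negatives) at the price of a controlled fraction of false discoveries, which is consistent with the more variable behavior of FDR correction reported above.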
Fleming, Kevin K; Bandy, Carole L; Kimble, Matthew O
2010-01-01
The decision to shoot a gun engages executive control processes that can be biased by cultural stereotypes and perceived threat. The neural locus of the decision to shoot is likely to be found in the anterior cingulate cortex (ACC), where cognition and affect converge. Male military cadets at Norwich University (N=37) performed a weapon identification task in which they made rapid decisions to shoot when images of guns appeared briefly on a computer screen. Reaction times, error rates, and electroencephalogram (EEG) activity were recorded. Cadets reacted more quickly and accurately when guns were primed by images of Middle-Eastern males wearing traditional clothing. However, cadets also made more false positive errors when tools were primed by these images. Error-related negativity (ERN) was measured for each response. Deeper ERNs were found in the medial-frontal cortex following false positive responses. Cadets who made fewer errors also produced deeper ERNs, indicating stronger executive control. Pupil size was used to measure autonomic arousal related to perceived threat. Images of Middle-Eastern males in traditional clothing produced larger pupil sizes. An image of Osama bin Laden induced the largest pupil size, as would be predicted for the exemplar of Middle East terrorism. Cadets who showed greater increases in pupil size also made more false positive errors. Regression analyses were performed to evaluate predictions based on current models of perceived threat, stereotype activation, and cognitive control. Measures of pupil size (perceived threat) and ERN (cognitive control) explained significant proportions of the variance in false positive errors to Middle-Eastern males in traditional clothing, while measures of reaction time, signal detection response bias, and stimulus discriminability explained most of the remaining variance.
Discrimination of plant-parasitic nematodes from complex soil communities using ecometagenetics.
Porazinska, Dorota L; Morgan, Matthew J; Gaspar, John M; Court, Leon N; Hardy, Christopher M; Hodda, Mike
2014-07-01
Many plant pathogens are microscopic, cryptic, and difficult to diagnose. The new approach of ecometagenetics, involving ultrasequencing, bioinformatics, and biostatistics, has the potential to improve diagnoses of plant pathogens such as nematodes from the complex mixtures found in many agricultural and biosecurity situations. We tested this approach on a gradient of complexity ranging from a few individuals from a few species of known nematode pathogens in a relatively defined substrate to a complex and poorly known suite of nematode pathogens in a complex forest soil, including its associated biota of unknown protists, fungi, and other microscopic eukaryotes. We added three known but contrasting species (Pratylenchus neglectus, the closely related P. thornei, and Heterodera avenae) to half the set of substrates, leaving the other half without them. We then tested whether all nematode pathogens (known and unknown, indigenous and experimentally added) were consistently detected as present or absent. We always detected the Pratylenchus spp. correctly, and with the number of sequence reads proportional to the numbers added. However, a single cyst of H. avenae was only identified approximately half the time it was present. Other plant-parasitic nematodes and nematodes from other trophic groups were detected well, but other eukaryotes were detected less consistently. DNA sampling errors or informatic errors or both were involved in the misidentification of H. avenae; however, the proportions of each varied in the different bioinformatic pipelines and with the different parameters used. To a large extent, false-positive and false-negative errors were complementary: pipelines and parameters with the highest false-positive rates had the lowest false-negative rates and vice versa. Sources of error identified included assumptions in the bioinformatic pipelines, slight differences in primer regions, the number of sequence reads regarded as the minimum threshold for inclusion in analysis, and inaccessible DNA in resistant life stages. Identification of the sources of error allows us to suggest ways to improve identification using ecometagenetics.
Hoogeveen, Suzanne; Schjoedt, Uffe; van Elk, Michiel
2018-06-19
This study examines the effects of expected transcranial stimulation on the error(-related) negativity (Ne or ERN) and the sense of agency in participants who perform a cognitive control task. Placebo transcranial direct current stimulation was used to elicit expectations of transcranially induced cognitive improvement or impairment. The improvement/impairment manipulation affected both the Ne/ERN and the sense of agency (i.e., whether participants attributed errors to themselves or to the brain stimulation device): expected improvement increased the ERN in response to errors compared with both impairment and control conditions. Expected impairment made participants falsely attribute errors to the transcranial stimulation. This decrease in sense of agency was correlated with a reduced ERN amplitude. These results show that expectations about transcranial stimulation impact users' neural response to self-generated errors and the attribution of responsibility, especially when actions lead to negative outcomes. We discuss our findings in relation to predictive processing theory, according to which the effect of prior expectations on the ERN reflects the brain's attempt to generate predictive models of incoming information. By demonstrating that induced expectations about transcranial stimulation can have effects at a neural level, that is, beyond mere demand characteristics, our findings highlight the potential of placebo brain stimulation as a promising tool for research.
Analysis of false results in a series of 835 fine needle aspirates of breast lesions.
Willis, S L; Ramzy, I
1995-01-01
To analyze cases of false diagnoses from a large series to help increase the accuracy of fine needle aspiration of palpable breast lesions. The results of FNA of 835 palpable breast lesions were analyzed to determine the reasons for false positive, false negative and false suspicious diagnoses. Of the 835 aspirates, 174 were reported as positive, 549 as negative and 66 as suspicious or atypical but not diagnostic of malignancy. Forty-six cases were considered unsatisfactory. Tissue was available for comparison in 286 cases. The cytologic diagnoses in these cases were reported as follows: positive, 125 (43.7%); suspicious, 33 (11.5%); atypical, 18 (6.2%); negative, 92 (32%); and unsatisfactory, 18 (6.2%). There was one false positive diagnosis, yielding a false positive rate of 0.8%. This lesion was a case of fibrocystic change with hyperplasia, focal fat necrosis and reparative atypia. There were 14 false negative cases, resulting in a false negative rate of 13.2%. Nearly all these cases were sampling errors and included infiltrating ductal carcinomas (9), ductal carcinomas in situ (2), infiltrating lobular carcinomas (2) and tubular carcinoma (1). Most of the suspicious and atypical lesions proved to be carcinomas (35/50). The remainder were fibroadenomas (6), fibrocystic change (4), gynecomastia (2), adenosis (2) and granulomatous mastitis (1). A positive diagnosis of malignancy by FNA is reliable in establishing the diagnosis and planning the treatment of breast cancer. The false-positive rate is very low, with only a single case reported in 835 aspirates. Most false negatives are due to sampling and not to interpretive difficulties. The category "suspicious but not diagnostic of malignancy" serves a useful purpose in management of patients with breast lumps.
Haase, G.M.; Sfakianakis, G.N.; Lobe, T.E.
1981-06-01
The ability of external imaging to demonstrate intestinal infarction in neonatal necrotizing enterocolitis (NEC) was prospectively evaluated. The radiopharmaceutical technetium-99m diphosphonate was injected intravenously and the patients subsequently underwent abdominal scanning. Clinical patient care and interpretation of the images were entirely independent throughout the study. Of 33 studies, 7 were positive, 4 were suspicious, and 22 were negative. One false positive study detected ischemia without transmural infarction. The second false positive scan occurred postoperatively and was due to misinterpretation of the hyperactivity along the surgical incision. None of the suspicious cases had damaged bowel. The two false negative studies clearly failed to demonstrate frank intestinal necrosis. The presence of very small areas of infarction, errors in technical settings, subjective interpretation of scans and delayed clearance of the radionuclide in a critically ill neonate may all limit the accuracy of external abdominal scanning. However, in spite of an error rate of 12%, it is likely that this technique will enhance the present clinical, laboratory, and radiologic parameters of patient management in NEC.
ClubSub-P: Cluster-Based Subcellular Localization Prediction for Gram-Negative Bacteria and Archaea
Paramasivam, Nagarajan; Linke, Dirk
2011-01-01
The subcellular localization (SCL) of proteins provides important clues to their function in a cell. In our efforts to predict useful vaccine targets against Gram-negative bacteria, we noticed that misannotated start codons frequently lead to wrongly assigned SCLs. This and other problems in SCL prediction, such as the relatively high false-positive and false-negative rates of some tools, can be avoided by applying multiple prediction tools to groups of homologous proteins. Here we present ClubSub-P, an online database that combines existing SCL prediction tools into a consensus pipeline from more than 600 proteomes of fully sequenced microorganisms. On top of the consensus prediction at the level of single sequences, the tool uses clusters of homologous proteins from Gram-negative bacteria and from Archaea to eliminate false-positive and false-negative predictions. ClubSub-P can assign the SCL of proteins from Gram-negative bacteria and Archaea with high precision. The database is searchable, and can easily be expanded using either new bacterial genomes or new prediction tools as they become available. This will further improve the performance of the SCL prediction, as well as the detection of misannotated start codons and other annotation errors. ClubSub-P is available online at http://toolkit.tuebingen.mpg.de/clubsubp/
Precision and recall estimates for two-hybrid screens
Huang, Hailiang; Bader, Joel S.
2009-01-01
Motivation: Yeast two-hybrid screens are an important method to map pairwise protein interactions. This method can generate spurious interactions (false discoveries), and true interactions can be missed (false negatives). Previously, we reported a capture–recapture estimator for bait-specific precision and recall. Here, we present an improved method that better accounts for heterogeneity in bait-specific error rates. Results: For yeast, worm and fly screens, we estimate the overall false discovery rates (FDRs) to be 9.9%, 13.2% and 17.0% and the false negative rates (FNRs) to be 51%, 42% and 28%. Bait-specific FDRs and the estimated protein degrees are then used to identify protein categories that yield more (or fewer) false positive interactions and more (or fewer) interaction partners. While membrane proteins have been suggested to have elevated FDRs, the current analysis suggests that intrinsic membrane proteins may actually have reduced FDRs. Hydrophobicity is positively correlated with decreased error rates and fewer interaction partners. These methods will be useful for future two-hybrid screens, which could use ultra-high-throughput sequencing for deeper sampling of interacting bait–prey pairs. Availability: All software (C source) and datasets are available as supplemental files and at http://www.baderzone.org under the Lesser GPL v. 3 license.
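Since the abstract reports FDRs and FNRs directly, the implied precision and recall follow immediately (precision = 1 - FDR, recall = 1 - FNR); a trivial check on the reported rates:

```python
# Precision/recall implied by the reported error rates per screen.
reported = {"yeast": (0.099, 0.51), "worm": (0.132, 0.42), "fly": (0.170, 0.28)}
for screen, (fdr, fnr) in reported.items():
    print(f"{screen}: precision {1 - fdr:.1%}, recall {1 - fnr:.1%}")
```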
Stress and emotional valence effects on children's versus adolescents' true and false memory.
Quas, Jodi A; Rush, Elizabeth B; Yim, Ilona S; Edelstein, Robin S; Otgaar, Henry; Smeets, Tom
2016-01-01
Despite considerable interest in understanding how stress influences memory accuracy and errors, particularly in children, methodological limitations have made it difficult to examine the effects of stress independent of the effects of the emotional valence of to-be-remembered information in developmental populations. In this study, we manipulated stress levels in 7-8- and 12-14-year-olds and then exposed them to negative, neutral, and positive word lists. Shortly afterward, we tested their recognition memory for the words and false memory for non-presented but related words. Adolescents in the high-stress condition were more accurate than those in the low-stress condition, while children's accuracy did not differ across stress conditions. Also, among adolescents, accuracy and errors were higher for the negative than positive words, while in children, word valence was unrelated to accuracy. Finally, increases in children's and adolescents' cortisol responses, especially in the high-stress condition, were related to greater accuracy but not false memories and only for positive emotional words. Findings suggest that stress at encoding, as well as the emotional content of to-be-remembered information, may influence memory in different ways across development, highlighting the need for greater complexity in existing models of true and false memory formation.
Rostron, Peter D; Heathcote, John A; Ramsey, Michael H
2014-12-01
High-coverage in situ surveys with gamma detectors are the best means of identifying small hotspots of activity, such as radioactive particles, in land areas. Scanning surveys can produce rapid results, but the probabilities of obtaining false positive or false negative errors are often unknown, and they may not satisfy other criteria such as estimation of mass activity concentrations. An alternative is to use portable gamma-detectors that are set up at a series of locations in a systematic sampling pattern, where any positive measurements are subsequently followed up in order to determine the exact location, extent and nature of the target source. The preliminary survey is typically designed using settings of detector height, measurement spacing and counting time that are based on convenience, rather than using settings that have been calculated to meet requirements. This paper introduces the basis of a repeatable method of setting these parameters at the outset of a survey, for pre-defined probabilities of false positive and false negative errors in locating spatially small radioactive particles in land areas. It is shown that an un-collimated detector is more effective than a collimated detector that might typically be used in the field.
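The counting-statistics core of such a design calculation can be sketched with a simple Poisson model. This is not the paper's method, which additionally folds in detector height and measurement spacing; the count rates and counting time below are invented.

```python
from scipy.stats import poisson

background_cps = 20.0   # assumed background count rate (counts/s)
source_cps = 6.0        # assumed extra rate from a particle at this geometry
count_time = 30.0       # counting time per location (s)

b = background_cps * count_time                 # expected background counts
s = (background_cps + source_cps) * count_time  # expected counts with source

decision_level = poisson.ppf(0.99, b)           # threshold for ~1% false positives
false_positive = poisson.sf(decision_level, b)
false_negative = poisson.cdf(decision_level, s)
print(f"threshold {decision_level:.0f} counts: "
      f"FP {false_positive:.4f}, FN {false_negative:.6f}")
```

Longer counting times shrink both error probabilities at once, which is the kind of trade-off a repeatable design method makes explicit.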
Renshaw, A A; Lezon, K M; Wilbur, D C
2001-04-25
Routine quality control rescreening often is used to calculate the false-negative rate (FNR) of gynecologic cytology. Theoretic analysis suggests that this is not appropriate, due to the high FNR of rescreening and the inability to actually measure it. The authors sought to determine the FNR of manual rescreening in a large, prospective, two-arm clinical trial using an analytic instrument in the evaluation. The results of the Autopap System Clinical Trial, encompassing 25,124 analyzed slides, were reviewed. The false-negative and false-positive rates at various thresholds were determined for routine primary screening, routine rescreening, Autopap primary screening, and Autopap rescreening by using a simple, standard methodology. The FNR of routine manual rescreening at the level of atypical squamous cells of undetermined significance (ASCUS) was 73%, more than 3 times the FNR of primary screening; 11 cases were detected. The FNR of Autopap rescreening was 34%; 80 cases were detected. Routine manual rescreening decreased the laboratory FNR by less than 1%; Autopap rescreening reduced the overall laboratory FNR by 5.7%. At the same time, the false-positive rate for Autopap screening was significantly less than that of routine manual screening at the ASCUS level (4.7% vs. 5.6%; P < 0.0001). Rescreening with the Autopap system remained more sensitive than manual rescreening at the low grade squamous intraepithelial lesions threshold (FNR of 58.8% vs. 100%, respectively), although the number of cases rescreened was low. Routine manual rescreening cannot be used to calculate the FNR of primary screening. Routine rescreening is an extremely ineffective method to detect error and thereby decrease a laboratory's FNR. The Autopap system is a much more effective way of detecting errors within a laboratory and reduces the laboratory's FNR by greater than 25%.
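The core argument, that rescreening cannot measure the primary FNR because the rescreen itself misses most errors, follows from simple arithmetic. In the sketch below, the 73% rescreening FNR is taken from the abstract; the caseload and the true primary FNR are invented for illustration.

```python
true_abnormal = 1000    # abnormal slides entering the laboratory (invented)
primary_fnr = 0.20      # true FNR of primary screening (invented)
rescreen_fnr = 0.73     # FNR of manual rescreening (reported above)

missed_by_primary = true_abnormal * primary_fnr              # 200 slides
caught_on_rescreen = missed_by_primary * (1 - rescreen_fnr)  # only 54 recovered

detected_by_primary = true_abnormal - missed_by_primary      # 800 slides
naive_fnr = caught_on_rescreen / (detected_by_primary + caught_on_rescreen)
print(f"apparent FNR from rescreen data: {naive_fnr:.1%} "
      f"(true primary FNR: {primary_fnr:.0%})")
```

The naive estimate (about 6%) understates the true 20% FNR by more than a factor of three, which is the bias the authors describe.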
Tirnaksiz, M B; Deschamps, C; Allen, M S; Johnson, D C; Pairolero, P C
2005-01-01
Aqueous contrast swallow study is recommended as a screening procedure for the evaluation of esophageal anastomotic integrity following esophagectomy. The aim of this study was to assess the accuracy of water-soluble contrast swallow screening as a predictor of clinically significant anastomotic leak in patients with esophagectomy. The records of 505 consecutive patients undergoing esophagectomy at the Mayo Clinic from January 1991 through December 1995 were retrospectively reviewed. 464 (92%) patients had water-soluble contrast swallows performed in the early postoperative period (median postoperative day 7, range 4-11 days). A total of 39 radiological leaks were identified, but only 17 of these had clinical signs of anastomotic leakage. Furthermore, 25 patients who had a normal swallow study developed a clinical anastomotic leak. There were therefore 22 (4.7%) false positive and 25 (5.4%) false negative results, giving a specificity, sensitivity, and false negative error rate for the radiological examination of 94.7%, 40.4%, and 59.5%, respectively. Aspiration of the contrast agent was noted on fluoroscopy in 30 (6.5%) patients. Only 2 (0.4%) patients developed aqueous contrast agent-caused aspiration pneumonia. There was no procedure-related mortality. While radiological assessment of esophageal anastomoses in the early postoperative period using aqueous contrast agents appears to be a relatively safe procedure, the poor sensitivity and high false negative error rate of this technique, when performed on postoperative day 7 and in a series with a clinical anastomotic leak rate of 9%, make it insufficient to be worthwhile as a screening procedure.
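The reported operating characteristics can be reconstructed from the counts in the abstract. A quick check (TN is obtained by subtraction; the small differences from the published 40.4% and 94.7% presumably reflect rounding in the original):

```python
# 2x2 table implied by the abstract: 464 swallow studies, 39 radiological
# leaks of which 17 were clinical leaks, plus 25 clinical leaks missed.
TP = 17
FP = 39 - 17              # 22 false positives
FN = 25                   # 25 false negatives
TN = 464 - TP - FP - FN   # 400 by subtraction

sensitivity = TP / (TP + FN)      # 17/42   ~ 40.5%
specificity = TN / (TN + FP)      # 400/422 ~ 94.8%
fn_error_rate = FN / (TP + FN)    # 25/42   ~ 59.5%
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
      f"false-negative error rate {fn_error_rate:.1%}")
```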
Development of sensitivity to orthographic errors in children: An event-related potential study.
Heldmann, Marcus; Puppe, Svetlana; Effenberg, Alfred O; Münte, Thomas F
2017-09-01
To study the development of orthographic sensitivity during elementary school, we recorded event-related brain potentials (ERPs) from 2nd and 4th grade children who were exposed to line drawings of objects or animals upon which the correctly or incorrectly spelled name was superimposed. Stimulus-locked ERPs showed a modulation of a frontocentral negativity between 200 and 500 ms which was larger for the 4th grade children but did not show an effect of correctness of spelling. This effect was followed by a pronounced positive shift which was only seen in the 4th grade children and which showed a modulation by spelling correctness. This effect can be seen as an electrophysiological correlate of orthographic sensitivity and replicates earlier findings in adults. Moreover, response-locked ERPs triggered to the children's button presses indicating orthographic (in)correctness showed a succession of waves including the frontocentral error-related negativity and a subsequent negativity with a more posterior distribution. This latter negativity was generally larger for the 4th grade children. Only for the 4th grade children, this negativity was smaller for the false alarm trials, suggesting a conscious registration of the error in these children.
A critical reappraisal of false negative sentinel lymph node biopsy in melanoma.
Manca, G; Romanini, A; Rubello, D; Mazzarri, S; Boni, G; Chiacchio, S; Tredici, M; Duce, V; Tardelli, E; Volterrani, D; Mariani, G
2014-06-01
Lymphatic mapping and sentinel lymph node biopsy (SLNB) have completely changed the clinical management of cutaneous melanoma. This procedure has been accepted worldwide as a recognized method for nodal staging. SLNB is able to accurately determine nodal basin status, providing the most useful prognostic information. However, SLNB is not a perfect diagnostic test. Several large-scale studies have reported a relatively high false-negative rate (5.6-21%), correctly defined as the proportion of false-negative results with respect to the total number of "actual" positive lymph nodes. The main purpose of this review is to address the technical issues that nuclear physicians, surgeons, and pathologists should carefully consider to improve the accuracy of SLNB by minimizing its false-negative rate. In particular, SPECT/CT imaging has been shown to identify a greater number of sentinel lymph nodes (SLNs) than planar lymphoscintigraphy. Furthermore, the international guidelines lack a single operational definition for identifying SLNs, which may be partly responsible for this relatively high false-negative rate of SLNB. Therefore, the scientific community should agree on a radioactive counting-rate threshold so that the surgeon can be better radioguided to detect all the lymph nodes most likely to harbor metastases. Another possible source of error may be linked to the examination of the harvested SLNs by conventional histopathological methods. A more careful and extensive SLN analysis (e.g., molecular analysis by RT-PCR) is able to find more positive nodes, so that the false-negative rate is reduced. Older age at diagnosis, deeper lesions, histologic ulceration, and head-neck anatomical location of the primary lesion are the clinical factors associated with false-negative SLNBs in melanoma patients. There is still much controversy about the clinical significance of a false-negative SLNB for the prognosis of melanoma patients. Indeed, most studies have failed to show worse melanoma-specific survival for false-negative compared to true-positive SLNB patients.
Kalanov, Temur Z.
2015-04-01
Analysis of the foundations of the theory of negative numbers is proposed. The unity of formal logic and of rational dialectics is the methodological basis of the analysis. The statement of the problem is as follows. As is known, point O in the Cartesian coordinate system XOY determines the position of zero on the scale. The number "zero" belongs to both the scale of positive numbers and the scale of negative numbers. In this case, the following formal-logical contradiction arises: the number 0 is both a positive number and a negative number; or, equivalently, the number 0 is neither a positive number nor a negative number, i.e., the number 0 has no sign. Then the following question arises: Do negative numbers exist in science and practice? A detailed analysis of the problem shows that negative numbers do not exist because the foundations of the theory of negative numbers contradict the formal-logical laws. It is proved that: (a) all numbers have no signs; (b) the concepts "negative number" and "negative sign of number" represent a formal-logical error; (c) the signs "plus" and "minus" are only symbols of mathematical operations. The logical errors determine the essence of the theory of negative numbers: the theory of negative numbers is a false theory.
Causes of false-negative for high-grade urothelial carcinoma in urine cytology.
Lee, Paul J; Owens, Christopher L; Lithgow, Marie Y; Jiang, Zhong; Fischer, Andrew H
2016-12-01
The Paris System for classifying urine cytology emphasizes identification of high-grade urothelial carcinoma (HGUC). The causes of false-negative urine cytologies (UC) within this system are not well described. We identified 660 cases between 2005 and 2013 with both UC and subsequent cystoscopic biopsies. UC were classified as either Negative for HGUC or "Abnormal" ("Atypical", "Suspicious", and "Malignant"). Apparent false-negative cases were reviewed in a nonblinded fashion by two cytopathologists and two subspecialized genitourinary pathologists. A total of 199 of the 660 cases (30%) were histologically diagnosed as HGUC. The UC were "Abnormal" in 170/199 cases (sensitivity/specificity of 86%/71%). Twenty-four apparent false-negative cases were available for retrospective review. Five of 24 (21%) cystoscopic biopsies were found not to be HGUC on review (one false positive and four low-grade urothelial carcinomas (LGUC) on review). Of the remaining 19 UC, 7 (29%) cytology samples were found to be truly negative on review, 11 (46%) were found to be Atypical, and 1 (4%) Suspicious. Of the 12 UC that were at least "Atypical" with histologic HGUC on review, six misses (half) were attributed to obscuring inflammation/blood, four to poor preservation, eight to paucity of abnormal cells, and one to interpretive error; many cases demonstrated overlapping reasons. About one-fifth of apparent false-negative diagnoses for HGUC may be due to overdiagnosis of HGUC by surgical pathologists. If poorly preserved or obscured samples are called nondiagnostic, the sensitivity/specificity of UC for HGUC can be as high as 94%/71%.
Furness, Alan R; Callan, Richard S; Mackert, J Rodway; Mollica, Anthony G
2018-01-01
The aim of this study was to evaluate the effectiveness of the Planmeca Compare software in identifying and quantifying a common critical error in dental students' crown preparations. In 2014-17, a study was conducted at one U.S. dental school comparing modified preparations against an ideal crown preparation made by a faculty member on a dentoform. Two types of preparation errors were created by the addition of flowable composite to the occlusal surface of identical dies of the preparations to represent underreduction of the distolingual cusp. The error was divided into two classes: the minor class allowed for 1 mm of occlusal clearance, and the major class allowed for no occlusal clearance. The preparations were then digitally evaluated against the ideal preparation using Planmeca Compare. Percent comparison values were obtained from each trial and averaged. False positives and false negatives were also identified and used to determine the accuracy of the evaluation. Critical errors that did not involve a substantial change in the surface area of the preparation were inconsistently identified. Within the limitations of this study, the authors concluded that the Compare software was unable to consistently identify common critical errors within an acceptable degree of error.
Simultaneous Control of Error Rates in fMRI Data Analysis
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-01-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate, and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations showing that the likelihood approach is viable, leading to 'cleaner'-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain.
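A minimal sketch of the voxel-wise likelihood evaluation the authors advocate, for Gaussian data with known variance; the effect size, threshold k, and dimensions are hypothetical. With evidence measured by the likelihood ratio, both error rates are small at once and shrink together as the number of scans grows.

```python
import numpy as np

rng = np.random.default_rng(1)
n_scans, n_voxels, n_active = 100, 10_000, 500
delta, sigma, k = 0.5, 1.0, 32.0   # H1 mean, noise SD, evidence threshold

data = rng.normal(0.0, sigma, size=(n_scans, n_voxels))
data[:, :n_active] += delta        # truly active voxels

xbar = data.mean(axis=0)
# Log likelihood ratio for H1 (mean = delta) vs H0 (mean = 0) per voxel:
# log LR = n * (xbar*delta - delta**2/2) / sigma**2
log_lr = n_scans * (xbar * delta - delta**2 / 2.0) / sigma**2
flagged = log_lr >= np.log(k)

print(f"false positive rate: {flagged[n_active:].mean():.4f}")
print(f"false negative rate: {(~flagged[:n_active]).mean():.4f}")
```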
Supporting diagnosis of attention-deficit hyperactive disorder with novelty detection.
Lee, Hyoung-Joo; Cho, Sungzoon; Shin, Min-Sup
2008-03-01
Computerized continuous performance test (CPT) is a widely used diagnostic tool for attention-deficit hyperactivity disorder (ADHD). It measures the number of correctly detected stimuli as well as response times. Typically, when calculating a cut-off score for discriminating between normal and abnormal, only the normal children's data are collected. Then the average and standard deviation of each measure or variable is computed. If any variable is more than 2 sigma above its average, the child is diagnosed as abnormal. We call this approach the "T-score 70" classifier. However, its performance leaves much to be desired owing to a high false-negative error rate. In order to improve the classification accuracy, we propose to use novelty detection approaches for supporting ADHD diagnosis. Novelty detection is a model-building framework in which a classifier is constructed using only one class of training data and a new input pattern is classified according to its similarity to the training data. A total of eight novelty detectors are introduced and applied to our ADHD datasets, collected from two modes of tests, visual and auditory. They are evaluated and compared with the T-score model on validation datasets in terms of false positive and false negative error rates, and area under the receiver operating characteristic curve (AuROC). Experimental results show that the cut-off score of 70 is suboptimal: it leads to a low false positive error rate but a very high false negative error rate. A few novelty detectors, such as Parzen density estimators, yield much more balanced classification performance. Moreover, most novelty detectors outperform the T-score method for most age groups statistically, with a significance level of 1%, in terms of AuROC. In particular, we recommend the Parzen and Gaussian density estimators, kernel principal component analysis, one-class support vector machine, and K-means clustering novelty detectors, which can improve upon the T-score method on average by at least 30% for the visual test and 40% for the auditory test. In addition, their performances are relatively stable over various parameter values as long as they are within reasonable ranges. The proposed novelty detection approaches can replace the T-score method, which has been considered the "gold standard" for supporting ADHD diagnosis. Furthermore, they can be applied to other psychological tests where only normal data are available.
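To make the comparison concrete, here is an illustrative sketch of the "T-score 70" rule next to a basic Parzen-window novelty detector. The synthetic data, kernel bandwidth, and 5% training-density quantile are arbitrary choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic CPT-like data: 4 measures, normal children only for training.
normal_train = rng.normal(50.0, 10.0, size=(500, 4))
mu, sd = normal_train.mean(axis=0), normal_train.std(axis=0)

def t_score_70(x):
    """Abnormal if ANY variable exceeds mean + 2*sigma (T-score > 70)."""
    return np.any(x > mu + 2.0 * sd, axis=1)

def parzen_score(x, bandwidth=5.0):
    """Gaussian-kernel density estimate under the normal-only training data."""
    sq = (((x[:, None, :] - normal_train[None, :, :]) / bandwidth) ** 2).sum(axis=2)
    return np.exp(-0.5 * sq).sum(axis=1)

# Abnormal if the density falls below the 5th percentile of training densities.
threshold = np.quantile(parzen_score(normal_train), 0.05)

test = rng.normal(58.0, 12.0, size=(200, 4))   # shifted "abnormal" group
print("T-score 70 flag rate:", t_score_70(test).mean())
print("Parzen flag rate:    ", (parzen_score(test) < threshold).mean())
```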
Arousal-But Not Valence-Reduces False Memories at Retrieval.
Mirandola, Chiara; Toffalini, Enrico
2016-01-01
Mood affects both memory accuracy and memory distortions. However, some aspects of this relation are still poorly understood: (1) whether valence and arousal equally affect false memory production, and (2) whether retrieval-related processes matter; the extant literature typically shows that mood influences memory performance when it is induced before encoding, leaving unsolved whether mood induced before retrieval also impacts memory. We examined how negative, positive, and neutral mood induced before retrieval affected inferential false memories and related subjective memory experiences. A recognition-memory paradigm for photographs depicting script-like events was employed. Results showed that individuals in both negative and positive moods, which were similar in arousal levels, correctly recognized more target events and endorsed fewer false memories (and these errors were less frequently linked to "remember" responses), compared to individuals in neutral mood. This suggests that arousal (but not valence) predicted memory performance; furthermore, we found that arousal ratings provided by participants were more adequate predictors of memory performance than their actual assignment to the positive, negative, or neutral mood groups. These findings suggest that arousal has a primary role in affecting memory, and that mood exerts its power on true and false memory even when induced at retrieval.
A Study of False-Positive and False-Negative Error Rates in Cartridge Case Comparisons
2014-04-07
This report provides the details for a study designed to... participate in ASCLD were provided with 15 sets of 3 known + 1 unknown cartridge cases fired from a collection of 25 new Ruger SR9 handguns. The... answer sheet allowing for the AFTE range of conclusions, and return shipping materials. They were also asked to assess how many of the 3 knowns were
Sentinel lymph node mapping in melanoma: the issue of false-negative findings.
Manca, Gianpiero; Rubello, Domenico; Romanini, Antonella; Boni, Giuseppe; Chiacchio, Serena; Tredici, Manuel; Mazzarri, Sara; Duce, Valerio; Colletti, Patrick M; Volterrani, Duccio; Mariani, Giuliano
2014-07-01
Management of cutaneous melanoma has changed after the introduction into clinical routine of sentinel lymph node biopsy (SLNB) for nodal staging. By defining the nodal basin status, SLNB provides powerful prognostic information. Nevertheless, some debate still surrounds the accuracy of this procedure in terms of false-negative rate. Several large-scale studies have reported a relatively high false-negative rate (5.6%-21%), correctly defined as the proportion of false-negative results with respect to the total number of "actual" positive lymph nodes. In this review, we identified all the technical aspects that the nuclear medicine physician, the surgeon, and the pathologist should take into account to improve the accuracy of the procedure and minimize the false-negative rate. In particular, SPECT/CT imaging detects more SLNs than planar lymphoscintigraphy. Furthermore, the nuclear medicine community should reach a consensus on the radioactive counting rate threshold to better guide the surgeon in identifying the lymph nodes with the highest likelihood of harboring metastases ("true biologic SLNs"). Analysis of the harvested SLNs by conventional techniques is a further potential source of error. More accurate SLN analysis (eg, molecular analysis by reverse transcriptase-polymerase chain reaction) and more extensive SLN sampling identify more positive nodes, thus reducing the false-negative rate. The clinical factors identifying patients at higher risk of local recurrence after a negative SLNB include older age at diagnosis, deeper lesions, histological ulceration, and head-neck anatomic location of the primary lesion. The clinical impact of a false-negative SLNB on the prognosis of melanoma patients remains controversial, because the majority of studies have failed to demonstrate an overall statistically significant disadvantage in melanoma-specific survival for false-negative SLNB patients compared with true-positive SLNB patients. When new, more effective drugs become available in the adjuvant setting for stage III melanoma patients, an accurate staging procedure for the sentinel lymph nodes will be crucial for both patients and clinicians. Standardization and accuracy of SLN identification, removal, and analysis are required.
Miller, David A W; Nichols, James D; Gude, Justin A; Rich, Lindsey N; Podruzny, Kevin M; Hines, James E; Mitchell, Michael S
2013-01-01
Large-scale presence-absence monitoring programs have great promise for many conservation applications. Their value can be limited by potential incorrect inferences owing to observational errors, especially when data are collected by the public. To combat this, previous analytical methods have focused on addressing non-detection in public survey data. Misclassification errors have received less attention but are also likely to be a common component of public surveys, as well as many other data types. We derive estimators for dynamic occupancy parameters (extinction and colonization), focusing on the case where certainty can be assumed for a subset of detections. We demonstrate how to simultaneously account for non-detection (false negatives) and misclassification (false positives) when estimating occurrence parameters for gray wolves in northern Montana from 2007 to 2010. Our primary data source for the analysis was observations by deer and elk hunters, reported as part of the state's annual hunter survey. These data were supplemented with data from known locations of radio-collared wolves. We found that occupancy was relatively stable during the years of the study and wolves were largely restricted to the highest quality habitats in the study area. Transitions in the occupancy status of sites were rare, as occupied sites almost always remained occupied and unoccupied sites remained unoccupied. Failing to account for false positives led to overestimation of both the area inhabited by wolves and the frequency of turnover. The ability to properly account for both false negatives and false positives is an important step toward improved inferences for conservation from large-scale public surveys. The approach we propose will improve our understanding of the status of wolf populations and is relevant to many other data types where false positives are a component of observations.
Measurement error: Implications for diagnosis and discrepancy models of developmental dyslexia.
Cotton, Sue M; Crewther, David P; Crewther, Sheila G
2005-08-01
The diagnosis of developmental dyslexia (DD) is reliant on a discrepancy between intellectual functioning and reading achievement. Discrepancy-based formulae have frequently been employed to establish the significance of the difference between 'intelligence' and 'actual' reading achievement. These formulae, however, often fail to take into consideration test reliability and the error associated with a single test score. This paper provides an illustration of the potential effects that test reliability and measurement error can have on the diagnosis of dyslexia, with particular reference to discrepancy models. The roles of reliability and standard error of measurement (SEM) in classic test theory are also briefly reviewed. This is followed by illustrations of how SEM and test reliability can aid interpretation of a simple discrepancy-based formula for DD. It is proposed that a lack of consideration of test theory in the use of discrepancy-based models of DD can lead to misdiagnosis (both false positives and false negatives). Further, misdiagnosis in research samples affects reproducibility and generalizability of findings. This, in turn, may explain current inconsistencies in research on the perceptual, sensory, and motor correlates of dyslexia.
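To make the role of SEM concrete, here is a brief worked example using the standard classical-test-theory formula SEM = SD * sqrt(1 - reliability). The scale parameters and scores are illustrative, not taken from the paper.

```python
# Worked example: standard error of measurement and the confidence band it
# places around an observed score on an IQ-style scale.
import math

sd, reliability = 15.0, 0.90            # illustrative scale SD and reliability
sem = sd * math.sqrt(1 - reliability)   # SEM = SD * sqrt(1 - reliability)
observed_iq, observed_reading = 110, 85
band = 1.96 * sem                       # half-width of a 95% confidence band

print(f"SEM = {sem:.1f} points; 95% CI on IQ: "
      f"{observed_iq - band:.0f}-{observed_iq + band:.0f}")
# A 25-point 'discrepancy' may not be reliable once both tests' error bands
# (and the unreliability of difference scores) are taken into account.
```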
NASA Astrophysics Data System (ADS)
Wang, Wenbo; Paliwal, Jitendra
2005-09-01
With the outbreak of Bovine Spongiform Encephalopathy (BSE, commonly known as mad cow disease) in 1987 in the United Kingdom and a recent case discovered in Alberta, more and more emphasis is being placed internationally on food and farm feed quality and safety. The disease is believed to be spread through farm feed contamination by animal byproducts in the form of meat-and-bone-meal (MBM). This paper reviews the techniques available for enforcing legislation on feed safety. The standard microscopy method, although highly sensitive, is laborious and costly. A method to routinely screen farm feed for contamination would help reduce the complexity of safety inspection. A hyperspectral imaging system working in the near-infrared wavelength region of 1100-1600 nm was used to study the possibility of detecting contamination of ground broiler feed by ground pork. Hyperspectral images of raw broiler feed, ground broiler feed, ground pork, and contaminated feed samples were acquired. Raw broiler feed samples were found to possess comparatively large spectral variations due to light scattering effects. Ground feed adulterated with 1%, 3%, 5%, and 10% ground pork was tested to identify feed contamination. Discriminant analysis using Mahalanobis distance showed that a model trained using pure ground feed samples and pure ground pork samples resulted in 100% false negative errors for all test replicates of contaminated samples. A discriminant model trained with pure ground feed samples and 10% contamination level samples resulted in a 12.5% false positive error and 0% false negative error.
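The following minimal Python sketch shows the kind of two-class Mahalanobis-distance discriminant described above, assigning a sample to the nearest class in feature space. The simulated five-dimensional "spectral" features and class parameters are stand-ins, not the study's data or pipeline.

```python
# Sketch: two-class Mahalanobis-distance discriminant for pure feed vs. pork.
import numpy as np

rng = np.random.default_rng(1)
feed = rng.normal(0.0, 1.0, size=(100, 5))   # pure ground feed feature vectors
pork = rng.normal(2.0, 1.0, size=(100, 5))   # pure ground pork feature vectors

def mahalanobis_classifier(classes):
    """Build a classifier assigning a sample to the nearest class centroid."""
    stats = []
    for X in classes:
        mu = X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        stats.append((mu, cov_inv))
    def classify(x):
        # Squared Mahalanobis distance to each class; pick the smallest.
        d2 = [float((x - mu) @ cov_inv @ (x - mu)) for mu, cov_inv in stats]
        return int(np.argmin(d2))
    return classify

classify = mahalanobis_classifier([feed, pork])
sample = 0.9 * feed[0] + 0.1 * pork[0]       # crude 10% contamination mixture
print("assigned class:", classify(sample))   # 0 = feed, 1 = pork
```

A mildly contaminated mixture sits much closer to the pure-feed centroid, which is exactly why a model trained only on the two pure classes misses all contaminated samples.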
Virgilio, Massimiliano; Jordaens, Kurt; Breman, Floris C; Backeljau, Thierry; De Meyer, Marc
2012-01-01
We propose a general working strategy to deal with incomplete reference libraries in the DNA barcoding identification of species. Considering that (1) queries with a large genetic distance from their best DNA barcode match are more likely to be misidentified and (2) imposing a distance threshold profitably reduces identification errors, we modelled relationships between identification performance and distance thresholds in four DNA barcode libraries of Diptera (n = 4270), Lepidoptera (n = 7577), Hymenoptera (n = 2067) and Tephritidae (n = 602 DNA barcodes). In all cases, more restrictive distance thresholds produced a gradual increase in the proportion of true negatives, a gradual decrease in false positives and more abrupt variations in the proportions of true positives and false negatives. More restrictive distance thresholds improved precision, yet negatively affected accuracy due to the higher proportions of queries discarded (viz. having a distance to the best match above the threshold). Using a simple linear regression we calculated an ad hoc distance threshold for the tephritid library producing an estimated relative identification error <0.05. In line with expectations, when we used this threshold for the identification of 188 independently collected tephritids, less than 5% of queries with a distance to the best match below the threshold were misidentified. Ad hoc thresholds can be calculated for each particular reference library of DNA barcodes and should be used as a cut-off mark defining whether we can proceed with identifying the query with a known estimated error probability (e.g. 5%) or whether we should discard the query and consider alternative/complementary identification methods.
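The decision rule described above reduces to a simple accept-or-discard test on the query-to-best-match distance. The sketch below illustrates it with invented species names and an invented threshold value.

```python
# Sketch: threshold-based DNA barcode identification. Accept the best-match
# identification only when the query-to-best-match distance is below an
# ad hoc threshold; otherwise discard the query.
def identify(query_best_match_distance, best_match_species, threshold):
    """Return the species name, or None when the query should be discarded."""
    if query_best_match_distance <= threshold:
        return best_match_species   # proceed, with a known estimated error rate
    return None                     # too distant: use alternative methods

# Example: a query at 1.2% distance with a hypothetical 2.0% threshold
print(identify(0.012, "Ceratitis capitata", 0.020))  # -> Ceratitis capitata
print(identify(0.035, "Ceratitis capitata", 0.020))  # -> None (discarded)
```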
Dataset for Testing Contamination Source Identification Methods for Water Distribution Networks
This dataset includes the results of a simulation study using the source inversion techniques available in the Water Security Toolkit. The data was created to test the different techniques for accuracy, specificity, false positive rate, and false negative rate. The tests examined different parameters including measurement error, modeling error, injection characteristics, time horizon, network size, and sensor placement. The water distribution system network models that were used in the study are also included in the dataset. This dataset is associated with the following publication: Seth, A., K. Klise, J. Siirola, T. Haxton, and C. Laird. Testing Contamination Source Identification Methods for Water Distribution Networks. Journal of Environmental Division, Proceedings of the American Society of Civil Engineers. ASCE, Reston, VA, USA (2016).
Do juries meet our expectations?
Arkes, Hal R; Mellers, Barbara A
2002-12-01
Surveys of public opinion indicate that people have high expectations for juries. When it comes to serious crimes, most people want errors of convicting the innocent (false positives) or acquitting the guilty (false negatives) to fall well below 10%. Using expected utility theory, Bayes' Theorem, signal detection theory, and empirical evidence from detection studies of medical decision making, eyewitness testimony, and weather forecasting, we argue that the frequency of mistakes probably far exceeds these "tolerable" levels. We are not arguing against the use of juries. Rather, we point out that a closer look at jury decisions reveals a serious gap between what we expect from juries and what probably occurs. When deciding issues of guilt and/or punishing convicted criminals, we as a society should recognize and acknowledge the abundance of error.
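To illustrate the Bayes' Theorem reasoning invoked above, the short computation below shows how plausible juror error rates can translate into a false-conviction probability well above the 10% the public regards as tolerable. The prior, sensitivity, and specificity values are illustrative assumptions, not figures from the article.

```python
# Worked example (not from the paper) of the Bayesian logic behind jury error.
p_guilty = 0.50      # assumed prior probability the defendant is guilty
sensitivity = 0.80   # assumed P(convict | guilty)
specificity = 0.80   # assumed P(acquit | innocent)

p_convict = sensitivity * p_guilty + (1 - specificity) * (1 - p_guilty)
# Bayes' theorem: probability a convicted defendant is actually innocent
p_innocent_given_convicted = (1 - specificity) * (1 - p_guilty) / p_convict
print(f"P(innocent | convicted) = {p_innocent_given_convicted:.2f}")  # 0.20
```

Even with jurors who are right 80% of the time in both directions, one in five convictions in this scenario would be a false positive, double the rate most survey respondents say they would accept.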
The role of ethics in shale gas policies.
de Melo-Martín, Inmaculada; Hays, Jake; Finkel, Madelon L
2014-02-01
The United States has experienced a boom in natural gas production due to recent technological innovations that have enabled natural gas to be produced from unconventional sources, such as shale. There has been much discussion about the costs and benefits of developing shale gas among scientists, policy makers, and the general public. The debate has typically revolved around potential gains in economics, employment, energy independence, and national security, as well as potential harms to the environment, the climate, and public health. In the face of scientific uncertainty, national and international governments must make decisions on how to proceed. So far, the results have varied, with some governments banning the process, others enacting moratoria until it is better understood, and others explicitly sanctioning shale gas development. These policies reflect legislatures' preferences for avoiding either false negative or false positive errors. Here we argue that policy makers have a prima facie duty to minimize false negatives, based on three considerations: (1) protection from serious harm generally takes precedence over the enhancement of welfare; (2) minimizing false negatives in this case is more respectful of people's autonomy; and (3) alternative solutions exist that may provide many of the same benefits while minimizing many of the harms. © 2013.
Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error.
Raab, Stephen S; Grzybicki, Dana Marie; Sudilovsky, Daniel; Balassanian, Ronald; Janosky, Janine E; Vrbin, Colleen M
2006-10-01
Our objective was to determine whether the Toyota Production System process redesign resulted in diagnostic error reduction for patients who underwent cytologic evaluation of thyroid nodules. In this longitudinal, nonconcurrent cohort study, we compared the diagnostic error frequency of a thyroid aspiration service before and after implementation of error reduction initiatives consisting of adoption of a standardized diagnostic terminology scheme and an immediate interpretation service. A total of 2,424 patients underwent aspiration. Following terminology standardization, the false-negative rate decreased from 41.8% to 19.1% (P = .006), the specimen nondiagnostic rate increased from 5.8% to 19.8% (P < .001), and the sensitivity increased from 70.2% to 90.6% (P < .001). Cases with an immediate interpretation had a lower noninterpretable specimen rate than those without immediate interpretation (P < .001). Toyota process change led to significantly fewer diagnostic errors for patients who underwent thyroid fine-needle aspiration.
Uga, Minako; Dan, Ippeita; Dan, Haruka; Kyutoku, Yasushi; Taguchi, Y-h; Watanabe, Eiju
2015-01-01
Recent advances in multichannel functional near-infrared spectroscopy (fNIRS) allow wide coverage of cortical areas while entailing the necessity to control family-wise errors (FWEs) due to increased multiplicity. Conventionally, the Bonferroni method has been used to control FWE. While Type I errors (false positives) can be strictly controlled, the application of a large number of channel settings may inflate the chance of Type II errors (false negatives). The Bonferroni-based methods are especially stringent in controlling Type I errors of the most activated channel with the smallest p value. To maintain a balance between Types I and II errors, effective multiplicity (Meff) derived from the eigenvalues of correlation matrices is a method that has been introduced in genetic studies. Thus, we explored its feasibility in multichannel fNIRS studies. Applying the Meff method to three kinds of experimental data with different activation profiles, we performed resampling simulations and found that Meff was controlled at 10 to 15 in a 44-channel setting. Consequently, the number of significantly activated channels remained almost constant regardless of the number of measured channels. We demonstrated that the Meff approach can be an effective alternative to Bonferroni-based methods for multichannel fNIRS studies. PMID:26157982
Sorensen, James P R; Baker, Andy; Cumberland, Susan A; Lapworth, Dan J; MacDonald, Alan M; Pedley, Steve; Taylor, Richard G; Ward, Jade S T
2018-05-01
We assess the use of fluorescent dissolved organic matter at excitation-emission wavelengths of 280 nm and 360 nm, termed tryptophan-like fluorescence (TLF), as an indicator of faecally contaminated drinking water. A significant logistic regression model was developed using TLF as a predictor of thermotolerant coliforms (TTCs) using data from groundwater- and surface water-derived drinking water sources in India, Malawi, South Africa and Zambia. A TLF threshold of 1.3 ppb dissolved tryptophan was selected to classify TTC contamination. Validation of the TLF threshold indicated a false-negative error rate of 15% and a false-positive error rate of 18%. The threshold was unsuccessful at classifying contaminated sources containing <10 TTC cfu per 100 mL, which we consider the current limit of detection. If only sources above this limit were classified, the false-negative error rate was very low at 4%. TLF intensity was very strongly correlated with TTC concentration (ρs = 0.80). A higher threshold of 6.9 ppb dissolved tryptophan is proposed to indicate heavily contaminated sources (≥100 TTC cfu per 100 mL). Current commercially available fluorimeters are easy to use, suitable for use online and in remote environments, require neither reagents nor consumables, and crucially provide an instantaneous reading. TLF measurements are not appreciably impaired by common interferents, such as pH, turbidity and temperature, within typical natural ranges. The technology is a viable option for the real-time screening of faecally contaminated drinking water globally. Copyright © 2017 Natural Environment Research Council (NERC), as represented by the British Geological Survey (BGS). Published by Elsevier B.V. All rights reserved.
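The two thresholds reported above define a simple three-way screening rule. The sketch below encodes that rule directly; the sample readings are invented, and the cutoffs are the 1.3 ppb and 6.9 ppb values from the abstract.

```python
# Sketch: TLF-based screening rule using the thresholds reported above.
def classify_source(tlf_ppb):
    """Classify a drinking water source from its TLF reading (ppb)."""
    if tlf_ppb >= 6.9:
        return "heavily contaminated (>=100 TTC cfu/100 mL expected)"
    if tlf_ppb >= 1.3:
        return "contaminated (TTCs likely present)"
    return "no TTC contamination indicated"

for reading in (0.4, 2.1, 8.3):   # invented field readings
    print(reading, "->", classify_source(reading))
```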
A soft kinetic data structure for lesion border detection.
Kockara, Sinan; Mete, Mutlu; Yip, Vincent; Lee, Brendan; Aydin, Kemal
2010-06-15
Medical imaging and image processing techniques, ranging from microscopic to macroscopic, have become core components of diagnostic procedures that assist dermatologists in their medical decision-making. Computer-aided segmentation and border detection in dermoscopic images is one of the core components of diagnostic procedures and therapeutic interventions for skin cancer. Automated assessment tools for dermoscopic images have become an important research field, mainly because of inter- and intra-observer variation in human interpretation. In this study, a novel approach for automatic border detection in dermoscopic images, the graph spanner, is proposed. The approach uses a proximity graph representation of dermoscopic images to detect regions and borders in skin lesions. The graph spanner approach is examined on a set of 100 dermoscopic images whose borders, manually drawn by a dermatologist, are used as the ground truth. Error rates, false positives and false negatives, along with true positives and true negatives, are quantified by digitally comparing results with the manually determined borders. The results show that the highest precision and recall rates obtained in determining lesion boundaries are 100%. Accuracy averages 97.72% and the mean border error is 2.28% over the whole dataset.
Quality assuring HIV point of care testing using whole blood samples.
Dare-Smith, Raellene; Badrick, Tony; Cunningham, Philip; Kesson, Alison; Badman, Susan
2016-08-01
The Royal College of Pathologists Australasia Quality Assurance Programs (RCPAQAP), have offered dedicated external quality assurance (EQA) for HIV point of care testing (PoCT) since 2011. Prior to this, EQA for these tests was available within the comprehensive human immunodeficiency virus (HIV) module. EQA testing for HIV has typically involved the supply of serum or plasma, while in the clinic or community based settings HIV PoCT is generally performed using whole blood obtained by capillary finger-stick collection. RCPAQAP has offered EQA for HIV PoCT using stabilised whole blood since 2014. A total of eight surveys have been undertaken over a period of 2 years from 2014 to 2015. Of the 962 responses received, the overall consensus rate was found to be 98% (941/962). A total of 21 errors were detected. The majority of errors were attributable to false reactive HIV p24 antigen results (9/21, 43%), followed by false reactive HIV antibody results (8/21, 38%). There were 4/21 (19%) false negative HIV antibody results and no false negative HIV p24 antigen results reported. Overall performance was observed to vary minimally between surveys, from a low of 94% up to 99% concordant. Encouraging levels of testing proficiency for HIV PoCT are indicated by these data, but they also confirm the need for HIV PoCT sites to participate in external quality assurance programs to ensure the ongoing provision of high quality patient care. Copyright © 2016 Royal College of Pathologists of Australasia. All rights reserved.
Savant, Deepika; Bajaj, Jaya; Gimenez, Cecilia; Rafael, Oana C; Mirzamani, Neda; Chau, Karen; Klein, Melissa; Das, Kasturi
2017-01-01
Urine cytology is the most frequently utilized test to detect urothelial cancer. Secondary bladder neoplasms need to be recognized, as this impacts patient management. We report our experience with nonurothelial malignancies (NUM) detected in urine cytology over a 10-year period. A 10-year retrospective search for patients with biopsy-proven NUM of the urothelial tract yielded 25 urine samples from 14 patients. Two cytopathologists blinded to the original cytology diagnosis reviewed the cytology and histology slides. The incidence, cytomorphologic features, diagnostic accuracy, factors influencing the diagnostic accuracy, and clinical impact of the cytology result were studied. The incidence of NUM was <1%. The male:female ratio was 1.3. An abnormality was detected in 60% of the cases; however, in only 4% of the cases was a primary site identified accurately. Of the false negatives, 96% were deemed sampling errors and 4% interpretational. Patient management was not impacted in any of the false-negative cases owing to concurrent or past tissue diagnosis. Colon cancer was the most frequent secondary tumor. Sampling error accounted for the false-negative results. Necrosis and a dirty background were often associated with metastatic lesions from the colon. Obtaining a history of a primary tumor elsewhere was a key factor in the diagnosis of a metastatic lesion. Hematopoietic malignancies remain a diagnostic challenge. Cytospin preparations were superior to monolayer (ThinPrep) technology for evaluating nuclear detail and background material. Diagnostic accuracy was improved by obtaining immunohistochemistry. Diagn. Cytopathol. 2017;45:22-28. © 2016 Wiley Periodicals, Inc.
Sun, Lei; Dimitromanolakis, Apostolos
2014-01-01
Pedigree errors and cryptic relatedness often appear in families or population samples collected for genetic studies. If not identified, these issues can lead to either increased false negatives or false positives in both linkage and association analyses. To identify pedigree errors and cryptic relatedness among individuals from the 20 San Antonio Family Studies (SAFS) families and cryptic relatedness among the 157 putatively unrelated individuals, we apply PREST-plus to the genome-wide single-nucleotide polymorphism (SNP) data and analyze estimated identity-by-descent (IBD) distributions for all pairs of genotyped individuals. Based on the given pedigrees alone, PREST-plus identifies the following putative pairs: 1091 full-sib, 162 half-sib, 360 grandparent-grandchild, 2269 avuncular, 2717 first cousin, 402 half-avuncular, 559 half-first cousin, 2 half-sib+first cousin, 957 parent-offspring and 440,546 unrelated. Using the genotype data, PREST-plus detects 7 mis-specified relative pairs, with their IBD estimates clearly deviating from the null expectations, and it identifies 4 cryptically related pairs involving 7 individuals from 6 families.
Context-sensitive extraction of tree crown objects in urban areas using VHR satellite images
NASA Astrophysics Data System (ADS)
Ardila, Juan P.; Bijker, Wietske; Tolpekin, Valentyn A.; Stein, Alfred
2012-04-01
Municipalities need accurate and updated inventories of urban vegetation in order to manage green resources and estimate their return on investment in urban forestry activities. Earlier studies have shown that semi-automatic tree detection using remote sensing is a challenging task. This study aims to develop a reproducible geographic object-based image analysis (GEOBIA) methodology to locate and delineate tree crowns in urban areas using high resolution imagery. We propose a GEOBIA approach that considers the spectral, spatial and contextual characteristics of tree objects in the urban space. The study presents classification rules that exploit object features at multiple segmentation scales, modifying the labeling and shape of image-objects. The GEOBIA methodology was implemented on QuickBird images acquired over the cities of Enschede and Delft (The Netherlands), resulting in identification rates of 70% and 82%, respectively. False negative errors were concentrated on small trees, and false positive errors in private gardens. The quality of crown boundaries was acceptable, with an overall delineation error <0.24 outside of gardens and backyards.
Type I and Type II error concerns in fMRI research: re-balancing the scale
Cunningham, William A.
2009-01-01
Statistical thresholding (i.e. P-values) in fMRI research has become increasingly conservative over the past decade in an attempt to diminish Type I errors (i.e. false alarms) to a level traditionally allowed in behavioral science research. In this article, we examine the unintended negative consequences of this single-minded devotion to Type I errors: increased Type II errors (i.e. missing true effects), a bias toward studying large rather than small effects, a bias toward observing sensory and motor processes rather than complex cognitive and affective processes and deficient meta-analyses. Power analyses indicate that the reductions in acceptable P-values over time are producing dramatic increases in the Type II error rate. Moreover, the push for a mapwide false discovery rate (FDR) of 0.05 is based on the assumption that this is the FDR in most behavioral research; however, this is an inaccurate assessment of the conventions in actual behavioral research. We report simulations demonstrating that combined intensity and cluster size thresholds such as P < 0.005 with a 10 voxel extent produce a desirable balance between Types I and II error rates. This joint threshold produces high but acceptable Type II error rates and produces a FDR that is comparable to the effective FDR in typical behavioral science articles (while a 20 voxel extent threshold produces an actual FDR of 0.05 with relatively common imaging parameters). We recommend a greater focus on replication and meta-analysis rather than emphasizing single studies as the unit of analysis for establishing scientific truth. From this perspective, Type I errors are self-erasing because they will not replicate, thus allowing for more lenient thresholding to avoid Type II errors. PMID:20035017
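The joint intensity-plus-extent rule recommended above can be expressed in a few lines. The sketch below thresholds a simulated voxelwise p-map at P < 0.005 and keeps only clusters of at least 10 contiguous voxels; the map dimensions and random data are purely illustrative, not fMRI results.

```python
# Sketch: joint intensity (p < 0.005) and cluster-extent (>= 10 voxels)
# thresholding of a simulated statistical map.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(42)
p_map = rng.uniform(size=(40, 40, 20))            # simulated voxelwise p-values

voxel_mask = p_map < 0.005                        # intensity threshold
labels, n_clusters = ndimage.label(voxel_mask)    # find contiguous clusters
sizes = ndimage.sum(voxel_mask, labels, index=range(1, n_clusters + 1))

# Keep only voxels belonging to clusters with at least 10 voxels
keep_ids = [i + 1 for i, s in enumerate(sizes) if s >= 10]
surviving = np.isin(labels, keep_ids)

print("voxels passing p < 0.005 alone:", int(voxel_mask.sum()))
print("voxels passing the joint threshold:", int(surviving.sum()))
```

On pure noise, isolated voxels pass the intensity cut but essentially never form 10-voxel clusters, which is the intuition behind using extent to control false positives without an extreme voxelwise P-value.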
Özdemir, Vural; Springer, Simon
2018-03-01
Diversity is increasingly at stake in the early 21st century. Diversity is often conceptualized across ethnicity, gender, socioeconomic status, sexual preference, and professional credentials, among other categories of difference. These are important and relevant considerations, and yet they are incomplete. Diversity also rests in the way we frame questions long before answers are sought. Such diversity in the framing (epistemology) of scientific and societal questions is important because it influences the types of data, results, and impacts produced by research. Errors in the framing of a research question, whether in technical science or social science, are known as type III errors, as opposed to the better known type I (false positives) and type II errors (false negatives). Kimball defined "error of the third kind" as giving the right answer to the wrong problem. Raiffa described the type III error as correctly solving the wrong problem. Type III errors are upstream or design flaws, often driven by unchecked human values and power, and can adversely impact an entire innovation ecosystem, wasting money, time, careers, and precious resources by focusing on the wrong or incorrectly framed question and hypothesis. Decades may pass while technology experts, scientists, social scientists, funding agencies and management consultants continue to tackle questions that suffer from type III errors. We propose a new diversity metric, the Frame Diversity Index (FDI), based on the hitherto neglected diversities in knowledge framing. The FDI would be positively correlated with epistemological diversity and technological democracy, and inversely correlated with the prevalence of type III errors in innovation ecosystems, consortia, and knowledge networks. We suggest that the FDI can usefully measure (and prevent) type III error risks in innovation ecosystems, and help broaden the concepts and practices of diversity and inclusion in science, technology, innovation and society.
Neural network for photoplethysmographic respiratory rate monitoring
NASA Astrophysics Data System (ADS)
Johansson, Anders
2001-10-01
The photoplethysmographic signal (PPG) includes respiratory components seen as frequency modulation of the heart rate (respiratory sinus arrhythmia, RSA), amplitude modulation of the cardiac pulse, and respiratory induced intensity variations (RIIV) in the PPG baseline. The aim of this study was to evaluate the accuracy of these components in determining respiratory rate, and to combine the components in a neural network for improved accuracy. The primary goal is to design a PPG ventilation monitoring system. PPG signals were recorded from 15 healthy subjects. From these signals, the systolic waveform, diastolic waveform, respiratory sinus arrhythmia, pulse amplitude and RIIV were extracted. By using simple algorithms, the rates of false positive and false negative detection of breaths were calculated for each of the five components in a separate analysis. Furthermore, a simple neural network (NN) was tried out in a combined pattern recognition approach. In the separate analysis, the error rates (sum of false positives and false negatives) ranged from 9.7% (pulse amplitude) to 14.5% (systolic waveform). The corresponding value of the NN analysis was 9.5-9.6%.
Imperfect pathogen detection from non-invasive skin swabs biases disease inference
DiRenzo, Graziella V.; Grant, Evan H. Campbell; Longo, Ana; Che-Castaldo, Christian; Zamudio, Kelly R.; Lips, Karen
2018-01-01
1. Conservation managers rely on accurate estimates of disease parameters, such as pathogen prevalence and infection intensity, to assess disease status of a host population. However, these disease metrics may be biased if low-level infection intensities are missed by sampling methods or laboratory diagnostic tests. These false negatives underestimate pathogen prevalence and overestimate mean infection intensity of infected individuals. 2. Our objectives were two-fold. First, we quantified false negative error rates of Batrachochytrium dendrobatidis on non-invasive skin swabs collected from an amphibian community in El Copé, Panama. We swabbed amphibians twice in sequence, and we used a recently developed hierarchical Bayesian estimator to assess disease status of the population. Second, we developed a novel hierarchical Bayesian model to simultaneously account for imperfect pathogen detection from field sampling and laboratory diagnostic testing. We evaluated the performance of the model using simulations and varying sampling design to quantify the magnitude of bias in estimates of pathogen prevalence and infection intensity. 3. We show that Bd detection probability from skin swabs was related to host infection intensity, where Bd infections < 10 zoospores have < 95% probability of being detected. If imperfect Bd detection was not considered, then Bd prevalence was underestimated by as much as 16%. In the Bd-amphibian system, this indicates a need to correct for imperfect pathogen detection caused by skin swabs in persisting host communities with low-level infections. More generally, our results have implications for study designs in other disease systems, particularly those with similar objectives, biology, and sampling decisions. 4. Uncertainty in pathogen detection is an inherent property of most sampling protocols and diagnostic tests, where the magnitude of bias depends on the study system, type of infection, and false negative error rates. Given that it may be difficult to know this information in advance, we advocate that the most cautious approach is to assume all errors are possible and to accommodate them by adjusting sampling designs. The modeling framework presented here improves the accuracy in estimating pathogen prevalence and infection intensity.
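The direction of the two biases described above (prevalence understated, mean intensity of "detected" infections overstated) follows directly from intensity-dependent detection. The back-of-envelope simulation below reproduces that pattern; the prevalence, intensity distribution, and detection curve are invented, not the El Copé estimates.

```python
# Sketch: how intensity-dependent detection biases naive disease metrics.
import numpy as np

rng = np.random.default_rng(5)
n_hosts = 1000
infected = rng.random(n_hosts) < 0.40                     # true prevalence 40%
intensity = np.where(infected, rng.lognormal(1.0, 1.5, n_hosts), 0.0)

# Detection probability rises with infection intensity: low loads get missed
p_detect = 1 - np.exp(-0.5 * intensity)
detected = infected & (rng.random(n_hosts) < p_detect)

print(f"true prevalence: {infected.mean():.2f}, naive: {detected.mean():.2f}")
print(f"mean intensity, all infected: {intensity[infected].mean():.1f}; "
      f"detected only: {intensity[detected].mean():.1f}")
```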
Brébion, G; Ohlsen, R I; Bressan, R A; David, A S
2012-12-01
Previous research has shown associations between source memory errors and hallucinations in patients with schizophrenia. We bring together here findings from a broad memory investigation to specify better the type of source memory failure that is associated with auditory and visual hallucinations. Forty-one patients with schizophrenia and 43 healthy participants underwent a memory task involving recall and recognition of lists of words, recognition of pictures, memory for temporal and spatial context of presentation of the stimuli, and remembering whether target items were presented as words or pictures. False recognition of words and pictures was associated with hallucination scores. The extra-list intrusions in free recall were associated with verbal hallucinations whereas the intra-list intrusions were associated with a global hallucination score. Errors in discriminating the temporal context of word presentation and the spatial context of picture presentation were associated with auditory hallucinations. The tendency to remember verbal labels of items as pictures of these items was associated with visual hallucinations. Several memory errors were also inversely associated with affective flattening and anhedonia. Verbal and visual hallucinations are associated with confusion between internal verbal thoughts or internal visual images and perception. In addition, auditory hallucinations are associated with failure to process or remember the context of presentation of the events. Certain negative symptoms have an opposite effect on memory errors.
A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.
2013-07-01
There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm's risk by considering its performance over a sample, the probability distribution of threat sources, and the consequences of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values that best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, in this paper we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
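A minimal version of the risk calculation described above is sketched below: pick the alarm threshold that minimizes expected loss, weighting each error type by its consequence. The signal distributions, costs, and prior are illustrative assumptions, not values from the report.

```python
# Sketch: decision-theoretic threshold selection minimizing expected loss.
import numpy as np
from scipy import stats

cost_fp, cost_fn = 1.0, 50.0       # assumed consequences of each error type
p_threat = 0.001                   # assumed prior that a vehicle is a threat

background = stats.norm(0.0, 1.0)  # detector signal, benign vehicles (assumed)
threat = stats.norm(3.0, 1.0)      # detector signal, threat vehicles (assumed)

thresholds = np.linspace(-2, 6, 401)
risk = (cost_fp * (1 - p_threat) * background.sf(thresholds)   # false alarms
        + cost_fn * p_threat * threat.cdf(thresholds))         # missed threats
best = thresholds[np.argmin(risk)]
print(f"risk-minimizing alarm threshold: {best:.2f}")
```

Because both error rates enter the objective at once, changing the assumed costs or prior shifts the optimal threshold, rather than holding one error rate fixed while minimizing the other.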
Saeed, Mohammad
2017-05-01
Systemic lupus erythematosus (SLE) is a complex disorder. Genetic association studies of complex disorders suffer from the following three major issues: phenotypic heterogeneity, false positive (type I error), and false negative (type II error) results. Hence, genes with low to moderate effects are missed in standard analyses, especially after statistical corrections. OASIS is a novel linkage disequilibrium clustering algorithm that can potentially address false positives and negatives in genome-wide association studies (GWAS) of complex disorders such as SLE. OASIS was applied to two SLE dbGAP GWAS datasets (6077 subjects; ∼0.75 million single-nucleotide polymorphisms). OASIS identified three known SLE genes viz. IFIH1, TNIP1, and CD44, not previously reported using these GWAS datasets. In addition, 22 novel loci for SLE were identified and the 5 SLE genes previously reported using these datasets were verified. OASIS methodology was validated using single-variant replication and gene-based analysis with GATES. This led to the verification of 60% of OASIS loci. New SLE genes that OASIS identified and were further verified include TNFAIP6, DNAJB3, TTF1, GRIN2B, MON2, LATS2, SNX6, RBFOX1, NCOA3, and CHAF1B. This study presents the OASIS algorithm, software, and the meta-analyses of two publicly available SLE GWAS datasets along with the novel SLE genes. Hence, OASIS is a novel linkage disequilibrium clustering method that can be universally applied to existing GWAS datasets for the identification of new genes.
Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan
2010-09-01
Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real-world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvements in the precision of sample processing and qPCR reactions would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator where a universal assay for fecal sources of that indicator exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
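The core correction idea, writing the observed result as a total-probability mixture of true status and error rates and then propagating uncertainty by Monte Carlo, can be sketched compactly for the simpler case of a positive rate (the classic Rogan-Gladen form). The error-rate distributions and observed rate below are illustrative assumptions, standing in for estimates from reference fecal samples of known origin; the paper's full model additionally handles concentrations.

```python
# Monte Carlo sketch: correcting an observed qPCR positive rate for false
# positives and false negatives via the Law of Total Probability.
import numpy as np

rng = np.random.default_rng(7)
n_draws = 100_000
observed_rate = 0.30                     # fraction of samples testing positive

p_fp = rng.beta(2, 50, n_draws)          # false positive probability draws
p_fn = rng.beta(5, 45, n_draws)          # false negative probability draws

# Law of Total Probability: observed = true*(1 - FN) + (1 - true)*FP
true_rate = (observed_rate - p_fp) / (1.0 - p_fn - p_fp)
true_rate = np.clip(true_rate, 0.0, 1.0)

print(f"expected true positive rate: {true_rate.mean():.3f}")
print(f"95% interval: {np.round(np.percentile(true_rate, [2.5, 97.5]), 3)}")
```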
Expanded newborn metabolic screening programme in Hong Kong: a three-year journey.
Chong, S C; Law, L K; Hui, J; Lai, C Y; Leung, T Y; Yuen, Y P
2017-10-01
No universal expanded newborn screening service for inborn errors of metabolism is available in Hong Kong despite its long history in developed western countries and rapid development in neighbouring Asian countries. To increase the local awareness and preparedness, the Centre of Inborn Errors of Metabolism of the Chinese University of Hong Kong started a private inborn errors of metabolism screening programme in July 2013. This study aimed to describe the results and implementation of this screening programme. We retrieved the demographics of the screened newborns and the screening results from July 2013 to July 2016. These data were used to calculate quality metrics such as call-back rate and false-positive rate. Clinical details of true-positive and false-negative cases and their outcomes were described. Finally, the call-back logistics for newborns with positive screening results were reviewed. During the study period, 30 448 newborns referred from 13 private and public units were screened. Of the samples, 98.3% were collected within 7 days of life. The overall call-back rate was 0.128% (39/30 448) and the false-positive rate was 0.105% (32/30 448). Six neonates were confirmed to have inborn errors of metabolism, including two cases of medium-chain acyl-coenzyme A dehydrogenase deficiency, one case of carnitine-acylcarnitine translocase deficiency, and three milder conditions. One case of maternal carnitine uptake defect was diagnosed. All patients remained asymptomatic at their last follow-up. The Centre of Inborn Errors of Metabolism has established a comprehensive expanded newborn screening programme for selected inborn errors of metabolism. It sets a standard against which the performance of other private newborn screening tests can be compared. Our experience can also serve as a reference for policymakers when they contemplate establishing a government-funded universal expanded newborn screening programme in the future.
Estimating error rates for firearm evidence identifications in forensic science
Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan
2018-01-01
Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
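Schematically, the CMC decision rule declares a match when the number of congruent matching cells reaches a cutoff, and error rates follow from probability mass functions fitted to known matching and non-matching comparisons. The binomial PMFs below are simple stand-ins for the paper's fitted models; the cell count, cutoff, and probabilities are all assumptions.

```python
# Sketch: error-rate estimation for a CMC-style declared-match rule.
from scipy import stats

n_cells, cutoff = 30, 6
pmf_nonmatch = stats.binom(n_cells, 0.02)  # CMC count, known non-matches
pmf_match = stats.binom(n_cells, 0.60)     # CMC count, known matches

false_positive = pmf_nonmatch.sf(cutoff - 1)  # P(CMC >= cutoff | non-match)
false_negative = pmf_match.cdf(cutoff - 1)    # P(CMC < cutoff | match)
print(f"estimated false positive rate: {false_positive:.2e}")
print(f"estimated false negative rate: {false_negative:.2e}")
```

The wide separation between the two distributions is what drives both error rates toward zero, mirroring the widely separated CMC score distributions reported for the cartridge case tests.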
NASA Astrophysics Data System (ADS)
Leiva, Josue Nahun; Robbins, James; Saraswat, Dharmendra; She, Ying; Ehsani, Reza
2017-07-01
This study evaluated the effect of flight altitude and canopy separation of container-grown Fire Chief™ arborvitae (Thuja occidentalis L.) on counting accuracy. Images were taken at 6, 12, and 22 m above the ground using unmanned aircraft systems. Plants were spaced to achieve three canopy separation treatments: 5 cm between canopy edges, canopy edges touching, and 5 cm of canopy edge overlap. Plants were placed on two different ground covers: black fabric and gravel. A counting algorithm was trained using Feature Analyst®. Total counting error, false positives, and unidentified plants were reported for the images analyzed. In general, total counting error was smaller when plants were fully separated. The effect of ground cover on counting accuracy varied with the counting algorithm. Total counting error for plants placed on gravel (-8) was larger than for those on black fabric (-2); however, false positive counts were similar for black fabric (6) and gravel (6). Nevertheless, output images of plants placed on gravel did not show a negative effect of the ground cover but were impacted by differences in image spatial resolution.
Robust Linear Models for Cis-eQTL Analysis.
Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C
2015-01-01
Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly in respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
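The contrast drawn above, an ordinary least squares fit versus a robust linear model on heavy-tailed expression noise, can be sketched for a single cis-eQTL test as follows. The simulated genotypes and expression values are illustrative, and the Huber M-estimator is one common robust choice rather than necessarily the authors' exact specification.

```python
# Sketch: OLS vs. robust (Huber) linear model for one cis-eQTL test.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
genotype = rng.integers(0, 3, size=n).astype(float)  # allelic dosage 0/1/2
noise = rng.standard_t(df=2, size=n)                 # heavy-tailed errors
expression = 0.3 * genotype + noise                  # true effect = 0.3

X = sm.add_constant(genotype)
ols_fit = sm.OLS(expression, X).fit()
rlm_fit = sm.RLM(expression, X, M=sm.robust.norms.HuberT()).fit()

print(f"OLS effect estimate:   {ols_fit.params[1]:.3f} "
      f"(p = {ols_fit.pvalues[1]:.3g})")
print(f"Huber effect estimate: {rlm_fit.params[1]:.3f}")
```

Under t-distributed noise the OLS estimate is dragged around by outliers, inflating its standard error and producing the type II errors the authors emphasize, while the robust fit downweights extreme residuals.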
Error-related negativities elicited by monetary loss and cues that predict loss.
Dunning, Jonathan P; Hajcak, Greg
2007-11-19
Event-related potential studies have reported error-related negativity following both error commission and feedback indicating errors or monetary loss. The present study examined whether error-related negativities could be elicited by a predictive cue presented prior to both the decision and subsequent feedback in a gambling task. Participants were presented with a cue that indicated the probability of reward on the upcoming trial (0, 50, and 100%). Results showed a negative deflection in the event-related potential in response to loss cues compared with win cues; this waveform shared a similar latency and morphology with the traditional feedback error-related negativity.
Rekaya, Romdhane; Smith, Shannon; Hay, El Hamidi; Farhat, Nourhene; Aggrey, Samuel E
2016-01-01
Errors in the binary status of some response traits are frequent in human, animal, and plant applications. These error rates tend to differ between cases and controls because diagnostic and screening tests have different sensitivity and specificity. This increases the inaccuracies of classifying individuals into correct groups, giving rise to both false-positive and false-negative cases. The analysis of these noisy binary responses due to misclassification will undoubtedly reduce the statistical power of genome-wide association studies (GWAS). A threshold model that accommodates varying diagnostic errors between cases and controls was investigated. A simulation study was carried out where several binary data sets (case-control) were generated with varying effects for the most influential single nucleotide polymorphisms (SNPs) and different diagnostic error rate for cases and controls. Each simulated data set consisted of 2000 individuals. Ignoring misclassification resulted in biased estimates of true influential SNP effects and inflated estimates for true noninfluential markers. A substantial reduction in bias and increase in accuracy ranging from 12% to 32% was observed when the misclassification procedure was invoked. In fact, the majority of influential SNPs that were not identified using the noisy data were captured using the proposed method. Additionally, truly misclassified binary records were identified with high probability using the proposed method. The superiority of the proposed method was maintained across different simulation parameters (misclassification rates and odds ratios) attesting to its robustness.
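The data-generating process studied above, case/control labels flipped at different rates, is easy to simulate, and doing so makes the attenuation of apparent SNP effects visible. All parameters below (effect size, error rates, sample size) are illustrative assumptions rather than the paper's simulation settings.

```python
# Sketch: differential misclassification of a binary phenotype attenuates
# the apparent genotype-phenotype association.
import numpy as np

rng = np.random.default_rng(11)
n = 2000
snp = rng.integers(0, 3, size=n)                  # influential SNP genotype
logit = -0.5 + 0.6 * snp
true_status = rng.random(n) < 1 / (1 + np.exp(-logit))

fn_rate, fp_rate = 0.15, 0.05                     # assumed diagnostic errors
observed = true_status.copy()
observed[true_status & (rng.random(n) < fn_rate)] = False   # missed cases
observed[~true_status & (rng.random(n) < fp_rate)] = True   # false cases

# The naive case-control genotype contrast shrinks under noisy labels
print("mean genotype difference, true labels:    ",
      round(snp[true_status].mean() - snp[~true_status].mean(), 3))
print("mean genotype difference, observed labels:",
      round(snp[observed].mean() - snp[~observed].mean(), 3))
```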
NASA Astrophysics Data System (ADS)
Park, So-Hyun; Lee, Dong-Soo; Lee, Yun-Hee; Lee, Seu-Ran; Kim, Min-Ju; Suh, Tae-Suk
2015-09-01
The aim of this work is to demonstrate both the physical and the biological quality assurance (QA) aspects of pretreatment QA for head and neck (H&N) cancer plans treated with volumetric modulated arc therapy (VMAT). Ten H&N plans were studied. The COMPASS® dosimetry analysis system and a free program for calculating tumor control probability (TCP) and normal tissue complication probability (NTCP) were used as the measurement and calculation tools, respectively. The reliability of these tools was verified by a benchmark study in accordance with the TG-166 report. For the physical component of QA, the gamma passing rates and the false negative cases between the calculated and the measured data were evaluated. The biological component of QA was performed based on the equivalent uniform dose (EUD), TCP, and NTCP values. The evaluation was performed for the planning target volumes (PTVs) and the organs at risk (OARs), including the eyes, the lenses, the parotid glands, the esophagus, the spinal cord, and the brainstem. All cases had gamma passing rates above 95% at an acceptance tolerance level with the 3%/3 mm criteria. In addition, false negative instances were present for the PTVs and OARs, and the gamma passing rates exhibited only a weak correlation with false negative cases. For the biological QA, the physical dose errors affected the EUD and the TCP for the PTVs, but no linear correlation existed between them. The EUD and NTCP for the OARs showed random differences that could not be attributed to the dose errors from the physical QA. The differences in the EUD and NTCP between the calculated and the measured results were mainly seen for the parotid glands. This study demonstrates the importance and necessity of QA that encompasses both the physical and the biological aspects for accurate radiation treatment.
Robust Detection of Rare Species Using Environmental DNA: The Importance of Primer Specificity
Wilcox, Taylor M.; McKelvey, Kevin S.; Young, Michael K.; Jane, Stephen F.; Lowe, Winsor H.; Whiteley, Andrew R.; Schwartz, Michael K.
2013-01-01
Environmental DNA (eDNA) is being rapidly adopted as a tool to detect rare animals. Quantitative PCR (qPCR) using probe-based chemistries may represent a particularly powerful tool because of the method’s sensitivity, specificity, and potential to quantify target DNA. However, there has been little work understanding the performance of these assays in the presence of closely related, sympatric taxa. If related species cause any cross-amplification or interference, false positives and negatives may be generated. These errors can be disastrous if false positives lead to overestimate the abundance of an endangered species or if false negatives prevent detection of an invasive species. In this study we test factors that influence the specificity and sensitivity of TaqMan MGB assays using co-occurring, closely related brook trout (Salvelinus fontinalis) and bull trout (S. confluentus) as a case study. We found qPCR to be substantially more sensitive than traditional PCR, with a high probability of detection at concentrations as low as 0.5 target copies/µl. We also found that number and placement of base pair mismatches between the Taqman MGB assay and non-target templates was important to target specificity, and that specificity was most influenced by base pair mismatches in the primers, rather than in the probe. We found that insufficient specificity can result in both false positive and false negative results, particularly in the presence of abundant related species. Our results highlight the utility of qPCR as a highly sensitive eDNA tool, and underscore the importance of careful assay design. PMID:23555689
Correcting false memories: Errors must be noticed and replaced.
Mullet, Hillary G; Marsh, Elizabeth J
2016-04-01
Memory can be unreliable. For example, after reading The new baby stayed awake all night, people often misremember that the new baby cried all night (Brewer, 1977); similarly, after hearing bed, rest, and tired, people often falsely remember that sleep was on the list (Roediger & McDermott, 1995). In general, such false memories are difficult to correct, persisting despite warnings and additional study opportunities. We argue that errors must first be detected to be corrected; consistent with this argument, two experiments showed that false memories were nearly eliminated when conditions facilitated comparisons between participants' errors and corrective feedback (e.g., immediate trial-by-trial feedback that allowed direct comparisons between their responses and the correct information). However, knowledge that they had made an error was insufficient; unless the feedback message also contained the correct answer, the rate of false memories remained relatively constant. On the one hand, there is nothing special about correcting false memories: simply labeling an error as "wrong" is also insufficient for correcting other memory errors, including misremembered facts or mistranslations. However, unlike these other types of errors--which often benefit from the spacing afforded by delayed feedback--false memories require a special consideration: Learners may fail to notice their errors unless the correction conditions specifically highlight them.
Anti-retroviral therapy-induced status epilepticus in "pseudo-HIV serodeconversion".
Etgen, Thorleif; Eberl, Bernhard; Freudenberger, Thomas
2010-01-01
Diligence in the interpretation of results is essential, as the information that can be gained from a psychiatric patient's history is often limited. Nonobservance of established guidelines may lead to a wrong diagnosis, prompt inappropriate therapy, and result in life-threatening situations. Communication errors between hospitals and doctors and uncritical acceptance of prior diagnoses add substantially to this problem. We present a patient with alcohol-related dementia who received anti-retroviral therapy that provoked a non-convulsive status epilepticus. HIV serodeconversion was considered after our laboratory result yielded an HIV-negative status. Critical review of previous diagnostic investigations revealed several errors in the diagnosis of HIV infection, leading to a "pseudo-serodeconversion." Finally, anti-retroviral therapy could be discontinued.
Contingent negative variation (CNV) associated with sensorimotor timing error correction.
Jang, Joonyong; Jones, Myles; Milne, Elizabeth; Wilson, Daniel; Lee, Kwang-Hyuk
2016-02-15
Detection and subsequent correction of sensorimotor timing errors are fundamental to adaptive behavior. Using scalp-recorded event-related potentials (ERPs), we sought to find ERP components that are predictive of error correction performance during rhythmic movements. Healthy right-handed participants were asked to synchronize their finger taps to a regular tone sequence (every 600 ms) while EEG data were continuously recorded. Data from 15 participants were analyzed. Occasional irregularities were built into stimulus presentation timing: 90 ms before (advances: negative shift) or after (delays: positive shift) the expected time point. A tapping condition alternated with a listening condition in which an identical stimulus sequence was presented but participants did not tap. Behavioral error correction was observed immediately following a shift, with a degree of over-correction for positive shifts. Our stimulus-locked ERP data analysis revealed (1) increased auditory N1 amplitude for the positive shift condition and decreased auditory N1 modulation for the negative shift condition, and (2) a second enhanced negativity (N2) in the tapping positive condition compared with the tapping negative condition. In response-locked epochs, we observed a CNV (contingent negative variation)-like negativity with earlier latency in the tapping negative condition than in the tapping positive condition. This CNV-like negativity peaked at around the onset of the subsequent tap: the earlier the peak, the better the error correction performance for negative shifts, while the later the peak, the better the error correction performance for positive shifts. This study showed that the CNV-like negativity was associated with error correction performance during our sensorimotor synchronization task. Auditory N1 and N2 were differentially involved in negative versus positive error correction; however, we did not find evidence for their involvement in behavioral error correction. Overall, our study provides the basis from which further research on the role of the CNV in perceptual and motor timing can be developed.
Emotional false memories in children with learning disabilities.
Mirandola, Chiara; Losito, Nunzia; Ghetti, Simona; Cornoldi, Cesare
2014-02-01
Research has shown that children with learning disabilities (LD) are less prone to evince associative illusions of memory as a result of impairments in their ability to engage in semantic processing. However, it is unclear whether this observation holds for scripted life events, especially if they include emotional content, or across a broad spectrum of learning disabilities. The present study addressed these issues by assessing recognition memory for script-like information in children with nonverbal learning disability (NLD), children with dyslexia, and typically developing children (N=51). Participants viewed photographs of 8 common events (e.g., a family dinner); embedded in each episode was either a negative or a neutral consequence of an unseen action. Children's memory was then tested on a yes/no recognition task that included old and new photographs. Results showed that the three groups performed similarly in recognizing target photographs but exhibited differences in memory errors. Compared to the other groups, children with NLD were more likely to falsely recognize photographs that depicted an unseen cause of an emotional seen event and gave more "Remember" responses to these errors. Children with dyslexia were equally likely to falsely recognize both unseen causes of seen photographs and photographs generally consistent with the script, whereas the other participant groups were more likely to falsely recognize unseen causes rather than script-consistent distractors. Results are interpreted in terms of the mechanisms underlying false memory formation in different clinical populations of children with LD.
Torpey, Dana C.; Hajcak, Greg; Kim, Jiyon; Kujawa, Autumn J.; Dyson, Margaret W.; Olino, Thomas M.; Klein, Daniel N.
2013-01-01
Background: There is increasing interest in error-related brain activity in anxiety disorders. The error-related negativity (ERN) is a negative deflection in the event-related potential approximately 50 ms after errors compared to correct responses. Recent studies suggest that the ERN may be a biomarker for anxiety, as it is positively…
Role-modeling and medical error disclosure: a national survey of trainees.
Martinez, William; Hickson, Gerald B; Miller, Bonnie M; Doukas, David J; Buckley, John D; Song, John; Sehgal, Niraj L; Deitz, Jennifer; Braddock, Clarence H; Lehmann, Lisa Soleymani
2014-03-01
To measure trainees' exposure to negative and positive role-modeling for responding to medical errors and to examine the association between that exposure and trainees' attitudes and behaviors regarding error disclosure. Between May 2011 and June 2012, 435 residents at two large academic medical centers and 1,187 medical students from seven U.S. medical schools received anonymous, electronic questionnaires. The questionnaire asked respondents about (1) experiences with errors, (2) training for responding to errors, (3) behaviors related to error disclosure, (4) exposure to role-modeling for responding to errors, and (5) attitudes regarding disclosure. Using multivariate regression, the authors analyzed whether frequency of exposure to negative and positive role-modeling independently predicted two primary outcomes: (1) attitudes regarding disclosure and (2) nontransparent behavior in response to a harmful error. The response rate was 55% (884/1,622). Training on how to respond to errors had the largest independent, positive effect on attitudes (standardized effect estimate, 0.32, P < .001); negative role-modeling had the largest independent, negative effect (standardized effect estimate, -0.26, P < .001). Positive role-modeling had a positive effect on attitudes (standardized effect estimate, 0.26, P < .001). Exposure to negative role-modeling was independently associated with an increased likelihood of trainees' nontransparent behavior in response to an error (OR 1.37, 95% CI 1.15-1.64; P < .001). Exposure to role-modeling predicts trainees' attitudes and behavior regarding the disclosure of harmful errors. Negative role models may be a significant impediment to disclosure among trainees.
Amelogenin test: From forensics to quality control in clinical and biochemical genomics.
Francès, F; Portolés, O; González, J I; Coltell, O; Verdú, F; Castelló, A; Corella, D
2007-01-01
The increasing number of samples in biomedical genetic studies, and the number of centers participating in them, entails an increasing risk of mistakes at the different sample-handling stages. We have evaluated the usefulness of the amelogenin test for quality control in sample identification. The amelogenin test (frequently used in forensics) was undertaken on 1224 individuals participating in a biomedical study. Concordance between the sex recorded in the database and the amelogenin test result was estimated. Additional genetic systems for detecting sex errors were developed. The overall concordance rate was 99.84% (1222/1224). Two samples showed a female amelogenin test outcome but were codified as males in the database. The first, after checking sex-specific biochemical and clinical profile data, was found to be due to a codification error in the database. In the second, after checking the database, no apparent error was discovered because a correct male profile was found. False negatives in amelogenin male sex determination were ruled out by additional tests, and female sex was confirmed. A sample labeling error was revealed after a new DNA extraction. The amelogenin test is a useful quality control tool for detecting sex-identification errors in large genomic studies and can contribute to increasing their validity.
Hill, Kaylin E; Samuel, Douglas B; Foti, Dan
2016-08-01
The error-related negativity (ERN) is a neural measure of error processing that has been implicated as a neurobehavioral trait and has transdiagnostic links with psychopathology. Few studies, however, have contextualized this traitlike component with regard to dimensions of personality that, as intermediate constructs, may aid in contextualizing links with psychopathology. Accordingly, the aim of this study was to examine the interrelationships between error monitoring and dimensions of personality within a large adult sample (N = 208). Building on previous research, we found that the ERN relates to a combination of negative affect, impulsivity, and conscientiousness. At low levels of conscientiousness, negative urgency (i.e., impulsivity in the context of negative affect) predicted an increased ERN; at high levels of conscientiousness, the effect of negative urgency was not significant. This relationship was driven specifically by the conscientiousness facets of competence, order, and deliberation. Links between personality measures and error positivity amplitude were weaker and nonsignificant. Post-error slowing was also related to conscientiousness, as well as a different facet of impulsivity: lack of perseverance. These findings suggest that, in the general population, error processing is modulated by the joint combination of negative affect, impulsivity, and conscientiousness (i.e., the profile across traits), perhaps more so than any one dimension alone. This work may inform future research concerning aberrant error processing in clinical populations.
A semi-automatic annotation tool for cooking video
Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe
2013-03-01
In order to create a cooking assistant application to guide users in the preparation of dishes relevant to their dietary profiles and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods the cook handles. These videos present particular annotation challenges, such as frequent occlusions and changes in food appearance. Manually annotating the videos is a time-consuming, tedious, and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error-free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools, and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.
Rogel-Castillo, Cristian; Boulton, Roger; Opastpongkarn, Arunwong; Huang, Guangwei; Mitchell, Alyson E
2016-07-27
Concealed damage (CD) is defined as a brown discoloration of the kernel interior (nutmeat) that appears only after moderate to high heat treatment (e.g., blanching, drying, roasting, etc.). Raw almonds with CD have no visible defects before heat treatment, and currently there are no screening methods available for detecting CD in raw almonds. Herein, the feasibility of using near-infrared (NIR) spectroscopy between 1125 and 2153 nm for the detection of CD in almonds is demonstrated. Almond kernels with CD have lower NIR absorbance in the regions related to oil, protein, and carbohydrates. With the use of partial least squares discriminant analysis (PLS-DA) and selection of specific wavelengths, three classification models were developed. The calibration models have false-positive and false-negative error rates ranging between 12.4 and 16.1% and between 10.6 and 17.2%, respectively. The overall percent error rates ranged between 8.2 and 9.2%. Second-derivative preprocessing of the selected wavelengths resulted in the most robust predictive model.
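As an illustration of the classification step, PLS-DA amounts to PLS regression on a binary label followed by thresholding. Below is a minimal sketch on synthetic stand-in spectra using scikit-learn's PLSRegression; the data, the absorbing band, and the 0.5 cut-off are assumptions, not the paper's calibration.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for NIR spectra: 200 kernels x 100 wavelengths, with
# concealed-damage (CD) kernels absorbing slightly less in one band.
X = rng.normal(size=(200, 100))
y = rng.integers(0, 2, size=200)   # 1 = CD kernel, 0 = sound kernel
X[y == 1, 40:60] -= 0.5            # lower absorbance for CD kernels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = (pls.predict(X_te).ravel() > 0.5).astype(int)  # PLS-DA: threshold the PLS score

fp = np.mean(y_hat[y_te == 0] == 1)  # false-positive rate: sound kernels flagged as CD
fn = np.mean(y_hat[y_te == 1] == 0)  # false-negative rate: CD kernels passed as sound
print(f"FP rate {fp:.1%}, FN rate {fn:.1%}")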
Error-Related Psychophysiology and Negative Affect
Hajcak, G.; McDonald, N.; Simons, R.F.
2004-01-01
The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…
Layfield, Eleanor M; Schmidt, Robert L; Esebua, Magda; Layfield, Lester J
2018-06-01
Frozen section is routinely used for intraoperative margin evaluation in carcinomas of the head and neck. We studied a series of frozen sections performed for margin status of head and neck tumors to determine diagnostic accuracy. All frozen sections for margin control of squamous carcinomas of the head and neck over a 66-month period were studied. Frozen and permanent section diagnoses were classified as negative or malignant, and the diagnoses were correlated to determine accuracy. One thousand seven hundred and ninety-six pairs of frozen section and corresponding permanent section diagnoses were obtained. Discordances were found in 55 (3.1%) pairs. In 35 pairs (1.9%), frozen section was reported as benign, but permanent sections disclosed carcinoma; in 21 of these cases the discrepancy was due to sampling, and in the remaining cases it was an interpretive error. In 20 cases (1.1%), frozen section was malignant, but the permanent section was interpreted as negative. Frozen section is an accurate method for evaluation of operative margins for head and neck carcinomas, with concordance between frozen and permanent results of 97%. Most errors are false negative results, the majority of which are due to sampling issues.
Post-error Brain Activity Correlates With Incidental Memory for Negative Words
Senderecka, Magdalena; Ociepka, Michał; Matyjek, Magdalena; Kroczek, Bartłomiej
2018-01-01
The present study had three main objectives. First, we aimed to evaluate whether short-duration affective states induced by negative and positive words can lead to increased error-monitoring activity relative to a neutral task condition. Second, we intended to determine whether such an enhancement is limited to words of specific valence or is a general response to arousing material. Third, we wanted to assess whether post-error brain activity is associated with incidental memory for negative and/or positive words. Participants performed an emotional stop-signal task that required response inhibition to negative, positive or neutral nouns while EEG was recorded. Immediately after the completion of the task, they were instructed to recall as many of the presented words as they could in an unexpected free recall test. We observed significantly greater brain activity in the error-positivity (Pe) time window in both negative and positive trials. The error-related negativity amplitudes were comparable in both the neutral and emotional arousing trials, regardless of their valence. Regarding behavior, increased processing of emotional words was reflected in better incidental recall. Importantly, the memory performance for negative words was positively correlated with the Pe amplitude, particularly in the negative condition. The source localization analysis revealed that the subsequent memory recall for negative words was associated with widespread bilateral brain activity in the dorsal anterior cingulate cortex and in the medial frontal gyrus, which was registered in the Pe time window during negative trials. The present study has several important conclusions. First, it indicates that the emotional enhancement of error monitoring, as reflected by the Pe amplitude, may be induced by stimuli with symbolic, ontogenetically learned emotional significance. Second, it indicates that the emotion-related enhancement of the Pe occurs across both negative and positive conditions; thus it is preferentially driven by the arousal content of affective stimuli. Third, our findings suggest that enhanced error monitoring and facilitated recall of negative words may both reflect responsivity to negative events. More speculatively, they can also indicate that post-error activity of the medial prefrontal cortex may selectively support encoding for negative stimuli and contribute to their privileged access to memory.
Abnormal Error Monitoring in Math-Anxious Individuals: Evidence from Error-Related Brain Potentials
Suárez-Pellicioni, Macarena; Núñez-Peña, María Isabel; Colomé, Àngels
2013-01-01
This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN.
Mood, motivation, and misinformation: aging and affective state influences on memory.
Hess, Thomas M; Popham, Lauren E; Emery, Lisa; Elliott, Tonya
2012-01-01
Normative age differences in memory have typically been attributed to declines in basic cognitive and cortical mechanisms. The present study examined the degree to which dominant everyday affect might also be associated with age-related memory errors using the misinformation paradigm. Younger and older adults viewed a positive and a negative event, and then were exposed to misinformation about each event. Older adults exhibited a higher likelihood than young adults of falsely identifying misinformation as having occurred in the events. Consistent with expectations, strength of the misinformation effect was positively associated with dominant mood, and controlling for mood eliminated any age effects. Also, motivation to engage in complex cognitive activity was negatively associated with susceptibility to misinformation, and susceptibility was stronger for negative than for positive events. We argue that motivational processes underlie all of the observed effects, and that such processes are useful in understanding age differences in memory performance.
The Distinctions of False and Fuzzy Memories.
Schooler, Jonathan W.
1998-01-01
Notes that fuzzy-trace theory has been used to understand false memories of children. Demonstrates the irony embedded in the theory, maintaining that a central implication of fuzzy-trace theory is that some errors characterized as false memories are not really false at all. These errors, when applied to false alarms to related lures, are best…
Farwell, Lawrence A; Richardson, Drew C; Richardson, Graham M
2013-08-01
Brain fingerprinting detects concealed information stored in the brain by measuring brainwave responses. We compared P300 and P300-MERMER event-related brain potentials for error rate/accuracy and statistical confidence in four field/real-life studies. 76 tests detected the presence or absence of information regarding (1) real-life events including felony crimes; (2) real crimes with substantial consequences (either a judicial outcome, i.e., evidence admitted in court, or a $100,000 reward for beating the test); (3) knowledge unique to FBI agents; and (4) knowledge unique to explosives (EOD/IED) experts. With both P300 and P300-MERMER, the error rate was 0%: determinations were 100% accurate, with no false negatives, no false positives, and no indeterminates. Countermeasures had no effect. Median statistical confidence for determinations was 99.9% with P300-MERMER and 99.6% with P300. Brain fingerprinting methods and scientific standards for laboratory and field applications are discussed. Major differences in methods that produce different results are identified. Markedly different methods in other studies have produced over 10 times higher error rates and markedly lower statistical confidences than those of these studies, our previous studies, and independent replications. Data support the hypothesis that accuracy, reliability, and validity depend on following the brain fingerprinting scientific standards outlined herein.
Accounting for heterogeneous treatment effects in the FDA approval process.
Malani, Anup; Bembom, Oliver; van der Laan, Mark
2012-01-01
The FDA employs an average-patient standard when reviewing drugs: it approves a drug only if it is safe and effective for the average patient in a clinical trial. It is common, however, for patients to respond differently to a drug. Therefore, the average-patient standard can reject a drug that benefits certain patient subgroups (false negatives) and even approve a drug that harms other patient subgroups (false positives). These errors increase the cost of drug development - and thus health care - by wasting research on unproductive or unapproved drugs. The reason why the FDA sticks with an average-patient standard is concern about opportunism by drug companies: with enough data dredging, a drug company can always find some subgroup of patients that appears to benefit from its drug, even if the subgroup truly does not. In this paper we offer alternatives to the average-patient standard that reduce the risk of false negatives without increasing false positives from drug company opportunism. These proposals combine changes to institutional design - evaluation of trial data by an independent auditor - with statistical tools to reinforce the new institutional design - specifically, to ensure the auditor is truly independent of drug companies. We illustrate our proposals by applying them to the results of a recent clinical trial of a cancer drug (motexafin gadolinium). Our analysis suggests that the FDA may have made a mistake in rejecting that drug.
Jeremiah, S S; Balaji, V; Anandan, S; Sahni, R D
2014-01-01
The modified Hodge test (MHT) is widely used as a screening test for the detection of carbapenemases in Gram-negative bacteria. This test has several pitfalls in terms of validity and interpretation, and it has very low sensitivity in detecting the New Delhi metallo-β-lactamase (NDM). Considering the degree of dissemination of the NDM and the growing pandemic of carbapenem resistance, a more accurate alternative test is urgently needed. This study compares the performance of the MHT with the commercially available Neo-Sensitabs - Carbapenemases/Metallo-β-Lactamase (MBL) Confirmative Identification pack to find out whether the latter could be an efficient alternative to the former. A total of 105 isolates of Klebsiella pneumoniae resistant to imipenem and meropenem, collected prospectively over a period of 2 years, were included in the study. The study isolates were tested with the MHT, the Neo-Sensitabs pack, and polymerase chain reaction (PCR) for detecting the blaNDM-1 gene. Among the 105 isolates, the MHT identified 100 as carbapenemase producers. Of the five isolates negative by the MHT, four were found to produce MBLs by the Neo-Sensitabs. The Neo-Sensitabs did not have any false negatives when compared against the PCR. The MHT can give false negative results, which lead to failure in detecting carbapenemase producers. Considering this and the other pitfalls of the MHT, the Neo-Sensitabs - Carbapenemases/MBL Confirmative Identification pack could be a more efficient alternative for detection of carbapenemase production in Gram-negative bacteria.
Imberger, Georgina; Thorlund, Kristian; Gluud, Christian; Wetterslev, Jørn
2016-08-12
Many published meta-analyses are underpowered. We explored the role of trial sequential analysis (TSA) in assessing the reliability of conclusions in underpowered meta-analyses. We screened The Cochrane Database of Systematic Reviews and selected 100 meta-analyses with a binary outcome, a negative result, and sufficient power. We defined a negative result as one where the 95% CI for the effect included 1.00, a positive result as one where the 95% CI did not include 1.00, and sufficient power as the required information size for 80% power, 5% type 1 error, a relative risk reduction of 10% or a number needed to treat of 100, with the control event proportion and heterogeneity taken from the included studies. We re-conducted the meta-analyses, using conventional cumulative techniques, to measure how many false positives would have occurred if these meta-analyses had been updated after each new trial. For each false positive, we performed TSA using three different approaches. We screened 4736 systematic reviews to find 100 meta-analyses that fulfilled our inclusion criteria. Using conventional cumulative meta-analysis, false positives were present in seven of the meta-analyses (7%, 95% CI 3% to 14%), occurring more than once in three of them. The total number of false positives was 14, and TSA prevented 13 of these (93%, 95% CI 68% to 98%). In a post hoc analysis, we found that Cochrane meta-analyses that are negative are 1.67 times more likely to be updated (95% CI 0.92 to 2.68) than those that are positive. We found false positives in 7% (95% CI 3% to 14%) of the included meta-analyses. Owing to limitations of external validity and the decreased likelihood of updating positive meta-analyses, the true proportion of false positives in meta-analysis is probably higher. TSA prevented 93% of the false positives (95% CI 68% to 98%).
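For orientation, the required information size for a binary outcome is conventionally computed like the sample size of a single adequately powered trial; one common unadjusted form (the notation is assumed here, not taken from the abstract) is

\[ IS = \frac{4 \, \bigl( z_{1-\alpha/2} + z_{1-\beta} \bigr)^{2} \, \bar{p} \, (1 - \bar{p})}{\delta^{2}} \]

where \(\bar{p}\) is the pooled event proportion, \(\delta\) is the absolute risk difference implied by, e.g., a 10% relative risk reduction, and \(\alpha = 0.05\), \(\beta = 0.20\) give 80% power; TSA implementations typically inflate IS further to account for between-trial heterogeneity.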
Shear, Benjamin R.; Zumbo, Bruno D.
2013-01-01
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
The Neural Basis of Error Detection: Conflict Monitoring and the Error-Related Negativity
Yeung, Nick; Botvinick, Matthew M.; Cohen, Jonathan D.
2004-01-01
According to a recent theory, anterior cingulate cortex is sensitive to response conflict, the coactivation of mutually incompatible responses. The present research develops this theory to provide a new account of the error-related negativity (ERN), a scalp potential observed following errors. Connectionist simulations of response conflict in an…
Neural evidence for enhanced error detection in major depressive disorder.
Chiu, Pearl H; Deldin, Patricia J
2007-04-01
Anomalies in error processing have been implicated in the etiology and maintenance of major depressive disorder. In particular, depressed individuals exhibit heightened sensitivity to error-related information and negative environmental cues, along with reduced responsivity to positive reinforcers. The authors examined the neural activation associated with error processing in individuals diagnosed with and without major depression and the sensitivity of these processes to modulation by monetary task contingencies. The error-related negativity and error-related positivity components of the event-related potential were used to characterize error monitoring in individuals with major depressive disorder and the degree to which these processes are sensitive to modulation by monetary reinforcement. Nondepressed comparison subjects (N=17) and depressed individuals (N=18) performed a flanker task under two external motivation conditions (i.e., monetary reward for correct responses and monetary loss for incorrect responses) and a nonmonetary condition. After each response, accuracy feedback was provided. The error-related negativity component assessed the degree of anomaly in initial error detection, and the error positivity component indexed recognition of errors. Across all conditions, the depressed participants exhibited greater amplitude of the error-related negativity component, relative to the comparison subjects, and equivalent error positivity amplitude. In addition, the two groups showed differential modulation by task incentives in both components. These data implicate exaggerated early error-detection processes in the etiology and maintenance of major depressive disorder. Such processes may then recruit excessive neural and cognitive resources that manifest as symptoms of depression.
Krueger, Joachim I; Funder, David C
2004-06-01
Mainstream social psychology focuses on how people characteristically violate norms of action through social misbehaviors such as conformity with false majority judgments, destructive obedience, and failures to help those in need. Likewise, they are seen to violate norms of reasoning through cognitive errors such as misuse of social information, self-enhancement, and an over-readiness to attribute dispositional characteristics. The causes of this negative research emphasis include the apparent informativeness of norm violation, the status of good behavior and judgment as unconfirmable null hypotheses, and the allure of counter-intuitive findings. The shortcomings of this orientation include frequently erroneous imputations of error, findings of mutually contradictory errors, incoherent interpretations of error, an inability to explain the sources of behavioral or cognitive achievement, and the inhibition of generalized theory. Possible remedies include increased attention to the complete range of behavior and judgmental accomplishment, analytic reforms emphasizing effect sizes and Bayesian inference, and a theoretical paradigm able to account for both the sources of accomplishment and of error. A more balanced social psychology would yield not only a more positive view of human nature, but also an improved understanding of the bases of good behavior and accurate judgment, coherent explanations of occasional lapses, and theoretically grounded suggestions for improvement.
Short communication: Prediction of retention pay-off using a machine learning algorithm.
Shahinfar, Saleh; Kalantari, Afshin S; Cabrera, Victor; Weigel, Kent
2014-05-01
Replacement decisions have a major effect on dairy farm profitability. Dynamic programming (DP) has been widely studied to find optimal replacement policies in dairy cattle. However, DP models are computationally intensive and might not be practical for daily decision making. Hence, the ability of machine learning, applied to a pre-run DP model, to provide fast and accurate predictions of nonlinear and intercorrelated variables makes it an ideal methodology. Milk class (1 to 5), lactation number (1 to 9), month in milk (1 to 20), and month of pregnancy (0 to 9) were used to describe all cows in a herd in a DP model. Twenty-seven scenarios based on all combinations of 3 levels (base, 20% above, and 20% below) of milk production, milk price, and replacement cost were solved with the DP model, resulting in a data set of 122,716 records, each with a calculated retention pay-off (RPO). Then, a machine learning model tree algorithm was used to mimic the RPO evaluated with DP. The correlation coefficient was used to assess the concordance between the RPO evaluated by DP and the RPO predicted by the model tree. The obtained correlation coefficient was 0.991, with a corresponding relative absolute error of 0.11. At least 100 instances were required per model constraint, resulting in 204 total equations (models). When these models were used for binary classification of positive and negative RPO, error rates were 1% false negatives and 9% false positives. Applying this model, trained on simulated data, to predict RPO for 102 actual replacement records from the University of Wisconsin-Madison dairy herd resulted in a 0.994 correlation with a 0.10 relative absolute error rate. Overall results showed that the model tree has the potential to be used in conjunction with DP to assist farmers in their replacement decisions.
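To make the model-tree step concrete, here is a minimal sketch in which a plain regression tree (scikit-learn's DecisionTreeRegressor) stands in for the model tree (a model tree additionally fits linear models in the leaves), and a hypothetical linear RPO surface stands in for the DP output; only the four state variables and the 100-instance leaf constraint are taken from the abstract.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 5000
# The four cow-state variables used in the DP model.
milk_class = rng.integers(1, 6, n)
lactation = rng.integers(1, 10, n)
mim = rng.integers(1, 21, n)  # month in milk
mop = rng.integers(0, 10, n)  # month of pregnancy
X = np.column_stack([milk_class, lactation, mim, mop])
# Hypothetical RPO surface standing in for the DP output (US$).
rpo = 120 * milk_class - 15 * lactation - 4 * mim + 10 * mop + rng.normal(0, 20, n)

tree = DecisionTreeRegressor(min_samples_leaf=100).fit(X, rpo)  # >= 100 instances per model
pred = tree.predict(X)

r = np.corrcoef(rpo, pred)[0, 1]  # concordance with the DP-evaluated RPO
rae = np.abs(rpo - pred).sum() / np.abs(rpo - rpo.mean()).sum()  # relative absolute error
fn = np.mean((pred <= 0) & (rpo > 0))  # positive RPO predicted negative (false negative)
fp = np.mean((pred > 0) & (rpo <= 0))  # negative RPO predicted positive (false positive)
print(f"r = {r:.3f}, RAE = {rae:.2f}, FN = {fn:.1%}, FP = {fp:.1%}")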
Kwon, Hyuk-Jin; Yoo, Hee-Jeong; Kim, Joo-Hyun; Noh, Dong-Hyun; Sunwoo, Hyun-Jung; Jeon, Ye Seul; Lee, Sang-Youn; Jo, Ye-Ul; Bong, Gui-Young
2017-10-01
The current cut-off score of the Korean version of the Childhood Autism Rating Scale (K-CARS) does not seem to be sensitive enough to precisely diagnose high-functioning autism. The aim of this study was to identify the optimal cut-off score of K-CARS for diagnosing high-functioning individuals with autism spectrum disorders (ASD). A total of 329 participants were assessed by the Korean versions of the Autism Diagnostic Interview - Revised (K-ADI-R), Autism Diagnostic Observation Schedule (K-ADOS), and K-CARS. IQ and Social Maturity Scale scores were also obtained. The true positive and false negative rates of K-CARS were 77.2% and 22.8%, respectively. Verbal IQ (VIQ) and Social Quotient (SQ) were significant predictors of misclassification. The false negative rate increased to 36.0% from 19.8% when VIQ was >69.5, and the rate increased to 44.1% for participants with VIQ > 69.5 and SQ > 75.5. In addition, if SQ was >83.5, the false negative rate increased to 46.7%, even if the participant's VIQ was ≤69.5. Optimal cut-off scores were 28.5 (for VIQ ≤ 69.5 and SQ ≤ 75.5), 24.25 (for VIQ > 69.5 and SQ > 75.5), and 24.5 (for SQ > 83.5), respectively. The likelihood of a false negative error increases when K-CARS is used to diagnose high-functioning autism and Asperger's syndrome. For subjects with ASD and substantial verbal ability, the cut-off score for K-CARS should be re-adjusted and/or supplementary diagnostic tools might be needed to enhance diagnostic accuracy for ASD.
Almannai, Mohammed; Marom, Ronit; Sutton, V Reid
2016-12-01
The purpose of this review is to summarize the development and recent advancements of newborn screening. Early initiation of medical care has modified the outcome for many disorders that were previously associated with high morbidity (such as cystic fibrosis, primary immune deficiencies, and inborn errors of metabolism) or with significant neurodevelopmental disabilities (such as phenylketonuria and congenital hypothyroidism). The new era of mass spectrometry and next generation sequencing enables the expansion of the newborn screening panel and will help to address technical issues such as turnaround time while decreasing false-positive and false-negative rates for the testing. The newborn screening program is a successful public health initiative that facilitates early diagnosis of treatable disorders to reduce long-term morbidity and mortality.
A New Method for Assessing How Sensitivity and Specificity of Linkage Studies Affects Estimation
Moore, Cecilia L.; Amin, Janaki; Gidding, Heather F.; Law, Matthew G.
2014-01-01
Background: While the importance of record linkage is widely recognised, few studies have attempted to quantify how linkage errors may have impacted on their own findings and outcomes. Even where authors of linkage studies have attempted to estimate sensitivity and specificity based on subjects with known status, the effects of false negatives and positives on event rates and estimates of effect are not often described. Methods: We present quantification of the effect of sensitivity and specificity of the linkage process on event rates and incidence, as well as the resultant effect on relative risks. Formulae to estimate the true number of events and the estimated relative risk adjusted for a given linkage sensitivity and specificity are then derived and applied to data from a prisoner mortality study. The implications of false positive and false negative matches are also discussed. Discussion: Comparisons of the effect of sensitivity and specificity on incidence and relative risks indicate that it is more important for linkages to be highly specific than sensitive, particularly if true incidence rates are low. We would recommend that, where possible, some quantitative estimates of the sensitivity and specificity of the linkage process be performed, allowing the effect of these quantities on observed results to be assessed.
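The adjustment formulae themselves are not reproduced in the record; one standard misclassification correction consistent with the description (the notation is assumed here) runs as follows.

% Among N records containing T true events, a linkage with sensitivity Se finds
% Se*T true links and, among the N - T non-events, creates (1 - Sp)(N - T)
% false links, so the observed count is O = Se*T + (1 - Sp)(N - T).
% Solving for the true number of events:
\[ \hat{T} = \frac{O - (1 - Sp)\,N}{Se + Sp - 1} \]
% An adjusted relative risk follows by applying this correction to the event
% counts of each comparison group before forming the ratio.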
Tissot, F; Prod'hom, G; Manuel, O; Greub, G
2015-09-01
The impact of round-the-clock cerebrospinal fluid (CSF) Gram stain on overnight empirical therapy for suspected central nervous system (CNS) infections was investigated. All consecutive overnight CSF Gram stains between 2006 and 2011 were included. The impact of a positive or a negative test on empirical therapy was evaluated and compared to other clinical and biological indications based on institutional guidelines. Bacterial CNS infection was documented in 51/241 suspected cases. Overnight CSF Gram stain was positive in 24/51. Upon validation, there were two false-positive results and one false-negative result. The sensitivity and specificity were 41 and 99%, respectively. All patients but one had indications for empirical therapy other than the Gram stain alone. Upon obtaining the Gram result, empirical therapy was modified in 7/24 cases, including the addition of an appropriate agent (1), the addition of unnecessary agents (3), and the simplification of unnecessary combination therapy (3/11). Among 74 cases with a negative CSF Gram stain and without a formal indication for empirical therapy, antibiotics were withheld in only 29. Round-the-clock CSF Gram stain had a low impact on overnight empirical therapy for suspected CNS infections and was associated with several misinterpretation errors. Clinicians showed little confidence in CSF direct examination for simplifying or withholding therapy before definite microbiological results.
Diagnostic Error in Stroke-Reasons and Proposed Solutions.
Bakradze, Ekaterina; Liberman, Ava L
2018-02-13
We discuss the frequency of stroke misdiagnosis and identify subgroups of stroke at high risk for specific diagnostic errors. In addition, we review common reasons for misdiagnosis and propose solutions to decrease error. According to a recent report by the National Academy of Medicine, most people in the USA are likely to experience a diagnostic error during their lifetimes. Nearly half of such errors result in serious disability and death. Stroke misdiagnosis is a major health care concern, with initial misdiagnosis estimated to occur in 9% of all stroke patients in the emergency setting. Underdiagnosis or missed diagnosis (false negative) of stroke can result in adverse patient outcomes due to the preclusion of acute treatments and failure to initiate secondary prevention strategies. On the other hand, the overdiagnosis of stroke can result in inappropriate treatment, delayed identification of the actual underlying disease, and increased health care costs. Young patients, women, minorities, and patients presenting with non-specific, transient, or posterior circulation stroke symptoms are at increased risk of misdiagnosis. Strategies to decrease diagnostic error in stroke have largely focused on early stroke detection via bedside examination strategies and clinical decision rules. Targeted interventions to improve the diagnostic accuracy of stroke diagnosis among high-risk groups, as well as symptom-specific clinical decision supports, are needed. There are a number of open questions in the study of stroke misdiagnosis. To improve patient outcomes, existing strategies to improve stroke diagnostic accuracy should be more broadly adopted and novel interventions devised and tested to reduce diagnostic errors.
Finkelstein's test: a descriptive error that can produce a false positive.
Elliott, B G
1992-08-01
Over the last three decades an error in performing Finkelstein's test has crept into the English literature, in both textbooks and journals. This error can produce a false positive, and if relied upon, a wrong diagnosis can be made, leading to inappropriate surgery.
Using warnings to reduce categorical false memories in younger and older adults.
Carmichael, Anna M; Gutchess, Angela H
2016-07-01
Warnings about memory errors can reduce their incidence, although past work has largely focused on associative memory errors. The current study sought to explore whether warnings could be tailored to specifically reduce false recall of categorical information in both younger and older populations. Before encoding word pairs designed to induce categorical false memories, half of the younger and older participants were warned to avoid committing these types of memory errors. Older adults who received a warning committed fewer categorical memory errors, as well as other types of semantic memory errors, than those who did not receive a warning. In contrast, young adults' memory errors did not differ for the warning versus no-warning groups. Our findings provide evidence for the effectiveness of warnings at reducing categorical memory errors in older adults, perhaps by supporting source monitoring, reduction in reliance on gist traces, or through effective metacognitive strategies.
Wu, Da-lin; Ling, Han-xin; Tang, Hao
2004-11-01
To evaluate the accuracy of PCR with sequence-specific primers (PCR-SSP) for HLA-I genotyping and to analyze the causes of the errors occurring in genotyping. DNA samples were obtained from 34 clinical patients, and serological typing with monoclonal antibody (mAb) and HLA-A and B antigen genotyping with PCR-SSP were performed. HLA-A and B alleles were successfully typed in all 34 clinical samples by mAb and PCR-SSP. No false positive or false negative results were found with PCR-SSP, whereas the erroneous and missed diagnosis rates were markedly higher in serological detection, being 23.5% for HLA-A and 26.5% for HLA-B. Error or confusion was more likely to occur for the antigens A2 and A68, A32 and A33, and B5, B60 and B61. DNA typing for HLA-I (A and B antigens) by PCR-SSP has high resolution, high specificity, and good reproducibility, and is more suitable for clinical application than serological typing. PCR-SSP may accurately detect the alleles that are easily missed or mistaken in serological typing.
Consideration of species community composition in statistical ...
Diseases are increasing in marine ecosystems, and these increases have been attributed to a number of environmental factors including climate change, pollution, and overfishing. However, many studies pool disease prevalence into taxonomic groups, disregarding host species composition when comparing sites or assessing environmental impacts on patterns of disease presence. We used data simulated under a known environmental effect to assess the ability of standard statistical methods (binomial and linear regression, ANOVA) to detect a significant environmental effect on pooled disease prevalence under varying species abundance distributions and relative susceptibilities to disease. When one species was more susceptible to a disease and the two species only partially overlapped in their distributions, models tended to produce a greater number of false positives (Type I error). Differences in disease risk between regions or along an environmental gradient tended to be underestimated, or even estimated in the wrong direction, when highly susceptible taxa had reduced abundances at impacted sites, a situation likely to be common in nature. Including relative abundance as an additional variable in the regressions improved model accuracy but tended to be conservative, producing more false negatives (Type II error) when species abundance was strongly correlated with the environmental effect. Investigators should be cautious of underlying assumptions of species similarity in susceptibility…
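Below is a minimal sketch of the kind of simulation described, assuming two host species with unequal susceptibility, composition that shifts along the gradient, and no true environmental effect on disease; all names, sizes, and effect values are assumptions.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_sites = 200
env = rng.normal(size=n_sites)  # environmental gradient; no true effect on disease
# The susceptible species becomes rarer at impacted (high-env) sites.
frac_susceptible = 1 / (1 + np.exp(env))
n_host = 50
n_susc = rng.binomial(n_host, frac_susceptible)
# Per-species disease risk differs (30% vs 5%), but env itself has no effect.
cases = rng.binomial(n_susc, 0.30) + rng.binomial(n_host - n_susc, 0.05)

# Pooled binomial regression of prevalence on env, ignoring composition:
# the env coefficient tends to come out "significant" (a Type I error).
X = sm.add_constant(env)
fit = sm.GLM(np.column_stack([cases, n_host - cases]), X,
             family=sm.families.Binomial()).fit()
print(fit.params[1], fit.pvalues[1])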
An audit of intraoperative frozen section in Johor.
Khoo, J J
2004-03-01
A 4-year review was carried out on intraoperative frozen section consultations in Sultanah Aminah Hospital, Johor Bahru. Two hundred and fifteen specimens were received from 79 patients in the period between January 1999 and December 2002, an average of 2.72 specimens per patient. The overall diagnostic accuracy was high, at 97.56%. Diagnoses were deferred in 4.65% of the specimens. False positive diagnoses were made in 3 specimens (1.46%) and false negative diagnoses in 2 specimens (0.98%), giving an error rate of 2.44%. The main cause of error was incorrect interpretation of the pathologic findings. In the present study, frozen sections showed good sensitivity (97.98%) and specificity (97.16%). Despite its limitations, frozen section is still generally considered to be an accurate mode of intraoperative consultation to assist the surgeon in deciding the best therapeutic approach for the patient at the operating table. The use of frozen section with proper indications was cost-effective, as it helped lower the number of reoperations. An audit of intraoperative frozen section from time to time serves as part of an ongoing quality assurance program and is recommended where the service is available.
Action errors, error management, and learning in organizations.
Frese, Michael; Keith, Nina
2015-01-03
Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.
Negatively-biased credulity and the cultural evolution of beliefs.
Fessler, Daniel M T; Pisor, Anne C; Navarrete, Carlos David
2014-01-01
The functions of cultural beliefs are often opaque to those who hold them. Accordingly, to benefit from cultural evolution's ability to solve complex adaptive problems, learners must be credulous. However, credulity entails costs, including susceptibility to exploitation, and effort wasted due to false beliefs. One determinant of the optimal level of credulity is the ratio between the costs of two types of errors: erroneous incredulity (failing to believe information that is true) and erroneous credulity (believing information that is false). This ratio can be expected to be asymmetric when information concerns hazards, as the costs of erroneous incredulity will, on average, exceed the costs of erroneous credulity; no equivalent asymmetry characterizes information concerning benefits. Natural selection can therefore be expected to have crafted learners' minds so as to be more credulous toward information concerning hazards. This negatively-biased credulity extends general negativity bias, the adaptive tendency for negative events to be more salient than positive events. Together, these biases constitute attractors that should shape cultural evolution via the aggregated effects of learners' differential retention and transmission of information. In two studies in the U.S., we demonstrate the existence of negatively-biased credulity, and show that it is most pronounced in those who believe the world to be dangerous, individuals who may constitute important nodes in cultural transmission networks. We then document the predicted imbalance in cultural content using a sample of urban legends collected from the Internet and a sample of supernatural beliefs obtained from ethnographies of a representative collection of the world's cultures, showing that beliefs about hazards predominate in both.
Kish, Nicole E.; Helmuth, Brian; Wethey, David S.
2016-01-01
Models of ecological responses to climate change fundamentally assume that predictor variables, which are often measured at large scales, are to some degree diagnostic of the smaller-scale biological processes that ultimately drive patterns of abundance and distribution. Given that organisms respond physiologically to stressors, such as temperature, in highly non-linear ways, small modelling errors in predictor variables can potentially result in failures to predict mortality or severe stress, especially if an organism exists near its physiological limits. As a result, a central challenge facing ecologists, particularly those attempting to forecast future responses to environmental change, is how to develop metrics of forecast model skill (the ability of a model to predict defined events) that are biologically meaningful and reflective of underlying processes. We quantified the skill of four simple models of body temperature (a primary determinant of physiological stress) of an intertidal mussel, Mytilus californianus, using common metrics of model performance, such as root mean square error, as well as forecast verification skill scores developed by the meteorological community. We used a physiologically grounded framework to assess each model's ability to predict optimal, sub-optimal, sub-lethal and lethal physiological responses. Models diverged in their ability to predict different levels of physiological stress when evaluated using skill scores, even though common metrics, such as root mean square error, indicated similar accuracy overall. Results from this study emphasize the importance of grounding assessments of model skill in the context of an organism's physiology and, especially, of considering the implications of false-positive and false-negative errors when forecasting the ecological effects of environmental change. PMID:27729979
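The following minimal sketch (synthetic data and a hypothetical lethal threshold; not the study's models or code) illustrates the paper's central contrast: two temperature models with similar root mean square error can differ sharply in skill at predicting a biologically defined event, as measured by a standard forecast verification score.

```python
# Two synthetic body-temperature models with similar RMSE but different skill
# at predicting exceedance of a (hypothetical) lethal threshold.
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(25, 6, 1000)                 # synthetic observed body temps (deg C)
model_a = obs + rng.normal(0, 2, 1000)        # unbiased noise
model_b = obs - 2 + rng.normal(0, 1.2, 1000)  # cool bias: rarely predicts extremes

LETHAL = 35.0  # hypothetical lethal threshold

def rmse(pred, obs):
    return np.sqrt(np.mean((pred - obs) ** 2))

def peirce_skill_score(pred, obs, thr):
    """Hit rate minus false-alarm rate for the event obs > thr (a standard
    meteorological verification score; 1 = perfect, 0 = no skill)."""
    event = obs > thr
    fcast = pred > thr
    hits = np.sum(fcast & event)
    misses = np.sum(~fcast & event)          # false negatives
    false_alarms = np.sum(fcast & ~event)    # false positives
    corr_neg = np.sum(~fcast & ~event)
    return hits / (hits + misses) - false_alarms / (false_alarms + corr_neg)

for name, pred in [("A", model_a), ("B", model_b)]:
    print(name, round(rmse(pred, obs), 2), round(peirce_skill_score(pred, obs, LETHAL), 2))
```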
Choi, Seung Hoan; Labadorf, Adam T; Myers, Richard H; Lunetta, Kathryn L; Dupuis, Josée; DeStefano, Anita L
2017-02-06
Next generation sequencing provides a count of RNA molecules in the form of short reads, yielding discrete, often highly non-normally distributed gene expression measurements. Although Negative Binomial (NB) regression has been generally accepted in the analysis of RNA sequencing (RNA-Seq) data, its appropriateness has not been exhaustively evaluated. We explore logistic regression as an alternative method for RNA-Seq studies designed to compare cases and controls, where disease status is modeled as a function of RNA-Seq reads using simulated and Huntington disease data. We evaluate the effect of adjusting for covariates that have an unknown relationship with gene expression. Finally, we incorporate the data adaptive method in order to compare false positive rates. When the sample size is small or the expression levels of a gene are highly dispersed, the NB regression shows inflated Type-I error rates but the Classical logistic and Bayes logistic (BL) regressions are conservative. Firth's logistic (FL) regression performs well or is slightly conservative. Large sample size and low dispersion generally make Type-I error rates of all methods close to nominal alpha levels of 0.05 and 0.01. However, Type-I error rates are controlled after applying the data adaptive method. The NB, BL, and FL regressions gain increased power with large sample size, large log2 fold-change, and low dispersion. The FL regression has comparable power to NB regression. We conclude that implementing the data adaptive method appropriately controls Type-I error rates in RNA-Seq analysis. Firth's logistic regression provides a concise statistical inference process and reduces spurious associations from inaccurately estimated dispersion parameters in the negative binomial framework.
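A hedged sketch of the design direction described above, on synthetic data (not the authors' code): case/control status is modeled as a function of one gene's counts, which are simulated as Negative Binomial via a gamma-Poisson mixture, using ordinary logistic regression from statsmodels.

```python
# Logistic regression of disease status on simulated RNA-Seq counts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50                                    # small sample, the paper's hard case
status = np.repeat([0, 1], n // 2)        # 0 = control, 1 = case
# NB counts via gamma-Poisson mixture; cases get a 1.5-fold expression shift
mu = np.where(status == 1, 150, 100)
dispersion = 0.5                          # high dispersion -> heavy tails
lam = rng.gamma(shape=1 / dispersion, scale=mu * dispersion)
counts = rng.poisson(lam)

# Disease status ~ log2(count + 1): the logistic-regression direction of modeling
X = sm.add_constant(np.log2(counts + 1.0))
fit = sm.Logit(status, X).fit(disp=0)
print(fit.params, fit.pvalues)
```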
Better or Worse than Expected? Aging, Learning, and the ERN
ERIC Educational Resources Information Center
Eppinger, Ben; Kray, Jutta; Mock, Barbara; Mecklinger, Axel
2008-01-01
This study examined age differences in error processing and reinforcement learning. We were interested in whether the electrophysiological correlates of error processing, the error-related negativity (ERN) and the feedback-related negativity (FRN), reflect learning-related changes in younger and older adults. To do so, we applied a probabilistic…
Helm, Rebecca K; Ceci, Stephen J; Burd, Kayla A
2016-11-01
Eyewitness identification has been shown to be fallible and prone to false memory. In this study we develop and test a new method to probe the mechanisms involved in the formation of false memories in this area, and determine whether a particular memory is likely to be true or false. We created a seven-step procedure based on the Implicit Association Test to gauge implicit biases in eyewitness identification (the IATe). We show that identification errors may result from unconscious bias caused by implicit associations evoked by a given face. We also show that implicit associations between negative attributions such as guilt and eyewitnesses' final pick from a line-up can help to distinguish between true and false memory (especially where the witness has been subject to the suggestive nature of a prior blank line-up). Specifically, the more a witness implicitly associates an individual face with a particular crime, the more likely it is that a memory they have for that person committing the crime is false. These findings are consistent with existing findings in the memory and neuroscience literature showing that false memories can be caused by implicit associations that are outside conscious awareness. Copyright © 2017 John Wiley & Sons, Ltd.
Biostatistics Series Module 5: Determining Sample Size
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggests a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
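As a worked example of the principles above, the standard normal-approximation formula for comparing two means, n per group = 2((z_{1-α/2} + z_{1-β})σ/δ)², can be computed directly; the numbers below are illustrative, not from the article.

```python
# Two-group sample-size calculation from alpha, power, SD, and effect size.
import math
from scipy.stats import norm

def n_per_group(alpha, power, sigma, delta):
    """Sample size per group for a two-sided two-sample comparison of means."""
    z_alpha = norm.ppf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # e.g. 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Detecting a 5-unit difference with SD 10, alpha 0.05, power 80%:
print(n_per_group(0.05, 0.80, sigma=10, delta=5))    # ~63 per group
# Halving the detectable difference roughly quadruples the sample:
print(n_per_group(0.05, 0.80, sigma=10, delta=2.5))  # ~252 per group
```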
Punishment sensitivity modulates the processing of negative feedback but not error-induced learning.
Unger, Kerstin; Heintz, Sonja; Kray, Jutta
2012-01-01
Accumulating evidence suggests that individual differences in punishment and reward sensitivity are associated with functional alterations in neural systems underlying error and feedback processing. In particular, individuals highly sensitive to punishment have been found to be characterized by larger mediofrontal error signals as reflected in the error negativity/error-related negativity (Ne/ERN) and the feedback-related negativity (FRN). By contrast, reward sensitivity has been shown to relate to the error positivity (Pe). Given that Ne/ERN, FRN, and Pe have been functionally linked to flexible behavioral adaptation, the aim of the present research was to examine how these electrophysiological reflections of error and feedback processing vary as a function of punishment and reward sensitivity during reinforcement learning. We applied a probabilistic learning task that involved three different conditions of feedback validity (100%, 80%, and 50%). In contrast to prior studies using response competition tasks, we did not find reliable correlations between punishment sensitivity and the Ne/ERN. Instead, higher punishment sensitivity predicted larger FRN amplitudes, irrespective of feedback validity. Moreover, higher reward sensitivity was associated with a larger Pe. However, only reward sensitivity was related to better overall learning performance and higher post-error accuracy, whereas highly punishment sensitive participants showed impaired learning performance, suggesting that larger negative feedback-related error signals were not beneficial for learning or even reflected maladaptive information processing in these individuals. Thus, although our findings indicate that individual differences in reward and punishment sensitivity are related to electrophysiological correlates of error and feedback processing, we found less evidence for influences of these personality characteristics on the relation between performance monitoring and feedback-based learning.
Error-related brain activity predicts cocaine use after treatment at 3-month follow-up.
Marhe, Reshmi; van de Wetering, Ben J M; Franken, Ingmar H A
2013-04-15
Relapse after treatment is one of the most important problems in drug dependency. Several studies suggest that lack of cognitive control is one of the causes of relapse. In this study, a relatively new electrophysiologic index of cognitive control, the error-related negativity, is investigated to examine its suitability as a predictor of relapse. The error-related negativity was measured in 57 cocaine-dependent patients during their first week in detoxification treatment. Data from 49 participants were used to predict cocaine use at 3-month follow-up. Cocaine use at follow-up was measured by means of self-reported days of cocaine use in the last month verified by urine screening. A multiple hierarchical regression model was used to examine the predictive value of the error-related negativity while controlling for addiction severity and self-reported craving in the week before treatment. The error-related negativity was the only significant predictor in the model and added 7.4% of explained variance to the control variables, resulting in a total of 33.4% explained variance in the prediction of days of cocaine use at follow-up. A reduced error-related negativity measured during the first week of treatment was associated with more days of cocaine use at 3-month follow-up. Moreover, the error-related negativity was a stronger predictor of recent cocaine use than addiction severity and craving. These results suggest that underactive error-related brain activity might help to identify patients who are at risk of relapse as early as in the first week of detoxification treatment. Copyright © 2013 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
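A minimal sketch of the hierarchical-regression logic described above, on synthetic data with hypothetical variable names: the incremental explained variance is the gain in R² when the ERN measure is added after the control variables.

```python
# Hierarchical regression: delta R-squared from adding one predictor.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 49
severity = rng.normal(size=n)
craving = rng.normal(size=n)
ern = rng.normal(size=n)
days_use = 0.3 * severity + 0.2 * craving - 0.5 * ern + rng.normal(size=n)

base = sm.OLS(days_use, sm.add_constant(np.column_stack([severity, craving]))).fit()
full = sm.OLS(days_use, sm.add_constant(np.column_stack([severity, craving, ern]))).fit()
print(f"R2 step 1: {base.rsquared:.3f}, step 2: {full.rsquared:.3f}, "
      f"delta R2: {full.rsquared - base.rsquared:.3f}")
```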
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villarreal, Oscar D.; Yu, Lili
Computing the ligand-protein binding affinity (or the Gibbs free energy) with chemical accuracy has long been a challenge for which many methods/approaches have been developed and refined with various successful applications. False positives and, even more harmful, false negatives have been and still are a common occurrence in practical applications. Inevitable in all approaches are the errors in the force field parameters we obtain from quantum mechanical computation and/or empirical fittings for the intra- and inter-molecular interactions. These errors propagate to the final results of the computed binding affinities even if we were able to perfectly implement the statistical mechanics of all the processes relevant to a given problem. And they are actually amplified to various degrees even in the mature, sophisticated computational approaches. In particular, the free energy perturbation (alchemical) approaches amplify the errors in the force field parameters because they rely on extracting the small differences between similarly large numbers. In this paper, we develop a hybrid steered molecular dynamics (hSMD) approach to the difficult binding problems of a ligand buried deep inside a protein. Sampling the transition along a physical (not alchemical) dissociation path of opening up the binding cavity, pulling out the ligand, and closing back the cavity, we can avoid the problem of error amplifications by not relying on small differences between similar numbers. We tested this new form of hSMD on retinol inside cellular retinol-binding protein 1 and three cases of a ligand (a benzylacetate, a 2-nitrothiophene, and a benzene) inside a T4 lysozyme L99A/M102Q(H) double mutant. In all cases, we obtained binding free energies in close agreement with the experimentally measured values. This indicates that the force field parameters we employed are accurate and that hSMD (a brute force, unsophisticated approach) is free from the problem of error amplification suffered by many sophisticated approaches in the literature.
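The error-amplification argument can be illustrated numerically (hypothetical energies, not from the paper): when a small difference is extracted from two similarly large numbers, a tiny relative error on each term becomes a large relative error on the difference.

```python
# Error amplification when subtracting two similarly large quantities.
import numpy as np

rng = np.random.default_rng(3)
G_bound, G_free = -1000.0, -990.0          # hypothetical large state energies
true_dG = G_bound - G_free                 # the small quantity of interest: -10

noise = 1.0                                # ~0.1% error on each large term
samples = (G_bound + rng.normal(0, noise, 10000)) - \
          (G_free + rng.normal(0, noise, 10000))

print("true dG:", true_dG)
print("std of difference:", samples.std().round(2))   # ~1.4 (sqrt(2) * noise)
print("relative error on terms: ~0.1%; on dG:", f"{samples.std()/abs(true_dG):.1%}")
```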
NASA Astrophysics Data System (ADS)
Peres, David J.; Cancelliere, Antonino; Greco, Roberto; Bogaard, Thom A.
2018-03-01
Uncertainty in rainfall datasets and landslide inventories is known to have negative impacts on the assessment of landslide-triggering thresholds. In this paper, we perform a quantitative analysis of the impacts of uncertain knowledge of landslide initiation instants on the assessment of rainfall intensity-duration landslide early warning thresholds. The analysis is based on a synthetic database of rainfall and landslide information, generated by coupling a stochastic rainfall generator and a physically based hydrological and slope stability model, and is therefore error-free in terms of knowledge of triggering instants. This dataset is then perturbed according to hypothetical reporting scenarios that allow simulation of possible errors in landslide-triggering instants as retrieved from historical archives. The impact of these errors is analysed jointly using different criteria to single out rainfall events from a continuous series and two typical temporal aggregations of rainfall (hourly and daily). The analysis shows that the impacts of the above uncertainty sources can be significant, especially when errors exceed 1 day or the actual instants follow the erroneous ones. Errors generally lead to underestimated thresholds, i.e. lower than those that would be obtained from an error-free dataset. Potentially, the amount of the underestimation can be enough to induce an excessive number of false positives, hence limiting possible landslide mitigation benefits. Moreover, the uncertain knowledge of triggering rainfall limits the possibility to set up links between thresholds and physio-geographical factors.
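A hedged sketch in the spirit of this analysis (synthetic data and simple least-squares fitting, not the authors' method): fit a power-law intensity-duration relation to triggering events, then refit after perturbing the triggering instants, which dilutes intensity and lowers the fitted threshold.

```python
# Effect of triggering-instant errors on a fitted I = a * D**b relation.
import numpy as np

rng = np.random.default_rng(4)
D = rng.uniform(1, 72, 200)                        # event durations (h)
I = 20 * D ** -0.6 * rng.lognormal(0.3, 0.3, 200)  # triggering intensities (mm/h)

def fit_power_law(D, I):
    # least-squares line in log-log space: log I = log a + b log D
    b, log_a = np.polyfit(np.log(D), np.log(I), 1)
    return np.exp(log_a), b

a0, b0 = fit_power_law(D, I)
# Reporting error: triggering instant off by up to 24 h extends the duration,
# diluting intensity I = P/D for the same rainfall depth P.
D_err = D + rng.uniform(0, 24, 200)
I_err = I * D / D_err
a1, b1 = fit_power_law(D_err, I_err)
print(f"error-free: a={a0:.1f}, b={b0:.2f}; perturbed: a={a1:.1f}, b={b1:.2f}")
```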
Porter, Stephen; Taylor, Kristian; Ten Brinke, Leanne
2008-01-01
Despite a large body of false memory research, little has addressed the potential influence of an event's emotional content on susceptibility to false recollections. The Paradoxical Negative Emotion (PNE) hypothesis predicts that negative emotion generally facilitates memory but also heightens susceptibility to false memories. Participants were asked whether they could recall 20 "widely publicised" public events (half fictitious) ranging in emotional valence, with or without visual cues. Participants recalled a greater number of true negative events (M=3.31/5) than true positive (M=2.61/5) events. Nearly everyone (95%) came to recall at least one false event (M=2.15 false events recalled). Further, more than twice as many participants recalled any false negative (90%) compared to false positive (41.7%) events. Negative events, in general, were associated with more detailed memories and false negative event memories were more detailed than false positive event memories. Higher dissociation scores were associated with false recollections of negative events, specifically.
Predictive error detection in pianists: a combined ERP and motion capture study
Maidhof, Clemens; Pitkäniemi, Anni; Tervaniemi, Mari
2013-01-01
Performing a piece of music involves the interplay of several cognitive and motor processes and requires extensive training to achieve a high skill level. However, even professional musicians commit errors occasionally. Previous event-related potential (ERP) studies have investigated the neurophysiological correlates of pitch errors during piano performance, and reported pre-error negativity already occurring approximately 70–100 ms before the error had been committed and audible. It was assumed that this pre-error negativity reflects predictive control processes that compare predicted consequences with actual consequences of one's own actions. However, in previous investigations, correct and incorrect pitch events were confounded by their different tempi. In addition, no data about the underlying movements were available. In the present study, we exploratively recorded the ERPs and 3D movement data of pianists' fingers simultaneously while they performed fingering exercises from memory. Results showed a pre-error negativity for incorrect keystrokes when both correct and incorrect keystrokes were performed with comparable tempi. Interestingly, even correct notes immediately preceding erroneous keystrokes elicited a very similar negativity. In addition, we explored the possibility of computing ERPs time-locked to a kinematic landmark in the finger motion trajectories defined by when a finger makes initial contact with the key surface, that is, at the onset of tactile feedback. Results suggest that incorrect notes elicited a small difference after the onset of tactile feedback, whereas correct notes preceding incorrect ones elicited negativity before the onset of tactile feedback. The results tentatively suggest that tactile feedback plays an important role in error-monitoring during piano performance, because the comparison between predicted and actual sensory (tactile) feedback may provide the information necessary for the detection of an upcoming error. PMID:24133428
Tips and Tricks for Successful Application of Statistical Methods to Biological Data.
Schlenker, Evelyn
2016-01-01
This chapter discusses experimental design and use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, as well as normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data allowing use of parametric tests. Alternatively, with skewed data nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance between committing type 1 errors (false positives) and type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (random clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premise and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increase the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and references are available to help continued investigation of experimental designs and appropriate data analysis.
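Because relative risk and odds ratios are computed differently from the same 2 × 2 table, a small worked example helps; the counts below are hypothetical.

```python
# Relative risk vs. odds ratio from one 2x2 table.
def rr_and_or(a, b, c, d):
    """2x2 table: a=exposed cases, b=exposed non-cases,
                  c=unexposed cases, d=unexposed non-cases."""
    rr = (a / (a + b)) / (c / (c + d))   # ratio of risks (cohort/RCT designs)
    odds_ratio = (a * d) / (b * c)       # ratio of odds (case-control designs)
    return rr, odds_ratio

rr, odds = rr_and_or(a=30, b=70, c=10, d=90)
print(f"RR = {rr:.1f}, OR = {odds:.1f}")  # RR 3.0 vs OR 3.9 on the same table
```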
Limits on negative information in language input.
Morgan, J L; Travis, L L
1989-10-01
Hirsh-Pasek, Treiman & Schneiderman (1984) and Demetras, Post & Snow (1986) have recently suggested that certain types of parental repetitions and clarification questions may provide children with subtle cues to their grammatical errors. We further investigated this possibility by examining parental responses to inflectional over-regularizations and wh-question auxiliary-verb omission errors in the sets of transcripts from Adam, Eve and Sarah (Brown 1973). These errors were chosen because they are exemplars of overgeneralization, the type of mistake for which negative information is, in theory, most critically needed. Expansions and Clarification Questions occurred more often following ill-formed utterances in Adam's and Eve's input, but not in Sarah's. However, these corrective responses formed only a small proportion of all adult responses following Adam's and Eve's grammatical errors. Moreover, corrective responses appear to drop out of children's input while they continue to make overgeneralization errors. Whereas negative feedback may occasionally be available, in the light of these findings the contention that language input generally incorporates negative information appears to be unfounded.
Hwang, Kyu-Baek; Lee, In-Hee; Park, Jin-Ho; Hambuch, Tina; Choe, Yongjoon; Kim, MinHyeok; Lee, Kyungjoon; Song, Taemin; Neu, Matthew B; Gupta, Neha; Kohane, Isaac S; Green, Robert C; Kong, Sek Won
2014-08-01
As whole genome sequencing (WGS) uncovers variants associated with rare and common diseases, an immediate challenge is to minimize false-positive findings due to sequencing and variant calling errors. False positives can be reduced by combining results from orthogonal sequencing methods, but this is costly. Here, we present variant filtering approaches using logistic regression (LR) and ensemble genotyping to minimize false positives without sacrificing sensitivity. We evaluated the methods using paired WGS datasets of an extended family prepared using two sequencing platforms and a validated set of variants in NA12878. Using LR or ensemble genotyping based filtering, false-negative rates were significantly reduced by 1.1- to 17.8-fold at the same levels of false discovery rates (5.4% for heterozygous and 4.5% for homozygous single nucleotide variants (SNVs); 30.0% for heterozygous and 18.7% for homozygous insertions; 25.2% for heterozygous and 16.6% for homozygous deletions) compared to the filtering based on genotype quality scores. Moreover, ensemble genotyping excluded > 98% (105,080 of 107,167) of false positives while retaining > 95% (897 of 937) of true positives in de novo mutation (DNM) discovery in NA12878, and performed better than a consensus method using two sequencing platforms. Our proposed methods were effective in prioritizing phenotype-associated variants, and an ensemble genotyping would be essential to minimize false-positive DNM candidates. © 2014 WILEY PERIODICALS, INC.
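A hedged sketch of logistic-regression variant filtering (synthetic labels and hypothetical feature names; the study's actual features and thresholds differ): each call is scored from quality metrics and retained when its predicted probability of being a true variant clears a cutoff.

```python
# Logistic-regression filter over variant-call quality features.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2000
qual = rng.normal(40, 10, n)             # call quality
depth = rng.poisson(30, n)               # read depth
allele_balance = rng.beta(5, 5, n)       # alt-allele fraction
score = 0.1 * qual + 0.05 * depth - 6 * np.abs(allele_balance - 0.5) - 4.5
is_true_variant = rng.random(n) < 1 / (1 + np.exp(-score))

X = sm.add_constant(np.column_stack([qual, depth, np.abs(allele_balance - 0.5)]))
model = sm.Logit(is_true_variant.astype(int), X).fit(disp=0)
keep = model.predict(X) > 0.5            # threshold trades FPs against FNs
print(f"retained {keep.mean():.1%} of calls")
```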
Systematic review of the evidence for Trails B cut-off scores in assessing fitness-to-drive.
Roy, Mononita; Molnar, Frank
2013-01-01
Fitness-to-drive guidelines recommend employing the Trail Making B Test (a.k.a. Trails B), but do not provide guidance regarding cut-off scores. There is ongoing debate regarding the optimal cut-off score on the Trails B test. The objective of this study was to address this controversy by systematically reviewing the evidence for specific Trails B cut-off scores (e.g., cut-offs in both time to completion and number of errors) with respect to fitness-to-drive. Systematic review of all prospective cohort, retrospective cohort, case-control, correlation, and cross-sectional studies reporting the ability of the Trails B to predict driving safety that were published in English-language, peer-reviewed journals. Forty-seven articles were reviewed. None of the articles justified sample sizes via formal calculations. Cut-off scores reported based on research include: 90 seconds, 133 seconds, 147 seconds, 180 seconds, and < 3 errors. There is support for the previously published Trails B cut-offs of 3 minutes or 3 errors (the '3 or 3 rule'). Major methodological limitations of this body of research were uncovered including (1) lack of justification of sample size leaving studies open to Type II error (i.e., false negative findings), and (2) excessive focus on associations rather than clinically useful cut-off scores.
Stinchfield, Randy; McCready, John; Turner, Nigel E; Jimenez-Murcia, Susana; Petry, Nancy M; Grant, Jon; Welte, John; Chapman, Heather; Winters, Ken C
2016-09-01
The DSM-5 was published in 2013 and it included two substantive revisions for gambling disorder (GD). These changes are the reduction in the threshold from five to four criteria and elimination of the illegal activities criterion. The purpose of this study was twofold. First, to assess the reliability, validity and classification accuracy of the DSM-5 diagnostic criteria for GD. Second, to compare the DSM-5 and DSM-IV on reliability, validity, and classification accuracy, including an examination of the effect of the elimination of the illegal acts criterion on diagnostic accuracy. To compare DSM-5 and DSM-IV, eight datasets from three different countries (Canada, USA, and Spain; total N = 3247) were used. All datasets were based on similar research methods. Participants were recruited from outpatient gambling treatment services to represent the group with a GD and from the community to represent the group without a GD. All participants were administered a standardized measure of diagnostic criteria. The DSM-5 yielded satisfactory reliability, validity and classification accuracy. In comparing the DSM-5 to the DSM-IV, most comparisons of reliability, validity and classification accuracy showed more similarities than differences. There was evidence of modest improvements in classification accuracy for DSM-5 over DSM-IV, particularly in reduction of false negative errors. This reduction in false negative errors was largely a function of lowering the cut score from five to four and this revision is an improvement over DSM-IV. From a statistical standpoint, eliminating the illegal acts criterion did not make a significant impact on diagnostic accuracy. From a clinical standpoint, illegal acts can still be addressed in the context of the DSM-5 criterion of lying to others.
Canadian drivers' attitudes regarding preventative responses to driving while impaired by alcohol.
Vanlaar, Ward; Nadeau, Louise; McKiernan, Anna; Hing, Marisela M; Ouimet, Marie Claude; Brown, Thomas G
2017-09-01
In many jurisdictions, a risk assessment following a first driving while impaired (DWI) offence is used to guide administrative decision making regarding driver relicensing. Decision error in this process has important consequences for public security on one hand, and the social and economic well being of drivers on the other. Decision theory posits that consideration of the costs and benefits of decision error is needed, and in the public health context, this should include community attitudes. The objective of the present study was to clarify whether Canadians prefer decision error that: i) better protects the public (i.e., false positives); or ii) better protects the offender (i.e., false negatives). A random sample of male and female adult drivers (N=1213) from the five most populated regions of Canada was surveyed on drivers' preference for a protection of the public approach versus a protection of DWI drivers approach in resolving assessment decision error, and the relative value (i.e., value ratio) they imparted to both approaches. The role of region, sex and age on drivers' value ratio was also appraised. Seventy percent of Canadian drivers preferred a protection of the public from DWI approach, with the overall relative ratio given to this preference, compared to the alternative protection of the driver approach, being 3:1. Females expressed a significantly higher value ratio (M=3.4, SD=3.5) than males (M=3.0, SD=3.4), p<0.05. Regression analysis showed that both days of alcohol use in the past 30 days (CI for B: -0.07, -0.02) and frequency of driving over legal BAC limits in the past year (CI for B=-0.19, -0.01) were significantly but modestly related to lower value ratios, R2(adj.) = 0.014, p<0.001. Regional differences were also detected. Canadian drivers strongly favour a protection of the public approach to dealing with uncertainty in assessment, even at the risk of false positives. Accounting for community attitudes concerning DWI prevention and the individual differences that influence them could contribute to more informed, coherent and effective regional policies and prevention program development. Copyright © 2017 Elsevier Ltd. All rights reserved.
Crosby, Richard; Mena, Leandro; Yarber, William L.; Graham, Cynthia A.; Sanders, Stephanie A.; Milhausen, Robin R.
2015-01-01
Objective To describe self-reported frequencies of selected condom use errors and problems among young (ages 15–29) Black MSM (YBMSM) and to compare the observed prevalence of these errors/problems by HIV serostatus. Methods Between September 2012 and October 2014, electronic interview data were collected from 369 YBMSM attending a federally supported STI clinic located in the southern U.S. Seventeen condom use errors and problems were assessed. Chi-square tests were used to detect significant differences in the prevalence of these 17 errors and problems between HIV-negative and HIV-positive men. Results The recall period was the past 90 days. The overall mean number of errors/problems was 2.98 (sd=2.29). The mean for HIV-negative men was 2.91 (sd=2.15) and the mean for HIV-positive men was 3.18 (sd=2.57). These means were not significantly different (t=1.02, df=367, P=.31). Only two significant differences were observed between HIV-negative and HIV-positive men. Breakage (P = .002) and slippage (P = .005) were about twice as likely among HIV-positive men. Breakage occurred for nearly 30% of the HIV-positive men compared to about 15% among HIV-negative men. Slippage occurred for about 16% of the HIV-positive men compared to about 9% among HIV-negative men. Conclusion A need exists to help YBMSM acquire the skills needed to avert breakage and slippage issues that could lead to HIV transmission. Beyond these two exceptions, condom use errors and problems were ubiquitous in this population regardless of HIV serostatus. Clinic-based intervention is warranted for these young men, including education about correct condom use and provision of free condoms and long-lasting lubricants. PMID:26462188
Modelling eye movements in a categorical search task
Zelinsky, Gregory J.; Adeli, Hossein; Peng, Yifan; Samaras, Dimitris
2013-01-01
We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. PMID:24018720
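A minimal sketch of the mapping the model relies on (synthetic patch features, not TAM's code): signed distances from an SVM decision boundary are squashed through a logistic function into per-location probabilities of target-category evidence.

```python
# SVM decision distances -> probability map over image locations.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(6)
X_train = np.vstack([rng.normal(0, 1, (200, 10)),     # non-target patches
                     rng.normal(1, 1, (200, 10))])    # target-category patches
y_train = np.repeat([0, 1], 200)
svm = LinearSVC(C=1.0).fit(X_train, y_train)

# Features for a 20x20 grid of image locations (hypothetical patch descriptors)
grid_feats = rng.normal(0.5, 1, (400, 10))
dist = svm.decision_function(grid_feats)              # signed margin distances
prob_map = (1 / (1 + np.exp(-dist))).reshape(20, 20)  # logistic squashing
print(prob_map.min().round(2), prob_map.max().round(2))
```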
Mobile phone imaging and cloud-based analysis for standardized malaria detection and reporting.
Scherr, Thomas F; Gupta, Sparsh; Wright, David W; Haselton, Frederick R
2016-06-27
Rapid diagnostic tests (RDTs) have been widely deployed in low-resource settings. These tests are typically read by visual inspection, and accurate record keeping and data aggregation remains a substantial challenge. A successful malaria elimination campaign will require new strategies that maximize the sensitivity of RDTs, reduce user error, and integrate results reporting tools. In this report, an unmodified mobile phone was used to photograph RDTs, which were subsequently uploaded into a globally accessible database, REDCap, and then analyzed three ways: with an automated image processing program, visual inspection, and a commercial lateral flow reader. The mobile phone image processing detected 20.6 malaria parasites/microliter of blood, compared to the commercial lateral flow reader which detected 64.4 parasites/microliter. Experienced observers visually identified positive malaria cases at 12.5 parasites/microliter, but encountered reporting errors and false negatives. Visual interpretation by inexperienced users resulted in only an 80.2% true negative rate, with substantial disagreement in the lower parasitemia range. We have demonstrated that combining a globally accessible database, such as REDCap, with mobile phone based imaging of RDTs provides objective, secure, automated, data collection and result reporting. This simple combination of existing technologies would appear to be an attractive tool for malaria elimination campaigns.
Lin, Guigao; Zhang, Kuo; Zhang, Dong; Han, Yanxi; Xie, Jiehong; Li, Jinming
2017-03-01
The emergence of Zika virus demands accurate laboratory diagnostics. Nucleic acid testing is currently the definitive method for diagnosis of Zika infection. In 2016, an external quality assurance (EQA) for assessing the quality of molecular testing of Zika virus was carried out in China. A single armored RNA encapsulating a 4942-nucleotide (nt) specific RNA sequence of Zika virus was prepared and used as positive samples. A pre-tested EQA panel, consisting of 4 negative and 6 positive samples with different concentrations of armored RNA, was distributed to 38 laboratories that perform molecular detection of Zika virus. A total of 39 data sets (1 laboratory used two test kits in parallel), produced by using commercial (n=38) or laboratory developed (n=1) quantitative reverse-transcriptase PCR (qRT-PCR) kits, were received. Of these, 35 (89.7%) had correct results for all 10 samples, and 4 (10.3%) reported at least 1 error (11 in total). The testing errors were all false-negatives, highlighting the need for improvements in detection sensitivity. The EQA reveals that the majority of participating laboratories are proficient in molecular testing of Zika virus. Copyright © 2017 Elsevier B.V. All rights reserved.
Ashikaga, Takamaru; Harlow, Seth P.; Skelly, Joan M.; Julian, Thomas B.; Brown, Ann M.; Weaver, Donald L.; Wolmark, Norman
2009-01-01
Background The National Surgical Adjuvant Breast and Bowel Project B-32 trial was designed to determine whether sentinel lymph node resection can achieve the same therapeutic outcomes as axillary lymph node resection but with fewer side effects and is one of the most carefully controlled and monitored randomized trials in the field of surgical oncology. We evaluated the relationship of surgeon trial preparation, protocol compliance audit, and technical outcomes. Methods Preparation for this trial included a protocol manual, a site visit with key participants, an intraoperative session with the surgeon, and prerandomization documentation of protocol compliance. Training categories included surgeons who submitted material on five prerandomization surgeries and were trained by a core trainer (category 1) or by a site trainer (category 2). An expedited group (category 3) included surgeons with extensive experience who submitted material on one prerandomization surgery. At completion of training, surgeons could accrue patients. Two hundred twenty-four surgeons enrolled 4994 patients with breast cancer and were audited for 94 specific items in the following four categories: procedural, operative note, pathology report, and data entry. The relationship of training method; protocol compliance performance audit; and the technical outcomes of the sentinel lymph node resection rate, false-negative rate, and number of sentinel lymph nodes removed was determined. All statistical tests were two-sided. Results The overall sentinel lymph node resection success rate was 96.9% (95% confidence interval [CI] = 96.4% to 97.4%), and the overall false-negative rate was 9.5% (95% CI = 7.4% to 12.0%), with no statistical differences between training methods. Overall audit outcomes were excellent in all four categories. For all three training groups combined, a statistically significant positive association was observed between surgeons’ average number of procedural errors and their false-negative rate (ρ = +0.188, P = .021). Conclusions All three training methods resulted in uniform and high overall sentinel lymph node resection rates. Subgroup analyses identified some variation in false-negative rates that were related to audited outcome performance measures. PMID:19704072
Burnout is associated with changes in error and feedback processing.
Gajewski, Patrick D; Boden, Sylvia; Freude, Gabriele; Potter, Guy G; Falkenstein, Michael
2017-10-01
Burnout is a pattern of complaints in individuals with emotionally demanding jobs that is often seen as a precursor of depression. One often reported symptom of burnout is cognitive decline. To analyze cognitive control and to differentiate between subclinical burnout and mild to moderate depression, a double-blinded study was conducted that investigated changes in the processing of performance errors and feedback in a task switching paradigm. Fifty-one of 76 employees from emotionally demanding jobs showed a sufficient number of errors to be included in the analysis. The sample was subdivided into groups with low (EE-) and high (EE+) emotional exhaustion and no (DE-) and mild to moderate depression (DE+). The behavioral data did not significantly differ between the groups. In contrast, in the EE+ group, the error negativity (Ne/ERN) was enhanced while the error positivity (Pe) did not differ between the EE+ and EE- groups. After negative feedback the feedback-related negativity (FRN) was enhanced, while the subsequent positivity (FRP) was reduced in EE+ relative to EE-. None of these effects were observed in the DE+ vs. DE- comparison. These results suggest an upregulation of error and negative feedback processing, while the later processing of negative feedback was attenuated in employees with subclinical burnout but not in mild to moderate depression. Copyright © 2017 Elsevier B.V. All rights reserved.
Container weld identification using portable laser scanners
NASA Astrophysics Data System (ADS)
Taddei, Pierluigi; Boström, Gunnar; Puig, David; Kravtchenko, Victor; Sequeira, Vítor
2015-03-01
Identification and integrity verification of sealed containers for security applications can be obtained by employing noninvasive portable optical systems. We present a portable laser range imaging system capable of identifying welds, a byproduct of a container's physical sealing, with micrometer accuracy. It is based on the assumption that each weld has a unique three-dimensional (3-D) structure which cannot be copied or forged. We process the 3-D surface to generate a normalized depth map which is invariant to mechanical alignment errors and that is used to build compact signatures representing the weld. A weld is identified by performing cross correlations of its signature against a set of known signatures. The system has been tested on realistic datasets, containing hundreds of welds, yielding no false positives or false negatives and thus showing the robustness of the system and the validity of the chosen signature.
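A hedged one-dimensional sketch of the matching step (synthetic signatures; the system's real signatures are derived from normalized 3-D depth maps): a probe weld is identified by its peak normalized cross-correlation against a library of known signatures.

```python
# Identify a weld signature by peak normalized cross-correlation.
import numpy as np

def ncc(a, b):
    """Peak normalized cross-correlation of two equal-length signatures."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.max(np.correlate(a, b, mode="full")) / len(a)

rng = np.random.default_rng(7)
library = {f"weld_{i}": rng.normal(size=500) for i in range(5)}
probe = library["weld_3"] + rng.normal(0, 0.2, 500)   # re-scan with sensor noise

best = max(library, key=lambda k: ncc(probe, library[k]))
print(best, round(ncc(probe, library[best]), 2))      # -> weld_3, near 1.0
```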
Error-Related Negativity and Tic History in Pediatric Obsessive-Compulsive Disorder
ERIC Educational Resources Information Center
Hanna, Gregory L.; Carrasco, Melisa; Harbin, Shannon M.; Nienhuis, Jenna K.; LaRosa, Christina E.; Chen, Poyu; Fitzgerald, Kate D.; Gehring, William J.
2012-01-01
Objective: The error-related negativity (ERN) is a negative deflection in the event-related potential after an incorrect response, which is often increased in patients with obsessive-compulsive disorder (OCD). However, the relation of the ERN to comorbid tic disorders has not been examined in patients with OCD. This study compared ERN amplitudes…
Computerized tongue image segmentation via the double geo-vector flow
Shi, Miao-Jing; Li, Guo-Zheng; Li, Fu-Feng; Xu, Chao
2014-02-08
Background Visual inspection for tongue analysis is a diagnostic method in traditional Chinese medicine (TCM). Owing to the variations in tongue features, such as color, texture, coating, and shape, it is difficult to precisely extract the tongue region in images. This study aims to quantitatively evaluate tongue diagnosis via automatic tongue segmentation. Methods Experiments were conducted using a clinical image dataset provided by the Laboratory of Traditional Medical Syndromes, Shanghai University of TCM. First, a clinical tongue image was refined by a saliency window. Second, we initialized the tongue area as the upper binary part and lower level set matrix. Third, a double geo-vector flow (DGF) was proposed to detect the tongue edge and segment the tongue region in the image, such that the geodesic flow was evaluated in the lower part, and the geo-gradient vector flow was evaluated in the upper part. Results The performance of the DGF was evaluated using 100 images. The DGF exhibited better results compared with other representative studies, with its true-positive volume fraction reaching 98.5%, its false-positive volume fraction being 1.51%, and its false-negative volume fraction being 1.42%. The errors between the proposed automatic segmentation results and manual contours were 0.29 and 1.43% in terms of the standard boundary error metrics of Hausdorff distance and mean distance, respectively. Conclusions By analyzing the time complexity of the DGF and evaluating its performance via standard boundary and area error metrics, we have shown both efficiency and effectiveness of the DGF for automatic tongue image segmentation. PMID:24507094
Uclés, S; Lozano, A; Sosa, A; Parrilla Vázquez, P; Valverde, A; Fernández-Alba, A R
2017-11-01
Gas and liquid chromatography coupled to triple quadrupole tandem mass spectrometry are currently the most powerful tools employed for the routine analysis of pesticide residues in food control laboratories. However, whatever the multiresidue extraction method, there will be a residual matrix effect making it difficult to identify/quantify some specific compounds in certain cases. Two main effects stand out: (i) co-elution with isobaric matrix interferents, which can be a major drawback for unequivocal identification and can therefore produce false negative detections, and (ii) signal suppression/enhancement, commonly called the "matrix effect", which may cause serious problems including inaccurate quantitation, low analyte detectability and increased method uncertainty. The aim of this analytical study is to provide a framework for evaluating the maximum expected errors associated with the matrix effects. The worst-case study was contrived to give an estimate of the extreme errors caused by matrix effects when extraction/determination protocols are applied in routine multiresidue analysis. Twenty-five different blank matrices extracted with the four most common extraction methods used in routine analysis (citrate QuEChERS with/without PSA clean-up, ethyl acetate and the Dutch mini-Luke "NL" methods) were evaluated by both GC-QqQ-MS/MS and LC-QqQ-MS/MS. The results showed that the presence of matrix compounds with isobaric transitions to target pesticides was higher in GC than in LC under the experimental conditions tested. In a second study, the number of "potential" false negatives was evaluated. For that, ten matrices with higher percentages of natural interfering components were checked. Additionally, the results showed that for more than 90% of the cases, pesticide quantification was not affected by matrix-matched standard calibration when an interferent was kept constant along the calibration curve. The error in quantification depended on the concentration level. In a third study, the "matrix effect" was evaluated for each commodity/extraction method. Results showed 44% of cases with suppression/enhancement for LC and 93% of cases with enhancement for GC. Copyright © 2017 Elsevier B.V. All rights reserved.
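The signal suppression/enhancement reported above is conventionally quantified by comparing calibration slopes in matrix extract and in pure solvent; the slopes below are hypothetical, not the study's data.

```python
# Standard matrix-effect percentage from calibration slopes.
def matrix_effect_pct(slope_matrix: float, slope_solvent: float) -> float:
    """Negative = signal suppression, positive = enhancement."""
    return (slope_matrix / slope_solvent - 1.0) * 100.0

print(matrix_effect_pct(0.72, 1.00))  # -28.0 -> suppression (common in LC-ESI)
print(matrix_effect_pct(1.35, 1.00))  #  35.0 -> enhancement (common in GC)
```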
Accuracy of vaginal symptom self-diagnosis algorithms for deployed military women.
Ryan-Wenger, Nancy A; Neal, Jeremy L; Jones, Ashley S; Lowe, Nancy K
2010-01-01
Deployed military women have an increased risk for development of vaginitis due to extreme temperatures, primitive sanitation, hygiene and laundry facilities, and unavailable or unacceptable healthcare resources. The Women in the Military Self-Diagnosis (WMSD) and treatment kit was developed as a field-expedient solution to this problem. The primary study aims were to evaluate the accuracy of women's self-diagnosis of vaginal symptoms and eight diagnostic algorithms and to predict potential self-medication omission and commission error rates. Participants included 546 active duty, deployable Army (43.3%) and Navy (53.6%) women with vaginal symptoms who sought healthcare at troop medical clinics on base. In the clinic lavatory, women conducted a self-diagnosis using a sterile cotton swab to obtain vaginal fluid, a FemExam card to measure positive or negative pH and amines, and the investigator-developed WMSD Decision-Making Guide. Potential self-diagnoses were "bacterial infection" (bacterial vaginosis [BV] and/or trichomonas vaginitis [TV]), "yeast infection" (candida vaginitis [CV]), "no infection/normal," or "unclear." The Affirm VPIII laboratory reference standard was used to detect clinically significant amounts of vaginal fluid DNA for organisms associated with BV, TV, and CV. Women's self-diagnostic accuracy was 56% for BV/TV and 69.2% for CV. False-positives would have led to a self-medication commission error rate of 20.3% for BV/TV and 8% for CV. Potential self-medication omission error rates due to false-negatives were 23.7% for BV/TV and 24.8% for CV. The positive predictive value of diagnostic algorithms ranged from 0% to 78.1% for BV/TV and 41.7% for CV. The algorithms were based on clinical diagnostic standards. The nonspecific nature of vaginal symptoms, mixed infections, and a faulty device intended to measure vaginal pH and amines explain why none of the algorithms reached the goal of 95% accuracy. The next prototype of the WMSD kit will not include nonspecific vaginal signs and symptoms in favor of recently available point-of-care devices that identify antigens or enzymes of the causative BV, TV, and CV organisms.
Audit of litigation against the accident and emergency radiology department.
Cantoni, S; De Stefano, F; Mari, A; Savaia, F; Rosso, R; Derchi, L
2009-09-01
The aims of this study were to reduce and monitor litigation due to failure to diagnose a fracture, to evaluate whether the cases were due to radiological error or other problems in the diagnostic and therapeutic management of patients and to identify organisational, technical or functional changes or guidelines to improve the management of patients with suspected fracture and their expectations. We analysed the litigation database for the period 2004-2006 and extracted all episodes indicating failure to diagnose a fracture at the accident and emergency radiology department of our centre. The radiographs underwent blinded review by two experts, and each case was jointly analysed by a radiologist and a forensic physician to see what led to the compensation claim. We identified 22 events (2004 seven cases; 2005 eight cases; 2006 seven cases). Six cases were unrelated to radiological error. Six were due to imperceptible fractures at the time of the examination. These were accounted for by the presence of a major lesion distracting the examiner's attention from a less important associated lesion in one case, a false negative result in a patient examined on an incompletely radiolucent spinal board, and underexposure of the coccyx region in an obese patient. Six cases were related to an interpretation error by the radiologist. In the remaining cases, the lesion being referred to in the compensation claim could either not be established or the case was closed by the insurance company without compensation. Corrective measures were adopted. These included planning the purchase of a higher performance device, drawing up a protocol for imaging patients on spinal boards, reminding radiologists of the need to carefully scrutinise the entire radiogram even after having identified a lesion, and producing an information sheet explaining to patients the possibility of false negative results in cases of imperceptible lesions and inviting them to return to the department if symptoms persist. We believe the clinical and administrative analysis we performed is useful. It reviewed some administrative practices and identified critical features. We identified tools that we trust will reduce litigation.
Intraoperative analysis of sentinel lymph nodes by imprint cytology for cancer of the breast.
Shiver, Stephen A; Creager, Andrew J; Geisinger, Kim; Perrier, Nancy D; Shen, Perry; Levine, Edward A
2002-11-01
The utilization of lymphatic mapping techniques for breast carcinoma has made intraoperative evaluation of sentinel lymph nodes (SLN) attractive, because axillary lymph node dissection can be performed during the initial surgery if the SLN is positive. The optimal technique for rapid SLN assessment has not been determined. Both frozen sectioning and imprint cytology are used for rapid intraoperative SLN evaluation. A retrospective review of the intraoperative imprint cytology results of 133 SLN mapping procedures from 132 breast carcinoma patients was performed. SLN were evaluated intraoperatively by bisecting the lymph node and making imprints of each cut surface. Imprints were stained with hematoxylin and eosin (H&E) and Diff-Quik. Permanent sections were evaluated with up to four H&E stained levels and cytokeratin immunohistochemistry. Imprint cytology results were compared with final histologic results. Sensitivity and specificity of imprint cytology were 56% and 100%, respectively, producing a 100% positive predictive value and 88% negative predictive value. Imprint cytology was significantly more sensitive for macrometastasis than micrometastasis 87% versus 22% (P = 0.00007). Of 13 total false negatives, 11 were found to be due to sampling error and 2 due to errors in intraoperative interpretation. Both intraoperative interpretation errors involved a diagnosis of lobular breast carcinoma. The sensitivity and specificity of imprint cytology are similar to that of frozen section evaluation. Imprint cytology is therefore a viable alternative to frozen sectioning when intraoperative evaluation is required. If SLN micrometastasis is used to determine the need for further lymphadenectomy, more sensitive intraoperative methods will be needed to avoid a second operation.
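The accuracy figures reported in studies like this one derive from a 2 × 2 confusion matrix; the helper below shows the calculations, with illustrative counts chosen to approximate the reported pattern.

```python
# Sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # true positives / all with disease
        "specificity": tn / (tn + fp),   # true negatives / all without disease
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Example: 17 true positives, 0 false positives, 13 false negatives, and 103
# true negatives give ~57% sensitivity, 100% specificity, 100% PPV, 89% NPV,
# close to the pattern the imprint-cytology study reports.
print(diagnostic_metrics(tp=17, fp=0, fn=13, tn=103))
```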
NASA Astrophysics Data System (ADS)
Graus, Matthew S.; Neumann, Aaron K.; Timlin, Jerilyn A.
2017-01-01
Fungi in the Candida genus are the most common fungal pathogens. They not only cause high morbidity and mortality but can also cost billions of dollars in healthcare. To alleviate this burden, early and accurate identification of Candida species is necessary. However, standard identification procedures can take days and have a high false negative error rate. The method described in this study takes advantage of hyperspectral confocal fluorescence microscopy to quickly and accurately identify and characterize the unique autofluorescence spectra of different Candida species, with up to 84% accuracy when the fungi are grown in conditions that closely mimic physiological conditions.
Larson, Michael J; Clayson, Peter E; Keith, Cierra M; Hunt, Isaac J; Hedges, Dawson W; Nielsen, Brent L; Call, Vaughn R A
2016-03-01
Older adults display alterations in neural reflections of conflict-related processing. We examined response times (RTs), error rates, and event-related potential (ERP; N2 and P3 components) indices of conflict adaptation (i.e., congruency sequence effects), a cognitive control process wherein previous-trial congruency influences current-trial performance, along with post-error slowing, correct-related negativity (CRN), error-related negativity (ERN), and error positivity (Pe) amplitudes in 65 healthy older adults and 94 healthy younger adults. Older adults showed generalized slowing, had decreased post-error slowing, and committed more errors than younger adults. Both older and younger adults showed conflict adaptation effects; the magnitude of conflict adaptation did not differ by age. N2 amplitudes were similar between groups; younger, but not older, adults showed conflict adaptation effects for P3 component amplitudes. CRN and Pe, but not ERN, amplitudes differed between groups. The data support generalized declines in cognitive control processes in older adults without specific deficits in conflict adaptation.
Bottoms, Hayden C; Eslick, Andrea N; Marsh, Elizabeth J
2010-08-01
Although contradictions with stored knowledge are common in daily life, people often fail to notice them. For example, in the Moses illusion, participants fail to notice errors in questions such as "How many animals of each kind did Moses take on the Ark?" despite later showing knowledge that the Biblical reference is to Noah, not Moses. We examined whether error prevalence affected participants' ability to detect distortions in questions, and whether this in turn had memorial consequences. Many of the errors were overlooked, but participants were better able to catch them when they were more common. More generally, the failure to detect errors had negative memorial consequences, increasing the likelihood that the errors were used to answer later general knowledge questions. Methodological implications of this finding are discussed, as it suggests that typical analyses likely underestimate the size of the Moses illusion. Overall, answering distorted questions can yield errors in the knowledge base; most importantly, prior knowledge does not protect against these negative memorial consequences.
Comparison of the accuracy rates of 3-T and 1.5-T MRI of the knee in the diagnosis of meniscal tear.
Grossman, Jeffrey W; De Smet, Arthur A; Shinki, Kazuhiko
2009-08-01
The purpose of this study was to compare the accuracy of 3-T MRI with that of 1.5-T MRI of the knee in the diagnosis of meniscal tear and to analyze the causes of diagnostic error. We reviewed the medical records and original MRI interpretations of 100 consecutive patients who underwent 3-T MRI of the knee and of 100 consecutive patients who underwent 1.5-T MRI of the knee to determine the accuracy of diagnoses of meniscal tear. Knee arthroscopy was the reference standard. We retrospectively reviewed all MRI diagnostic errors to determine the cause of the errors. At arthroscopy, 109 medial and 77 lateral meniscal tears were identified in the 200 patients. With two abnormal MR images indicating a meniscal tear, the sensitivity and specificity for medial tear were 92.7% and 82.2% at 1.5-T MRI and 92.6% and 76.1% at 3-T MRI (p = 1.0, p = 0.61). The sensitivity and specificity for lateral tears were 68.4% and 95.2% at 1.5-T MRI and 69.2% and 91.8% at 3-T MRI (p = 1.0, p = 0.49). Of the false-positive diagnoses of medial meniscal tear, five of eight at 1.5 T and seven of 11 at 3 T were apparent peripheral longitudinal tears of the posterior horn. Fifteen of the 26 missed medial and lateral meniscal tears were not seen in retrospect even with knowledge of the tear type and location. Allowing for sample size limitations, we found comparable accuracy of 3-T and 1.5-T MRI of the knee in the diagnosis of meniscal tear. The causes of false-positive and false-negative MRI diagnoses of meniscal tear are similar for 3-T and 1.5-T MRI.
The false-negative rate of sentinel node biopsy in patients with breast cancer: a meta-analysis
Pesek, Sarah; Ashikaga, Taka; Krag, Lars Erik; Krag, David
2012-01-01
Background/Purpose: In sentinel node surgery for breast cancer, procedural accuracy is assessed by calculating the false-negative rate. It is important to measure this because there are potential adverse outcomes from missing node metastases. We performed a meta-analysis of published data to assess which method has achieved the lowest false-negative rate. Methods: We found 3588 articles concerning sentinel nodes and breast cancer published from 1993 through mid-2011; 183 articles met our inclusion criteria. The studies described in these 183 articles included a total of 9306 patients. We grouped the studies by injection material and injection location, and analyzed the false-negative rates according to these groupings and by year of publication. Results: There was significant variation in the false-negative rate over time, with a trend toward higher rates in later years. There was also significant variation related to injection material: the use of blue dye alone was associated with the highest false-negative rate, whereas inclusion of a radioactive tracer along with blue dye resulted in a significantly lower rate. Although there were variations in the false-negative rate according to injection location, none were significant. Discussion/Conclusions: The use of blue dye should be accompanied by a radioactive tracer to achieve a significantly lower false-negative rate. Location of injection did not have a significant impact on the false-negative rate. Given the limitations of acquiring appropriate data, the false-negative rate should not be used as a metric for training or quality control. PMID:22569745
Source localization (LORETA) of the error-related-negativity (ERN/Ne) and positivity (Pe).
Herrmann, Martin J; Römmler, Josefine; Ehlis, Ann-Christine; Heidrich, Anke; Fallgatter, Andreas J
2004-07-01
We investigated error processing in 39 subjects performing the Eriksen flanker task. In all 39 subjects, a pronounced negative deflection (ERN/Ne) and a later positive component (Pe) were observed after incorrect as compared to correct responses. The neural sources of both components were analyzed using LORETA source localization. For the negative component (ERN/Ne), we found significantly higher brain electrical activity in medial prefrontal areas for incorrect responses, whereas the positive component (Pe) was localized nearby but more rostrally within the anterior cingulate cortex (ACC). Thus, different neural generators were found for the ERN/Ne and the Pe, which further supports the notion that the two error-related components represent different aspects of error processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Peter R., E-mail: pmarti46@uwo.ca; Cool, Derek W.; Romagnoli, Cesare
2014-07-15
Purpose: Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy intends to reduce the ∼23% false negative rate of clinical two-dimensional TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsies continue to yield false negatives. Therefore, the authors propose to investigate how biopsy system needle delivery error affects the probability of sampling each tumor, by accounting for uncertainties due to guidance system error, image registration error, and irregular tumor shapes. Methods: T2-weighted, dynamic contrast-enhanced T1-weighted, and diffusion-weighted prostate MRI and 3D TRUS images were obtained from 49 patients. A radiologist and a radiology resident contoured 81 suspicious regions, yielding 3D tumor surfaces that were registered to the 3D TRUS images using an iterative closest point prostate surface-based method to yield 3D binary images of the suspicious regions in the TRUS context. The probability P of obtaining a sample of tumor tissue in one biopsy core was calculated by integrating a 3D Gaussian distribution over each suspicious region domain. Next, the authors performed an exhaustive search to determine the maximum root mean squared error (RMSE, in mm) of a biopsy system that gives P ≥ 95% for each tumor sample, and then repeated this procedure for equal-volume spheres corresponding to each tumor sample. Finally, the authors investigated the effect of probe-axis-direction error on measured tumor burden by studying the relationship between the error and the estimated percentage of core involvement. Results: Given a 3.5 mm RMSE for contemporary fusion biopsy systems, P ≥ 95% for 21 of 81 tumors. The authors determined that for a biopsy system with 3.5 mm RMSE, one cannot expect to sample tumors of approximately 1 cm³ or smaller with 95% probability with only one biopsy core. The predicted maximum RMSE giving P ≥ 95% for each tumor was consistently greater when using spherical tumor shapes as opposed to no shape assumption. However, an assumption of spherical tumor shape for RMSE = 3.5 mm led to a mean overestimation of tumor sampling probabilities of 3%, implying that assuming spherical tumor shape may be reasonable for many prostate tumors. The authors also determined that a biopsy system would need to have an RMS needle delivery error of no more than 1.6 mm in order to sample 95% of tumors with one core. The authors' experiments also indicated that the effect of axial-direction error on the measured tumor burden was mitigated by the 18 mm core length at 3.5 mm RMSE. Conclusions: For biopsy systems with RMSE ≥ 3.5 mm, more than one biopsy core must be taken from the majority of tumors to achieve P ≥ 95%. These observations support the authors' perspective that some tumors of clinically significant sizes may require more than one biopsy attempt in order to be sampled during the first biopsy session. This motivates the authors' ongoing development of an approach to optimize biopsy plans with the aim of achieving a desired probability of obtaining a sample from each tumor, while minimizing the number of biopsies. Optimized planning of within-tumor targets for MRI-3D TRUS fusion biopsy could support earlier diagnosis of prostate cancer while it remains localized to the gland and curable.
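To make the paper's central quantity concrete, here is a minimal Monte Carlo sketch of P, the probability that one core samples a tumor. The simplifying assumptions are mine, not the authors': the tumor is an equal-volume sphere aimed at its centroid, and the quoted RMSE is taken as the per-axis standard deviation of an isotropic 3D Gaussian (the authors integrate over actual contoured tumor shapes).

```python
# A minimal sketch of P under the stated assumptions; not the authors' code.
import numpy as np

def sampling_probability(rmse_mm: float, radius_mm: float,
                         n: int = 200_000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    tips = rng.normal(0.0, rmse_mm, size=(n, 3))   # needle-tip error vectors
    return float(np.mean(np.linalg.norm(tips, axis=1) <= radius_mm))

radius = (3.0 * 1000.0 / (4.0 * np.pi)) ** (1.0 / 3.0)  # 1 cm^3 sphere, ~6.2 mm
print(sampling_probability(3.5, radius))   # ~0.63 under these assumptions: < 0.95
print(sampling_probability(1.6, radius))   # ~0.998: consistent with the 1.6 mm figure
```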
Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory
2015-01-01
Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for diabetes patients, so it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on data from a diabetes clinical trial. The zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models were also evaluated. Simulation results showed that the Poisson model inflated type I error, while the negative binomial model was overly conservative. After adjusting for dispersion, however, both the Poisson and negative binomial models yielded only slightly inflated type I errors, close to the nominal level, with reasonable power. The ANCOVA model provided reasonable control of type I error. The rank ANCOVA model was associated with the greatest power and with reasonable control of type I error. Inflated type I error was observed with the ZIP and ZINB models.
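For readers who want to reproduce the flavor of this comparison, the sketch below fits Poisson and negative binomial GLMs to simulated overdispersed counts. The data, coefficients, and dispersion parameter are illustrative assumptions, not the trial's values.

```python
# A hedged sketch of comparing Poisson and negative binomial fits to
# overdispersed event counts such as hypoglycemia episodes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
treat = rng.integers(0, 2, n)                  # 0 = control, 1 = treatment
X = sm.add_constant(treat.astype(float))
# Simulate overdispersed counts via a gamma-Poisson (negative binomial) mixture.
mu = np.exp(0.5 - 0.3 * treat)
y = rng.poisson(mu * rng.gamma(shape=0.8, scale=1.25, size=n))

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
negbin_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
# Pearson chi2 / df is a quick overdispersion check: values well above 1
# under the Poisson model signal the inflated type I error described above.
print(poisson_fit.pearson_chi2 / poisson_fit.df_resid)
print(poisson_fit.params, negbin_fit.params)
```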
Dental Students' Interpretations of Digital Panoramic Radiographs on Completely Edentate Patients.
Kratz, Richard J; Nguyen, Caroline T; Walton, Joanne N; MacDonald, David
2018-03-01
The ability of dental students to interpret digital panoramic radiographs (PANs) of edentulous patients has not been documented. The aim of this retrospective study was to compare the ability of second-year (D2) dental students with that of third- and fourth-year (D3-D4) dental students to interpret and identify positional errors in digital PANs obtained from patients with complete edentulism. A total of 169 digital PANs from edentulous patients were assessed by D2 (n=84) and D3-D4 (n=85) dental students at one Canadian dental school. The correctness of the students' interpretations was determined by comparison to a gold standard established by assessments of the same PANs by two experts (a graduate student in prosthodontics and an oral and maxillofacial radiologist). Data collected were from September 1, 2006, when digital radiography was implemented at the university, to December 31, 2012. Nearly all (95%) of the PANs were acceptable diagnostically despite a high proportion (92%) of positional errors detected. A total of 301 positional errors were identified in the sample. The D2 students identified significantly more (p=0.002) positional errors than the D3-D4 students. There was no significant difference (p=0.059) in the distribution of radiographic interpretation errors between the two student groups when compared to the gold standard. Overall, the category of extragnathic findings had the highest number of false negatives (43) reported. In this study, dental students interpreted digital PANs of edentulous patients satisfactorily, but they were more adept at identifying radiographic findings compared to positional errors. Students should be reminded to examine the entire radiograph thoroughly to ensure extragnathic findings are not missed and to recognize and report patient positional errors.
Sanderson, Eleanor; Macdonald-Wallis, Corrie; Davey Smith, George
2018-01-01
Background: Negative control exposure studies are increasingly being used in epidemiological studies to strengthen causal inference regarding an exposure-outcome association when unobserved confounding is thought to be present. Negative control exposure studies contrast the magnitude of association of the negative control, which has no causal effect on the outcome but is associated with the unmeasured confounders in the same way as the exposure, with the magnitude of the association of the exposure with the outcome. A markedly larger effect of the exposure on the outcome than the negative control on the outcome strengthens inference that the exposure has a causal effect on the outcome. Methods: We investigate the effect of measurement error in the exposure and negative control variables on the results obtained from a negative control exposure study. We do this in models with continuous and binary exposure and negative control variables using analysis of the bias of the estimated coefficients and Monte Carlo simulations. Results: Our results show that measurement error in either the exposure or negative control variables can bias the estimated results from the negative control exposure study. Conclusions: Measurement error is common in the variables used in epidemiological studies; these results show that negative control exposure studies cannot be used to precisely determine the size of the effect of the exposure variable, or adequately adjust for unobserved confounding; however, they can be used as part of a body of evidence to aid inference as to whether a causal effect of the exposure on the outcome is present. PMID:29088358
Jolley, Suzanne; Thompson, Claire; Hurley, James; Medin, Evelina; Butler, Lucy; Bebbington, Paul; Dunn, Graham; Freeman, Daniel; Fowler, David; Kuipers, Elizabeth; Garety, Philippa
2014-01-01
Understanding how people with delusions arrive at false conclusions is central to the refinement of cognitive behavioural interventions. Making hasty decisions based on limited data (‘jumping to conclusions’, JTC) is one potential causal mechanism, but reasoning errors may also result from other processes. In this study, we investigated the correlates of reasoning errors under differing task conditions in 204 participants with schizophrenia spectrum psychosis who completed three probabilistic reasoning tasks. Psychotic symptoms, affect, and IQ were also evaluated. We found that hasty decision makers were more likely to draw false conclusions, but only 37% of their reasoning errors were consistent with the limited data they had gathered. The remainder directly contradicted all the presented evidence. Reasoning errors showed task-dependent associations with IQ, affect, and psychotic symptoms. We conclude that limited data-gathering contributes to false conclusions but is not the only mechanism involved. Delusions may also be maintained by a tendency to disregard evidence. Low IQ and emotional biases may contribute to reasoning errors in more complex situations. Cognitive strategies to reduce reasoning errors should therefore extend beyond encouragement to gather more data, and incorporate interventions focused directly on these difficulties. PMID:24958065
Detection, prevention, and rehabilitation of amblyopia.
Spiritus, M
1997-10-01
The necessity of preschool visual screening for reducing the prevalence of amblyopia is widely accepted. The beneficial results of large-scale screening programs conducted in Scandinavia are reported. Screening monocular visual acuity at 3.5 to 4 years of age appears to be an excellent basis for detecting and treating amblyopia and an acceptable compromise between the pitfalls encountered in screening younger children and the cost-to-benefit ratio. In this respect, several preschoolers' visual acuity charts have been evaluated. Small-target random stereotests and binocular suppression tests have also recently been developed with the aim of correcting the many false negatives (anisometropic amblyopia or bilateral high ametropia) induced by the usual stereotests. Longitudinal studies demonstrate that correction of high refractive errors decreases the risk of amblyopia and does not impede emmetropization. The validity of various photoscreening and videoscreening procedures for detecting refractive errors in infants prior to the onset of strabismus or amblyopia, as well as alternatives to conventional occlusion therapy, is discussed.
Improved detection of radioactive material using a series of measurements
NASA Astrophysics Data System (ADS)
Mann, Jenelle
The goal of this project is to develop improved algorithms for detection of radioactive sources that have low signal compared to background. The detection of low-signal sources is of interest in national security applications where the source may have weak ionizing radiation emissions, is heavily shielded, or the counting time is short (such as portal monitoring). Traditionally, to distinguish signal from background, the decision threshold (y*) is calculated by taking a long background count and limiting the false positive error (alpha error) to 5%. Problems with this method include that the background is constantly changing due to natural environmental fluctuations, and that large amounts of data taken as the detector continuously scans go unused. Rather than looking at a single measurement, this work investigates a series of N measurements and develops an appropriate decision threshold for exceeding the single-measurement threshold n times in a series of N. This methodology is investigated for rectangular, triangular, sinusoidal, Poisson, and Gaussian distributions.
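The n-out-of-N idea reduces to binomial tail arithmetic. The sketch below, assuming independent measurements and a known per-measurement false-alarm probability, finds the per-measurement alpha that keeps the series-level false-alarm rate at 5%.

```python
# A minimal sketch of the n-out-of-N decision rule (independence assumed).
from math import comb

def tail(N: int, n: int, p: float) -> float:
    """P(at least n of N independent measurements exceed the threshold)."""
    return sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(n, N + 1))

def per_trial_alpha(N: int, n: int, overall_alpha: float = 0.05) -> float:
    """Bisect for the per-measurement alpha giving the desired overall rate."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if tail(N, n, mid) < overall_alpha else (lo, mid)
    return lo

# Requiring 2 exceedances out of 5 lets each measurement use a much looser
# threshold while keeping the series-level false-alarm rate at 5%.
print(per_trial_alpha(5, 2))   # ~0.077, versus 0.05 for a single measurement
```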
Can false memories be corrected by feedback in the DRM paradigm?
McConnell, Melissa D; Hunt, R Reed
2007-07-01
Normal processes of comprehension frequently yield false memories as an unwanted by-product. The simple paradigm now known as the Deese/Roediger-McDermott (DRM) paradigm takes advantage of this fact and has been used to reliably produce false memories for laboratory study. Among the findings from past research is the difficulty of preventing false memories in this paradigm. The purpose of the present experiments was to examine the effectiveness of feedback in correcting false memories. Two experiments were conducted in which participants recalled DRM lists and either received feedback on their performance or did not. A subsequent recall test was administered to assess the effect of feedback. The results showed promising effects of feedback: feedback enhanced both error correction and the propagation of correct recall. The data also replicated previous findings of substantial error perseveration following feedback, and provide new information on the occurrence of errors following feedback. The results are discussed in terms of the activation-monitoring theory of false memory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk
Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied to two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described the effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters, contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and shows the AEDA's capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
Assessing environmental DNA detection in controlled lentic systems.
Moyer, Gregory R; Díaz-Ferguson, Edgardo; Hill, Jeffrey E; Shea, Colin
2014-01-01
Little consideration has been given to environmental DNA (eDNA) sampling strategies for rare species. The certainty of species detection relies on understanding false positive and false negative error rates. We used artificial ponds together with logistic regression models to assess the detection of African jewelfish eDNA at varying fish densities (0, 0.32, 1.75, and 5.25 fish/m³). Our objectives were to determine the most effective water stratum for eDNA detection, estimate true and false positive eDNA detection rates, and assess the number of water samples necessary to minimize the risk of false negatives. There were 28 eDNA detections in 324 1-L water samples collected from four experimental ponds. The best-approximating model indicated that the per-sample odds of eDNA detection increased by a factor of 4.86 for every 2.53 fish/m³ (1 SD) increase in fish density and decreased by a factor of 1.67 for every 1.02 °C (1 SD) increase in water temperature. The best section of the water column in which to detect eDNA was the surface and, to a lesser extent, the bottom. Although no false positives were detected, the estimated likely number of false positives in samples from ponds that contained fish averaged 3.62. At high densities of African jewelfish, 3-5 L of water provided a >95% probability for the presence/absence of its eDNA. Conversely, at moderate and low densities, the number of water samples necessary to achieve a >95% probability of eDNA detection approximated 42-73 and >100 L, respectively. Potential biases associated with incomplete detection of eDNA could be alleviated via formal estimation of eDNA detection probabilities under an occupancy modeling framework; alternatively, the filtration of hundreds of liters of water may be required to achieve a high (e.g., 95%) level of certainty that African jewelfish eDNA will be detected at low densities (i.e., <0.32 fish/m³ or 1.75 g/m³).
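The sampling-effort arithmetic implied by these results is simple to reproduce. In the sketch below the per-sample detection probabilities are illustrative stand-ins, not the study's fitted values.

```python
# If each 1-L sample independently detects eDNA with probability p, the
# number of samples needed for >= 95% cumulative detection is
# n = ceil(ln(0.05) / ln(1 - p)).
import math

def samples_for_detection(p: float, target: float = 0.95) -> int:
    return math.ceil(math.log(1 - target) / math.log(1 - p))

for p in (0.60, 0.07, 0.03):   # hypothetical high/moderate/low-density values
    print(p, samples_for_detection(p))
# p = 0.60 -> 4 samples; p = 0.07 -> 42; p = 0.03 -> 99, mirroring the
# orders of magnitude reported above (3-5 L, 42-73 L, >100 L).
```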
E/N effects on K0 values revealed by high precision measurements under low field conditions
NASA Astrophysics Data System (ADS)
Hauck, Brian C.; Siems, William F.; Harden, Charles S.; McHugh, Vincent M.; Hill, Herbert H.
2016-07-01
Ion mobility spectrometry (IMS) is used to detect chemical warfare agents, explosives, and narcotics. While IMS has a low rate of false positives, their occurrence causes the loss of time and money as the alarm is verified. Because numerous variables affect the reduced mobility (K0) of an ion, wide detection windows are required in order to ensure a low false negative response rate. Wide detection windows, however, reduce response selectivity, and interferents with similar K0 values may be mistaken for targeted compounds and trigger a false positive alarm. Detection windows could be narrowed if reference K0 values were accurately known for specific instrumental conditions. Unfortunately, there is a lack of confidence in the literature values due to discrepancies in the reported K0 values and their lack of reported error. This creates the need for accurate control and measurement of each variable affecting ion mobility, as well as for a central, accurate IMS database for reference and calibration. A new ion mobility spectrometer has been built that reduces the error of the measurements affecting K0 by an order of magnitude, to less than ±0.2%. Precise measurements of ±0.002 cm² V⁻¹ s⁻¹ or better have been produced and, as a result, an unexpected relationship between K0 and the electric field to number density ratio (E/N) has been discovered, in which the K0 values of ions decreased as a function of E/N along a second-degree polynomial trend line towards an apparent asymptote at approximately 4 Td.
NASA Astrophysics Data System (ADS)
Ha, Minsu; Nehm, Ross H.
2016-06-01
Automated computerized scoring systems (ACSSs) are being increasingly used to analyze text in many educational settings. Nevertheless, the impact of misspelled words (MSW) on scoring accuracy remains to be investigated in many domains, particularly jargon-rich disciplines such as the life sciences. Empirical studies confirm that MSW are a pervasive feature of human-generated text and that despite improvements, spell-check and auto-replace programs continue to be characterized by significant errors. Our study explored four research questions relating to MSW and text-based computer assessments: (1) Do English language learners (ELLs) produce equivalent magnitudes and types of spelling errors as non-ELLs? (2) To what degree do MSW impact concept-specific computer scoring rules? (3) What impact do MSW have on computer scoring accuracy? and (4) Are MSW more likely to impact false-positive or false-negative feedback to students? We found that although ELLs produced twice as many MSW as non-ELLs, MSW were relatively uncommon in our corpora. The MSW in the corpora were found to be important features of the computer scoring models. Although MSW did not significantly or meaningfully impact computer scoring efficacy across nine different computer scoring models, MSW had a greater impact on the scoring algorithms for naïve ideas than key concepts. Linguistic and concept redundancy in student responses explains the weak connection between MSW and scoring accuracy. Lastly, we found that MSW tend to have a greater impact on false-positive feedback. We discuss the implications of these findings for the development of next-generation science assessments.
2018-01-01
Objectives: To quality assure a Trusted Third Party linked data set to prepare it for analysis. Setting: Birth registration and notification records from the Office for National Statistics for all births in England 2005–2014, linked to Maternity Hospital Episode Statistics (HES) delivery records by NHS Digital using mothers' identifiers. Participants: All 6 676 912 births that occurred in England from 1 January 2005 to 31 December 2014. Primary and secondary outcome measures: Every link between a registered birth and an HES delivery record for the study period was categorised as either the same baby or a different baby to the same mother, or as a wrong link, by comparing common baby data items and valid values in key fields with stepwise deterministic rules. Rates of preserved and discarded links were calculated, and the features more common in each group were assessed. Results: Ninety-eight per cent of births originally linked to HES were left with one preserved link. The majority of discarded links were due to duplicate HES delivery records. Of the 4854 discarded links categorised as wrong links, clerical checks found 85% were false-positive links, 13% were quality assurance false negatives and 2% were undeterminable. Births linked using a less reliable stage of the linkage algorithm, births at home and in the London region, and births with birth weight or gestational age values missing in HES were more likely to have all links discarded. Conclusions: Linkage error, data quality issues, and false negatives in the quality assurance procedure were uncovered. The procedure could be improved by allowing for transposition in date fields and more discrimination between missing and differing values. The availability of identifiers in the datasets supported clerical checking. Other research using Trusted Third Party linkage should not assume the linked dataset is error-free or optimised for their analysis, and should allow sufficient resources for this. PMID:29500200
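A toy version of the kind of stepwise deterministic comparison described here, including the suggested improvement of allowing day/month transposition in date fields, might look like the following. Field names, tolerances, and rules are illustrative assumptions, not the study's actual algorithm.

```python
# A hedged sketch of rule-based link classification; all rules illustrative.
from datetime import date

def dates_match(a: date, b: date) -> bool:
    """Exact match, or a day/month transposition of the same values."""
    if a == b:
        return True
    try:
        return a == date(b.year, b.day, b.month)   # transposed day/month
    except ValueError:
        return False

def classify_link(reg: dict, hes: dict) -> str:
    """Classify a registration-HES link as same baby / same mother / wrong."""
    same_dob = dates_match(reg["birth_date"], hes["birth_date"])
    same_sex = reg["sex"] == hes["sex"]
    same_weight = abs(reg["birth_weight_g"] - hes["birth_weight_g"]) <= 100
    if same_dob and same_sex and same_weight:
        return "same baby"
    if reg["mother_id"] == hes["mother_id"]:
        return "different baby, same mother"   # e.g., a twin or duplicate record
    return "wrong link"

print(classify_link(
    {"birth_date": date(2010, 3, 7), "sex": "F", "birth_weight_g": 3200, "mother_id": 1},
    {"birth_date": date(2010, 7, 3), "sex": "F", "birth_weight_g": 3250, "mother_id": 1},
))   # "same baby" thanks to the transposition rule
```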
ERIC Educational Resources Information Center
Lyons, Kristen E.; Ghetti, Simona; Cornoldi, Cesare
2010-01-01
Using a new method for studying the development of false-memory formation, we examined developmental differences in the rates at which 6-, 7-, 9-, 10-, and 18-year-olds made two types of memory errors: backward causal-inference errors (i.e. falsely remembering having viewed the non-viewed cause of a previously viewed effect), and gap-filling…
Simulated rRNA/DNA Ratios Show Potential To Misclassify Active Populations as Dormant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steven, Blaire; Hesse, Cedar; Soghigian, John
The use of rRNA/DNA ratios derived from surveys of rRNA sequences in RNA and DNA extracts is an appealing but poorly validated approach to infer the activity status of environmental microbes. To improve the interpretation of rRNA/DNA ratios, we performed simulations to investigate the effects of community structure, rRNA amplification, and sampling depth on the accuracy of rRNA/DNA ratios in classifying bacterial populations as "active" or "dormant." Community structure was an insignificant factor. In contrast, the extent of rRNA amplification that occurs as cells transition from dormant to growing had a significant effect (P < 0.0001) on classification accuracy, with misclassification errors ranging from 16 to 28%, depending on the rRNA amplification model. The error rate increased to 47% when communities included a mixture of rRNA amplification models, but most of the inflated error was false negatives (i.e., active populations misclassified as dormant). Sampling depth also affected error rates (P < 0.001). Inadequate sampling depth produced various artifacts that are characteristic of rRNA/DNA ratios generated from real communities. These data show important constraints on the use of rRNA/DNA ratios to infer activity status. Whereas classification of populations as active based on rRNA/DNA ratios appears generally valid, classification of populations as dormant is potentially far less accurate.
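The simulation logic lends itself to a compact illustration. The sketch below uses invented parameters (activity fraction, amplification range, sequencing depth) to show how finite depth and a ratio > 1 rule produce false negatives; it is not the study's simulation code.

```python
# A toy simulation of rRNA/DNA ratio classification; parameters illustrative.
import numpy as np

rng = np.random.default_rng(7)
n_pop, depth = 300, 5_000
active = rng.random(n_pop) < 0.5
abundance = rng.dirichlet(np.ones(n_pop))              # DNA-level community
amplification = np.where(active, rng.uniform(2, 10, n_pop), 1.0)
rrna = abundance * amplification
rrna /= rrna.sum()

dna_counts = rng.multinomial(depth, abundance)         # DNA survey reads
rna_counts = rng.multinomial(depth, rrna)              # RNA survey reads
with np.errstate(divide="ignore", invalid="ignore"):
    ratio = rna_counts / dna_counts

called_active = ratio > 1.0                            # the usual rule of thumb
false_negative = active & ~called_active               # active called dormant
print("false-negative fraction:", false_negative.mean())
```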
Systematic review of the evidence for Trails B cut-off scores in assessing fitness-to-drive
Roy, Mononita; Molnar, Frank
2013-01-01
Background: Fitness-to-drive guidelines recommend employing the Trail Making B Test (a.k.a. Trails B), but do not provide guidance regarding cut-off scores. There is ongoing debate regarding the optimal cut-off score on the Trails B test. The objective of this study was to address this controversy by systematically reviewing the evidence for specific Trails B cut-off scores (e.g., cut-offs in both time to completion and number of errors) with respect to fitness-to-drive. Methods: Systematic review of all prospective cohort, retrospective cohort, case-control, correlation, and cross-sectional studies reporting the ability of the Trails B to predict driving safety that were published in English-language, peer-reviewed journals. Results: Forty-seven articles were reviewed. None of the articles justified sample sizes via formal calculations. Cut-off scores reported based on research include: 90 seconds, 133 seconds, 147 seconds, 180 seconds, and < 3 errors. Conclusions: There is support for the previously published Trails B cut-offs of 3 minutes or 3 errors (the '3 or 3 rule'). Major methodological limitations of this body of research were uncovered, including (1) lack of justification of sample size, leaving studies open to Type II error (i.e., false negative findings), and (2) excessive focus on associations rather than clinically useful cut-off scores. PMID:23983828
A Closer Look at Self-Reported Suicide Attempts: False Positives and False Negatives
ERIC Educational Resources Information Center
Ploderl, Martin; Kralovec, Karl; Yazdi, Kurosch; Fartacek, Reinhold
2011-01-01
The validity of self-reported suicide attempt information is undermined by false positives (e.g., incidences without intent to die), or by unreported suicide attempts, referred to as false negatives. In a sample of 1,385 Austrian adults, we explored the occurrence of false positives and false negatives with detailed, probing questions. Removing…
Koita, Ousmane A; Doumbo, Ogobara K; Ouattara, Amed; Tall, Lalla K; Konaré, Aoua; Diakité, Mahamadou; Diallo, Mouctar; Sagara, Issaka; Masinde, Godfred L; Doumbo, Safiatou N; Dolo, Amagana; Tounkara, Anatole; Traoré, Issa; Krogstad, Donald J
2012-02-01
We identified 480 persons with positive thick smears for asexual Plasmodium falciparum parasites, of whom 454 had positive rapid diagnostic tests (RDTs) for the histidine-rich protein 2 (HRP2) product of the hrp2 gene and 26 had negative tests. Polymerase chain reaction (PCR) amplification for the histidine-rich repeat region of that gene was negative in roughly one-half (10/22) of the false-negative specimens available, consistent with spontaneous deletion. False-negative RDTs were found only in persons with asymptomatic infections, and multiplicities of infection (MOIs) were lower in persons with false-negative RDTs (both P < 0.001). These results show that parasites that fail to produce HRP2 can cause patent bloodstream infections and false-negative RDT results. The importance of these observations is likely to increase as malaria control improves, because lower MOIs are associated with false-negative RDTs and false-negative RDTs are more frequent in persons with asymptomatic infections. These findings suggest that the use of HRP2-based RDTs should be reconsidered.
Lin, Kun-Ju; Huang, Jia-Yann; Chen, Yung-Sheng
2011-12-01
Glomerular filtration rate (GFR) is a commonly accepted standard estimate of renal function. Gamma camera-based methods for estimating renal uptake of (99m)Tc-diethylenetriaminepentaacetic acid (DTPA) without blood or urine sampling have been widely used; of these, the method introduced by Gates has been the most common. Currently, most gamma cameras are equipped with a commercial program for GFR determination, a semi-quantitative analysis performed by manually drawing a region of interest (ROI) over each kidney. The GFR value can then be computed automatically from the scintigraphic determination of (99m)Tc-DTPA uptake within the kidney. Delineating the kidney area is difficult when applying a fixed threshold value. Moreover, hand-drawn ROIs are tedious, time consuming, and depend highly on operator skill. We therefore developed a fully automatic renal ROI estimation system, based on the temporal changes in intensity counts, the intensity-pair distribution image contrast enhancement method, adaptive thresholding, and morphological operations, that can locate the kidney area and obtain the GFR value from a (99m)Tc-DTPA renogram. To evaluate the performance of the proposed approach, 30 clinical dynamic renograms were introduced. The fully automatic approach failed in one patient with very poor renal function. Four patients had a unilateral kidney, and the others had bilateral kidneys. The automatic contours from the remaining 54 kidneys were compared with manually drawn contours, and the 54 kidneys were included for area error and boundary error analyses. There was high correlation between two physicians' manual contours and the contours obtained by our approach. For the area error analysis, the mean true positive area overlap was 91%, the mean false negative 13.4%, and the mean false positive 9.3%. The boundary error was 1.6 pixels. The GFR calculated using this automatic computer-aided approach is reproducible and may be applied to help nuclear medicine physicians in clinical practice.
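A simplified stand-in for the segmentation pipeline described here (contrast enhancement, thresholding, morphological clean-up) can be written with scikit-image. The specific functions below are generic substitutes, not the authors' intensity-pair distribution method.

```python
# A hedged sketch of automatic kidney ROI extraction from a summed renogram.
import numpy as np
from skimage import exposure, filters, morphology, measure

def kidney_roi(summed_frames: np.ndarray) -> np.ndarray:
    """Return a boolean ROI mask from a summed dynamic renogram image."""
    img = exposure.equalize_adapthist(summed_frames / summed_frames.max())
    mask = img > filters.threshold_otsu(img)           # data-driven threshold
    mask = morphology.binary_opening(mask, morphology.disk(2))
    mask = morphology.remove_small_objects(mask, min_size=64)
    # Keep the two largest connected components (the two kidneys).
    labels = measure.label(mask)
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                                       # ignore background
    keep = np.argsort(sizes)[-2:]
    return np.isin(labels, keep[sizes[keep] > 0])
```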
False negative rates in Drosophila cell-based RNAi screens: a case study
2011-01-01
Background: High-throughput screening using RNAi is a powerful gene discovery method but is often complicated by false positive and false negative results. Whereas false positive results associated with RNAi reagents have been a matter of extensive study, the issue of false negatives has received less attention. Results: We performed a meta-analysis of several genome-wide, cell-based Drosophila RNAi screens, together with a more focused RNAi screen, and conclude that the rate of false negative results is at least 8%. Further, we demonstrate how knowledge of the cell transcriptome can be used to resolve ambiguous results and how the number of false negative results can be reduced by using multiple, independently-tested RNAi reagents per gene. Conclusions: RNAi reagents that target the same gene do not always yield consistent results due to false positives and weak or ineffective reagents. False positive results can be partially minimized by filtering with transcriptome data. RNAi libraries with multiple reagents per gene also reduce false positive and false negative outcomes when inconsistent results are disambiguated carefully. PMID:21251254
Routine cognitive errors: a trait-like predictor of individual differences in anxiety and distress.
Fetterman, Adam K; Robinson, Michael D
2011-02-01
Five studies (N=361) sought to model a class of errors--namely, those in routine tasks--that several literatures have suggested may predispose individuals to higher levels of emotional distress. Individual differences in error frequency were assessed in choice reaction-time tasks of a routine cognitive type. In Study 1, it was found that tendencies toward error in such tasks exhibit trait-like stability over time. In Study 3, it was found that tendencies toward error exhibit trait-like consistency across different tasks. Higher error frequency, in turn, predicted higher levels of negative affect, general distress symptoms, displayed levels of negative emotion during an interview, and momentary experiences of negative emotion in daily life (Studies 2-5). In all cases, such predictive relations remained significant with individual differences in neuroticism controlled. The results thus converge on the idea that error frequency in simple cognitive tasks is a significant and consequential predictor of emotional distress in everyday life. The results are novel, but discussed within the context of the wider literatures that informed them.
Bultena, Sybrine; Danielmeier, Claudia; Bekkering, Harold; Lemhöfer, Kristin
2017-01-01
Humans monitor their behavior to optimize performance, which presumably relies on stable representations of correct responses. During second language (L2) learning, however, stable representations have yet to be formed while knowledge of the first language (L1) can interfere with learning, which in some cases results in persistent errors. In order to examine how correct L2 representations are stabilized, this study examined performance monitoring in the learning process of second language learners for a feature that conflicts with their first language. Using EEG, we investigated if L2 learners in a feedback-guided word gender assignment task showed signs of error detection in the form of an error-related negativity (ERN) before and after receiving feedback, and how feedback is processed. The results indicated that initially, response-locked negativities for correct (CRN) and incorrect (ERN) responses were of similar size, showing a lack of internal error detection when L2 representations are unstable. As behavioral performance improved following feedback, the ERN became larger than the CRN, pointing to the first signs of successful error detection. Additionally, we observed a second negativity following the ERN/CRN components, the amplitude of which followed a similar pattern as the previous negativities. Feedback-locked data indicated robust FRN and P300 effects in response to negative feedback across different rounds, demonstrating that feedback remained important in order to update memory representations during learning. We thus show that initially, L2 representations may often not be stable enough to warrant successful error monitoring, but can be stabilized through repeated feedback, which means that the brain is able to overcome L1 interference, and can learn to detect errors internally after a short training session. The results contribute a different perspective to the discussion on changes in ERN and FRN components in relation to learning, by extending the investigation of these effects to the language learning domain. Furthermore, these findings provide a further characterization of the online learning process of L2 learners.
ERIC Educational Resources Information Center
Greyson, Bruce
2005-01-01
Some persons who claim to have had near-death experiences (NDEs) fail research criteria for having had NDEs ("false positives"); others who deny having had NDEs do meet research criteria for having had NDEs ("false negatives"). The author evaluated false positive claims and false negative denials in an organization that promotes near-death…
Is there any electrophysiological evidence for subliminal error processing?
Shalgi, Shani; Deouell, Leon Y
2013-08-29
The role of error awareness in executive control and modification of behavior is not fully understood. In line with many recent studies showing that conscious awareness is unnecessary for numerous high-level processes such as strategic adjustments and decision making, it has been suggested that error detection can also take place unconsciously. The Error Negativity (Ne) component, long established as a robust error-related component that differentiates between correct responses and errors, was a fine candidate to test this notion: if an Ne is elicited also by errors which are not consciously detected, it would imply a subliminal process involved in error monitoring that does not necessarily lead to conscious awareness of the error. Indeed, for the past decade, the repeated finding of a similar Ne for errors that participants became aware of and errors that did not reach awareness, compared to the smaller negativity elicited by correct responses (Correct Response Negativity; CRN), has lent the Ne the prestigious status of an index of subliminal error processing. However, there were several notable exceptions to these findings. The study in the focus of this review (Shalgi and Deouell, 2012) sheds new light on both types of previous results. We found that error detection as reflected by the Ne is correlated with subjective awareness: when awareness (or, more importantly, lack thereof) is more strictly determined using the wagering paradigm, no Ne is elicited without awareness. This result effectively resolves the issue of why there are many conflicting findings regarding the Ne and error awareness. The average Ne amplitude appears to be influenced by individual criteria for error reporting; therefore, studies containing different mixtures of participants who are more or less confident of their own performance, or paradigms that either do or do not encourage reporting low-confidence errors, will show different results. Based on this evidence, it is no longer possible to unquestioningly uphold the notion that the amplitude of the Ne is unrelated to subjective awareness and, therefore, that errors are detected without conscious awareness.
Parametric Modulation of Error-Related ERP Components by the Magnitude of Visuo-Motor Mismatch
ERIC Educational Resources Information Center
Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik
2011-01-01
Errors generate typical brain responses, characterized by two successive event-related potentials (ERP) following incorrect action: the error-related negativity (ERN) and the positivity error (Pe). However, it is unclear whether these error-related responses are sensitive to the magnitude of the error, or instead show all-or-none effects. We…
Pavlovich, Matthew J; Dunn, Emily E; Hall, Adam B
2016-05-15
Commercial spices represent an emerging class of fuels for improvised explosives. Being able to classify such spices not only by type but also by brand would represent an important step in developing methods to analytically investigate these explosive compositions. Therefore, a combined ambient mass spectrometric/chemometric approach was developed to quickly and accurately classify commercial spices by brand. Direct analysis in real time mass spectrometry (DART-MS) was used to generate mass spectra for samples of black pepper, cayenne pepper, and turmeric, along with four different brands of cinnamon, all dissolved in methanol. Unsupervised learning techniques showed that the cinnamon samples clustered according to brand. Then, we used supervised machine learning algorithms to build chemometric models with a known training set and classified the brands of an unknown testing set of cinnamon samples. Ten independent runs of five-fold cross-validation showed that the training set error for the best-performing models (i.e., the linear discriminant and neural network models) was lower than 2%. The false-positive percentages for these models were 3% or lower, and the false-negative percentages were lower than 10%. In particular, the linear discriminant model perfectly classified the testing set with 0% error. Repeated iterations of training and testing gave similar results, demonstrating the reproducibility of these models. Chemometric models were able to classify the DART mass spectra of commercial cinnamon samples according to brand, with high specificity and low classification error. This method could easily be generalized to other classes of spices, and it could be applied to authenticating questioned commercial samples of spices or to examining evidence from improvised explosives.
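The modeling step generalizes to any spectra-as-features problem. A hedged sketch with scikit-learn follows; the synthetic data, class structure, and split sizes are hypothetical stand-ins for the study's binned DART spectra.

```python
# A sketch of LDA-based brand classification with cross-validation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)
# Stand-in data: 80 spectra x 200 m/z bins for 4 hypothetical cinnamon
# brands, each brand given a slightly shifted mean spectrum.
n_per_brand, n_bins = 20, 200
X = np.vstack([rng.normal(loc=b * 0.3, scale=1.0, size=(n_per_brand, n_bins))
               for b in range(4)])
y = np.repeat(np.arange(4), n_per_brand)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
lda = LinearDiscriminantAnalysis()
print("5-fold CV error:", 1 - cross_val_score(lda, X_train, y_train, cv=5).mean())
print("held-out test error:", 1 - lda.fit(X_train, y_train).score(X_test, y_test))
```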
MacIntyre, Hugh L; Cullen, John J
2016-08-01
Regulations for ballast water treatment specify limits on the concentrations of living cells in discharge water. The vital stains fluorescein diacetate (FDA) and 5-chloromethylfluorescein diacetate (CMFDA) in combination have been recommended for use in verification of ballast water treatment technology. We tested the effectiveness of FDA and CMFDA, singly and in combination, in discriminating between living and heat-killed populations of 24 species of phytoplankton from seven divisions, verifying with quantitative growth assays that uniformly live and dead populations were compared. The diagnostic signal, per-cell fluorescence intensity, was measured by flow cytometry and alternate discriminatory thresholds were defined statistically from the frequency distributions of the dead or living cells. Species were clustered by staining patterns: for four species, the staining of live versus dead cells was distinct, and live-dead classification was essentially error free. But overlap between the frequency distributions of living and heat-killed cells in the other taxa led to unavoidable errors, well in excess of 20% in many. In 4 very weakly staining taxa, the mean fluorescence intensity in the heat-killed cells was higher than that of the living cells, which is inconsistent with the assumptions of the method. Applying the criteria of ≤5% false negative plus ≤5% false positive errors, and no significant loss of cells due to staining, FDA and FDA+CMFDA gave acceptably accurate results for only 8-10 of 24 species (i.e., 33%-42%). CMFDA was the least effective stain and its addition to FDA did not improve the performance of FDA alone.
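The threshold step described here (setting a cut-off from one population's frequency distribution and reading off the resulting error rates) can be illustrated schematically. The lognormal intensities below are invented solely to show the live/dead overlap problem.

```python
# A schematic sketch of threshold selection from intensity distributions.
import numpy as np

rng = np.random.default_rng(3)
live = rng.lognormal(mean=3.5, sigma=0.6, size=5_000)   # stained live cells
dead = rng.lognormal(mean=2.2, sigma=0.6, size=5_000)   # heat-killed cells

# e.g., threshold at the 95th percentile of the dead population:
threshold = np.percentile(dead, 95)
false_positive = np.mean(dead >= threshold)   # dead called live (5% by design)
false_negative = np.mean(live < threshold)    # live called dead: large if the
print(f"FP {false_positive:.1%}, FN {false_negative:.1%}")  # distributions overlap
```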
A new pooling strategy for high-throughput screening: the Shifted Transversal Design
Thierry-Mieg, Nicolas
2006-01-01
Background: In binary high-throughput screening projects where the goal is the identification of low-frequency events, beyond the obvious issue of efficiency, false positives and false negatives are a major concern. Pooling constitutes a natural solution: it reduces the number of tests, while providing critical duplication of the individual experiments, thereby correcting for experimental noise. The main difficulty consists in designing the pools in a manner that is both efficient and robust: few pools should be necessary to correct the errors and identify the positives, yet the experiment should not be too vulnerable to biological shakiness. For example, some information should still be obtained even if there are slightly more positives or errors than expected. This is known as the group testing problem, or pooling problem. Results: In this paper, we present a new non-adaptive combinatorial pooling design: the "shifted transversal design" (STD). It relies on arithmetics, and rests on two intuitive ideas: minimizing the co-occurrence of objects, and constructing pools of constant-sized intersections. We prove that it allows unambiguous decoding of noisy experimental observations. This design is highly flexible, and can be tailored to function robustly in a wide range of experimental settings (i.e., numbers of objects, fractions of positives, and expected error-rates). Furthermore, we show that our design compares favorably, in terms of efficiency, to the previously described non-adaptive combinatorial pooling designs. Conclusion: This method is currently being validated by field-testing in the context of yeast-two-hybrid interactome mapping, in collaboration with Marc Vidal's lab at the Dana Farber Cancer Institute. Many similar projects could benefit from using the Shifted Transversal Design. PMID:16423300
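As I read the construction, its modular-arithmetic core can be stated in a few lines: with a prime q and k layers, item i (written in base q) goes, in layer j, to the pool indexed by its digit polynomial evaluated at j modulo q. The sketch below is an illustration of that reading, not the paper's validated construction.

```python
# A compact sketch of the STD's modular-arithmetic core (my reading).
def std_pools(n_items: int, q: int, k: int):
    """Return pools[layer][pool_index] -> set of item ids (k layers, q pools each)."""
    pools = [[set() for _ in range(q)] for _ in range(k)]
    for i in range(n_items):
        digits, x = [], i
        while x:                         # base-q digits of item i
            digits.append(x % q)
            x //= q
        for j in range(k):               # evaluate digit polynomial at j, mod q
            idx = sum(d * pow(j, c, q) for c, d in enumerate(digits)) % q
            pools[j][idx].add(i)
    return pools

# 100 items, q = 7, 4 layers: each layer partitions all items into 7 pools,
# and distinct items share only a bounded number of pools across layers,
# which is what makes decoding robust to a few erroneous tests.
layers = std_pools(100, 7, 4)
print(sum(len(p) for p in layers[0]))    # 100: layer 0 covers every item once
```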
Turtle: identifying frequent k-mers with cache-efficient algorithms.
Roy, Rajat Shuvro; Bhattacharya, Debashish; Schliep, Alexander
2014-07-15
Counting the frequencies of k-mers in read libraries is often a first step in the analysis of high-throughput sequencing data. Infrequent k-mers are assumed to be a result of sequencing errors. The frequent k-mers constitute a reduced but error-free representation of the experiment, which can inform read error correction or serve as the input to de novo assembly methods. Ideally, the memory requirement for counting should be linear in the number of frequent k-mers and not in the, typically much larger, total number of k-mers in the read library. We present a novel method that balances time, space and accuracy requirements to efficiently extract frequent k-mers even for high-coverage libraries and large genomes such as human. Our method is designed to minimize cache misses by using a pattern-blocked Bloom filter to remove infrequent k-mers from consideration, in combination with a novel sort-and-compact scheme, instead of a hash, for the actual counting. Although this increases theoretical complexity, the savings in cache misses reduce the empirical running times. A variant of the method can resort to a counting Bloom filter for even larger savings in memory, at the expense of false-negative rates in addition to the false-positive rates common to all Bloom filter-based approaches. A comparison with the state of the art shows reduced memory requirements and running times. The tools are freely available for download at http://bioinformatics.rutgers.edu/Software/Turtle and http://figshare.com/articles/Turtle/791582.
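A toy version of the underlying two-pass idea (a Bloom filter screens out k-mers seen only once; survivors are counted exactly) is shown below. Turtle's cache-blocked filter and sort-and-compact machinery are far more elaborate, so this is only a conceptual sketch.

```python
# A conceptual sketch of Bloom-filter-screened k-mer counting.
from collections import Counter
from hashlib import blake2b

class BloomFilter:
    def __init__(self, bits: int = 1 << 20, hashes: int = 3):
        self.bits, self.hashes, self.array = bits, hashes, bytearray(bits // 8)
    def _positions(self, item: bytes):
        for seed in range(self.hashes):
            h = int.from_bytes(blake2b(item, salt=bytes([seed])).digest()[:8], "big")
            yield h % self.bits
    def add(self, item: bytes):
        for p in self._positions(item):
            self.array[p // 8] |= 1 << (p % 8)
    def __contains__(self, item: bytes) -> bool:
        return all(self.array[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def frequent_kmers(reads, k: int):
    """Count k-mers seen at least twice; counts omit each k-mer's first sighting."""
    seen, counts = BloomFilter(), Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k].encode()
            if kmer in seen:
                counts[kmer] += 1     # second or later sighting: count it
            else:
                seen.add(kmer)        # first sighting: remember, don't count
    return counts

print(frequent_kmers(["ACGTACGT", "ACGTTTTT"], 4).most_common(3))
```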
Gertler, Maximilian; Czogiel, Irina; Stark, Klaus; Wilking, Hendrik
2017-01-01
Poor recall during investigations of foodborne outbreaks may lead to misclassifications in exposure ascertainment. We conducted a simulation study to assess the frequency and determinants of recall errors. Lunch visitors in a cafeteria using exclusively cashless payment reported their consumption of 13 food servings available daily in the three preceding weeks using a self-administered paper questionnaire. We validated this information against electronic payment records. We assessed the effects of recall period, age, sex, education level, dietary habits and type of serving on misclassification of recall. We included 145/226 (64%) respondents who reported 27,095 consumed food items. Sensitivity of recall was 73%, specificity 96%. In multivariable analysis, for each additional day of recall period, the adjusted odds of false-negative recall increased by 8% (OR 1.1; 95% CI 1.06-1.1), of false-positive recall by 3% (OR 1.03; 95% CI 1.02-1.05), and of indecisive recall by 12% (OR 1.1; 95% CI 1.08-1.15). Sex and education level had minor effects. Forgetting to report consumed foods is more frequent than reporting food items not actually consumed. Recall deteriorates strongly with interview delay, which may make hypothesis generation and testing very challenging. Side dishes are more easily missed than main courses. If available, electronic payment data can improve food-history information.
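The reported sensitivity and specificity are standard confusion-matrix quantities; a minimal sketch (with illustrative toy counts chosen to mirror the reported 73%/96%, not the study's raw data):

```python
# Minimal sketch: validating self-reported consumption against electronic
# payment records (counts below are illustrative, not the study's raw data).
def recall_accuracy(records):
    """records: iterable of (reported: bool, actually_consumed: bool)."""
    tp = sum(r and a for r, a in records)
    fn = sum((not r) and a for r, a in records)
    tn = sum((not r) and (not a) for r, a in records)
    fp = sum(r and (not a) for r, a in records)
    sensitivity = tp / (tp + fn)  # consumed items correctly recalled
    specificity = tn / (tn + fp)  # non-consumed items correctly denied
    return sensitivity, specificity

# Hypothetical toy data: false negatives (forgotten items) outnumber
# false positives (items reported but not consumed), as in the study.
toy = [(True, True)] * 73 + [(False, True)] * 27 \
    + [(False, False)] * 96 + [(True, False)] * 4
print(recall_accuracy(toy))  # -> (0.73, 0.96)
```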
Liu, Jin-Ya; Chen, Li-Da; Cai, Hua-Song; Liang, Jin-Yu; Xu, Ming; Huang, Yang; Li, Wei; Feng, Shi-Ting; Xie, Xiao-Yan; Lu, Ming-De; Wang, Wei
2016-01-01
AIM: To present our initial experience regarding the feasibility of ultrasound virtual endoscopy (USVE) and its measurement reliability for polyp detection in an in vitro study using pig intestine specimens. METHODS: Six porcine intestine specimens containing 30 synthetic polyps underwent USVE, computed tomography colonography (CTC) and optical colonoscopy (OC) for polyp detection. The polyp measurement, defined as the maximum polyp diameter on two-dimensional (2D) multiplanar reformatted (MPR) planes, was obtained by USVE, and the absolute measurement error was analyzed using the direct measurement as the reference standard. RESULTS: USVE detected 29 (96.7%) of 30 polyps, missing one 7-mm polyp. There was one false-positive finding. Twenty-six (89.7%) of 29 reconstructed images were clearly depicted, while 29 (96.7%) of 30 polyps were displayed on CTC with one false-negative finding. In OC, all the polyps were detected. The intraclass correlation coefficient was 0.876 (95%CI: 0.745-0.940) for measurements obtained with USVE. The pooled absolute measurement errors ± the standard deviations of the depicted polyps with actual sizes ≤ 5 mm, 6-9 mm, and ≥ 10 mm were 1.9 ± 0.8 mm, 0.9 ± 1.2 mm, and 1.0 ± 1.4 mm, respectively. CONCLUSION: USVE is reliable for polyp detection and measurement in this in vitro study. PMID:27022217
Jolley, Suzanne; Thompson, Claire; Hurley, James; Medin, Evelina; Butler, Lucy; Bebbington, Paul; Dunn, Graham; Freeman, Daniel; Fowler, David; Kuipers, Elizabeth; Garety, Philippa
2014-10-30
Understanding how people with delusions arrive at false conclusions is central to the refinement of cognitive behavioural interventions. Making hasty decisions based on limited data ('jumping to conclusions', JTC) is one potential causal mechanism, but reasoning errors may also result from other processes. In this study, we investigated the correlates of reasoning errors under differing task conditions in 204 participants with schizophrenia spectrum psychosis who completed three probabilistic reasoning tasks. Psychotic symptoms, affect, and IQ were also evaluated. We found that hasty decision makers were more likely to draw false conclusions, but only 37% of their reasoning errors were consistent with the limited data they had gathered. The remainder directly contradicted all the presented evidence. Reasoning errors showed task-dependent associations with IQ, affect, and psychotic symptoms. We conclude that limited data-gathering contributes to false conclusions but is not the only mechanism involved. Delusions may also be maintained by a tendency to disregard evidence. Low IQ and emotional biases may contribute to reasoning errors in more complex situations. Cognitive strategies to reduce reasoning errors should therefore extend beyond encouragement to gather more data, and incorporate interventions focused directly on these difficulties. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graus, Matthew S.; Neumann, Aaron K.; Timlin, Jerilyn A.
2017-01-05
Fungi in the Candida genus are the most common fungal pathogens. They not only cause high morbidity and mortality but can also cost billions of dollars in healthcare. To alleviate this burden, early and accurate identification of Candida species is necessary. However, standard identification procedures can take days and have a large false negative error. The method described in this study takes advantage of hyperspectral confocal fluorescence microscopy, which enables quick and accurate identification and characterization of the unique autofluorescence spectra of different Candida species with up to 84% accuracy when grown in conditions that closely mimic physiological conditions.
Trial Sequential Analysis in systematic reviews with meta-analysis.
Wetterslev, Jørn; Jakobsen, Janus Christian; Gluud, Christian
2017-03-06
Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentist approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D²) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that Trial Sequential Analysis provides better control of type I and type II errors than traditional naïve meta-analysis. Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
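For intuition, the diversity-adjusted required information size can be sketched for a two-group comparison of a continuous outcome (a simplified illustration using the standard sample-size formula inflated by 1/(1 - D^2); all parameter values below are assumptions, not drawn from the paper):

```python
# Minimal sketch: diversity-adjusted required information size (RIS).
# Standard two-group sample-size formula inflated by the diversity measure
# D^2: RIS = n_fixed / (1 - D^2). All parameter values are illustrative.
from statistics import NormalDist

def required_information_size(delta, sigma, alpha=0.05, beta=0.10, D2=0.0):
    """Total participants needed to detect a mean difference `delta` given
    standard deviation `sigma`, two-sided type I error `alpha`, power
    1 - `beta`, and between-trial diversity `D2` (0 <= D2 < 1)."""
    z = NormalDist().inv_cdf
    n_fixed = 4 * (z(1 - alpha / 2) + z(1 - beta)) ** 2 * sigma ** 2 / delta ** 2
    return n_fixed / (1 - D2)

print(round(required_information_size(delta=0.5, sigma=1.0)))          # ~168
print(round(required_information_size(delta=0.5, sigma=1.0, D2=0.5)))  # ~336
```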
The use of source memory to identify one's own episodic confusion errors.
Smith, S M; Tindell, D R; Pierce, B H; Gilliland, T R; Gerkens, D R
2001-03-01
In 4 category cued recall experiments, participants falsely recalled nonlist common members, a semantic confusion error. Errors were more likely if critical nonlist words were presented on an incidental task, causing source memory failures called episodic confusion errors. Participants could better identify the source of falsely recalled words if they had deeply processed the words on the incidental task. For deep but not shallow processing, participants could reliably include or exclude incidentally shown category members in recall. The illusion that critical items actually appeared on categorized lists was diminished but not eradicated when participants identified episodic confusion errors post hoc among their own recalled responses; participants often believed that critical items had been on both the incidental task and the study list. Improved source monitoring can potentially mitigate episodic (but not semantic) confusion errors.
ERIC Educational Resources Information Center
Mirandola, C.; Paparella, G.; Re, A. M.; Ghetti, S.; Cornoldi, C.
2012-01-01
Enhanced semantic processing is associated with increased false recognition of items consistent with studied material, suggesting that children with poor semantic skills could produce fewer false memories. We examined whether memory errors differed in children with Attention Deficit/Hyperactivity Disorder (ADHD) and controls. Children viewed 18…
Over-Distribution in Source Memory
Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.
2012-01-01
Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494
Impacts of motivational valence on the error-related negativity elicited by full and partial errors.
Maruo, Yuya; Schacht, Annekathrin; Sommer, Werner; Masaki, Hiroaki
2016-02-01
Affect and motivation influence the error-related negativity (ERN) elicited by full errors; however, it is unknown whether they also influence ERNs to correct responses accompanied by covert incorrect response activation (partial errors). Here we compared a neutral condition with conditions where correct responses were rewarded or incorrect responses were punished with gains and losses of small amounts of money, respectively. Data analysis distinguished ERNs elicited by full and partial errors. In the reward and punishment conditions, ERN amplitudes to both full and partial errors were larger than in the neutral condition, confirming participants' sensitivity to the significance of errors. We also investigated the relationships between ERN amplitudes and the behavioral inhibition and activation systems (BIS/BAS). Regardless of reward/punishment condition, participants scoring higher on BAS showed smaller ERN amplitudes in full error trials. These findings provide further evidence that the ERN is related to motivational valence and that similar relationships hold for both full and partial errors. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Cheng, Jian; Deriche, Rachid; Jiang, Tianzi; Shen, Dinggang; Yap, Pew-Thian
2014-11-01
Spherical Deconvolution (SD) is commonly used for estimating fiber Orientation Distribution Functions (fODFs) from diffusion-weighted signals. Existing SD methods can be classified into two categories: 1) Continuous Representation based SD (CR-SD), where typically Spherical Harmonic (SH) representation is used for convenient analytical solutions, and 2) Discrete Representation based SD (DR-SD), where the signal profile is represented by a discrete set of basis functions uniformly oriented on the unit sphere. A feasible fODF should be non-negative and should integrate to unity throughout the unit sphere S². However, to our knowledge, most existing SH-based SD methods enforce non-negativity only on discretized points and not on the whole continuum of S². Maximum Entropy SD (MESD) and Cartesian Tensor Fiber Orientation Distributions (CT-FOD) are the only SD methods that ensure non-negativity throughout the unit sphere. They are, however, computationally intensive and susceptible to errors caused by numerical spherical integration. Existing SD methods are also known to overestimate the number of fiber directions, especially in regions with low anisotropy. DR-SD introduces additional error in peak detection owing to the angular discretization of the unit sphere. This paper proposes a SD framework, called Non-Negative SD (NNSD), to overcome all the limitations above. NNSD is significantly less susceptible to false-positive peaks, uses SH representation for efficient analytical spherical deconvolution, and allows accurate peak detection throughout the whole unit sphere. We further show that NNSD and most existing SD methods can be extended to work on multi-shell data by introducing a three-dimensional fiber response function. We evaluated NNSD in comparison with Constrained SD (CSD), a quadratic programming variant of CSD, MESD, and an L1-norm regularized non-negative least-squares DR-SD. Experiments on synthetic and real single-/multi-shell data indicate that NNSD improves estimation performance in terms of mean difference of angles, peak detection consistency, and anisotropy contrast between isotropic and anisotropic regions. Copyright © 2014 Elsevier Inc. All rights reserved.
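The model shared by these SD variants can be stated compactly (a generic formulation in our own notation, not reproduced from the paper):

```latex
% Generic spherical deconvolution model (notation assumed for illustration).
% The measured signal S on the unit sphere is the spherical convolution of a
% single-fiber response function R with the fODF f:
\[
  S(\mathbf{u}) \;=\; \int_{S^2} R(\mathbf{u}\cdot\mathbf{v})\, f(\mathbf{v})\, d\mathbf{v},
  \qquad
  f(\mathbf{v}) \ge 0 \;\; \forall\, \mathbf{v}\in S^2,
  \qquad
  \int_{S^2} f(\mathbf{v})\, d\mathbf{v} = 1.
\]
% NNSD's stated contribution is enforcing the non-negativity constraint over
% the whole continuum of S^2 rather than only at discretized points.
```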
Frontal Theta Links Prediction Errors to Behavioral Adaptation in Reinforcement Learning
Cavanagh, James F.; Frank, Michael J.; Klein, Theresa J.; Allen, John J.B.
2009-01-01
Investigations into action monitoring have consistently detailed a fronto-central voltage deflection in the Event-Related Potential (ERP) following the presentation of negatively valenced feedback, sometimes termed the Feedback Related Negativity (FRN). The FRN has been proposed to reflect a neural response to prediction errors during reinforcement learning, yet the single trial relationship between neural activity and the quanta of expectation violation remains untested. Although ERP methods are not well suited to single trial analyses, the FRN has been associated with theta band oscillatory perturbations in the medial prefrontal cortex. Medio-frontal theta oscillations have been previously associated with expectation violation and behavioral adaptation and are well suited to single trial analysis. Here, we recorded EEG activity during a probabilistic reinforcement learning task and fit the performance data to an abstract computational model (Q-learning) for calculation of single-trial reward prediction errors. Single-trial theta oscillatory activities following feedback were investigated within the context of expectation (prediction error) and adaptation (subsequent reaction time change). Results indicate that interactive medial and lateral frontal theta activities reflect the degree of negative and positive reward prediction error in the service of behavioral adaptation. These different brain areas use prediction error calculations for different behavioral adaptations: with medial frontal theta reflecting the utilization of prediction errors for reaction time slowing (specifically following errors), but lateral frontal theta reflecting prediction errors leading to working memory-related reaction time speeding for the correct choice. PMID:19969093
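The single-trial reward prediction error from Q-learning is the model quantity at issue; a minimal sketch (illustrative task structure and parameters, not the study's fitted model):

```python
# Minimal sketch: single-trial reward prediction errors from Q-learning
# (illustrative parameters and a random choice policy for brevity).
import random

def q_learning_rpes(trials, n_actions=2, alpha=0.2, p_reward=(0.8, 0.2)):
    """Simulate a probabilistic reward task and return the per-trial
    reward prediction error delta = r - Q(chosen action)."""
    Q = [0.0] * n_actions
    rpes = []
    for _ in range(trials):
        a = random.randrange(n_actions)
        r = 1.0 if random.random() < p_reward[a] else 0.0
        delta = r - Q[a]          # reward prediction error on this trial
        Q[a] += alpha * delta     # value update
        rpes.append(delta)
    return rpes

random.seed(1)
deltas = q_learning_rpes(200)
# Prediction errors shrink as expectations are learned.
print("mean |RPE| early:", sum(abs(d) for d in deltas[:20]) / 20)
print("mean |RPE| late: ", sum(abs(d) for d in deltas[-20:]) / 20)
```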
Working memory affects false memory production for emotional events.
Mirandola, Chiara; Toffalini, Enrico; Ciriello, Alfonso; Cornoldi, Cesare
2017-01-01
Whereas a link between working memory (WM) and memory distortions has been demonstrated, its influence on emotional false memories is unclear. In two experiments, a verbal WM task and a false memory paradigm for negative, positive or neutral events were employed. In Experiment 1, we investigated individual differences in verbal WM and found that the interaction between valence and WM predicted false recognition, with negative and positive material protecting high-WM individuals against false remembering; the beneficial effect of negative material disappeared in low-WM participants. In Experiment 2, we lowered the WM capacity of half of the participants with a dual-task requirement, which led to an overall increase in false memories; furthermore, consistent with Experiment 1, the increase in negative false memories was larger than that of neutral or positive ones. It is concluded that WM plays a critical role in determining false memory production, specifically influencing the processing of negative material.
Comparing source-based and gist-based false recognition in aging and Alzheimer's disease.
Pierce, Benton H; Sullivan, Alison L; Schacter, Daniel L; Budson, Andrew E
2005-07-01
This study examined 2 factors contributing to false recognition of semantic associates: errors based on confusion of source and errors based on general similarity information or gist. The authors investigated these errors in patients with Alzheimer's disease (AD), age-matched control participants, and younger adults, focusing on each group's ability to use recollection of source information to suppress false recognition. The authors used a paradigm consisting of both deep and shallow incidental encoding tasks, followed by study of a series of categorized lists in which several typical exemplars were omitted. Results showed that healthy older adults were able to use recollection from the deep processing task to some extent but less than that used by younger adults. In contrast, false recognition in AD patients actually increased following the deep processing task, suggesting that they were unable to use recollection to oppose familiarity arising from incidental presentation. (c) 2005 APA, all rights reserved.
42 CFR 1005.23 - Harmless error.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 5 2012-10-01 2012-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...
42 CFR 1005.23 - Harmless error.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 5 2014-10-01 2014-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...
42 CFR 1005.23 - Harmless error.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 5 2010-10-01 2010-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...
42 CFR 1005.23 - Harmless error.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 5 2013-10-01 2013-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...
42 CFR 1005.23 - Harmless error.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 5 2011-10-01 2011-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...
42 CFR 3.552 - Harmless error.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Harmless error. 3.552 Section 3.552 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PROVISIONS PATIENT SAFETY ORGANIZATIONS AND PATIENT SAFETY WORK PRODUCT Enforcement Program § 3.552 Harmless error. No error in either the...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
Yeo, Zhen Xuan; Wong, Joshua Chee Leong; Rozen, Steven G; Lee, Ann Siew Gek
2014-06-24
The Ion Torrent PGM is a popular benchtop sequencer that shows promise in replacing conventional Sanger sequencing as the gold standard for mutation detection. Despite the PGM's reported high accuracy in calling single nucleotide variations, it tends to generate many false positive calls in detecting insertions and deletions (indels), which may hinder its utility for clinical genetic testing. Recently, the proprietary analytical workflow for the Ion Torrent sequencer, Torrent Suite (TS), underwent a series of upgrades. We evaluated three major upgrades of TS by calling indels in the BRCA1 and BRCA2 genes. Our analysis revealed that false negative indels could be generated by TS under both default calling parameters and parameters adjusted for maximum sensitivity. However, indel calling with the same data using the open source variant callers GATK and SAMtools showed that false negatives could be minimised with appropriate bioinformatics analysis. Furthermore, we identified two variant calling measures, Quality-by-Depth (QD) and VARiation of the Width of gaps and inserts (VARW), which substantially reduced false positive indels, including non-homopolymer-associated errors, without compromising sensitivity. In our best case scenario, which involved the TMAP aligner and SAMtools, we achieved 100% sensitivity, 99.99% specificity and 29% False Discovery Rate (FDR) in indel calling from all 23 samples, which is good performance for mutation screening using the PGM. New versions of TS, BWA and GATK have shown improvements in indel calling sensitivity and specificity over their older counterparts. However, the variant caller of TS exhibits lower sensitivity than GATK and SAMtools. Our findings demonstrate that although indel calling from PGM sequences may appear noisy at first glance, proper computational indel calling analysis is able to maximize both sensitivity and specificity at the single base level, paving the way for the usage of this technology for future clinical genetic testing.
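As an illustration of quality-by-depth filtering (QD is taken here simply as site quality divided by read depth, one common definition; the threshold and records are hypothetical, and the paper's VARW measure is not reproduced):

```python
# Minimal sketch: filtering candidate indels by Quality-by-Depth (QD).
# QD here = site QUAL / read depth -- one common definition; thresholds
# and candidate records are hypothetical.
def filter_indels(calls, qd_threshold=2.0):
    """calls: list of dicts with 'qual' and 'depth'; returns passing calls."""
    passing = []
    for call in calls:
        qd = call["qual"] / max(call["depth"], 1)
        if qd >= qd_threshold:
            passing.append(call)
    return passing

# Hypothetical candidates: a well-supported indel, and a high-depth but
# low-quality artifact whose QD falls below the threshold.
candidates = [
    {"pos": 101, "qual": 200.0, "depth": 40},   # QD = 5.00 -> keep
    {"pos": 202, "qual": 60.0, "depth": 400},   # QD = 0.15 -> drop
]
print(filter_indels(candidates))
```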
The effect of image quality and forensic expertise in facial image comparisons.
Norell, Kristin; Läthén, Klas Brorsson; Bergström, Peter; Rice, Allyson; Natu, Vaidehi; O'Toole, Alice
2015-03-01
Images of perpetrators in surveillance video footage are often used as evidence in court. In this study, identification accuracy in facial image comparisons was compared for forensic experts and untrained persons, along with the impact of image quality. Participants viewed thirty image pairs and were asked to rate the level of support garnered from their observations for concluding whether or not the two images showed the same person. Forensic experts reached their conclusions with significantly fewer errors than did untrained participants. They were also better than novices at determining when two high-quality images depicted the same person. Notably, lower image quality led to more careful conclusions by experts but not by untrained participants. In summary, the untrained participants made more false negatives and false positives than experts, which in the latter case could increase the risk of an innocent person being convicted when the witness is untrained. © 2014 American Academy of Forensic Sciences.
Limits of detection and decision. Part 3
NASA Astrophysics Data System (ADS)
Voigtman, E.
2008-02-01
It has been shown that the MARLAP (Multi-Agency Radiological Laboratory Analytical Protocols) method for estimating the Currie detection limit, which is based on 'critical values of the non-centrality parameter of the non-central t distribution', is intrinsically biased, even if no calibration curve or regression is used. This completed the refutation of the method, begun in Part 2. With the field cleared of obstructions, the true theory underlying Currie's limits of decision, detection and quantification, as they apply in a simple linear chemical measurement system (CMS) having heteroscedastic, Gaussian measurement noise and using weighted least squares (WLS) processing, was then derived. Extensive Monte Carlo simulations were performed, on 900 million independent calibration curves, for linear, "hockey stick" and quadratic noise precision models (NPMs). With errorless NPM parameters, all the simulation results were found to be in excellent agreement with the derived theoretical expressions. Even with as much as 30% noise on all of the relevant NPM parameters, the worst absolute error in the rates of false positives and false negatives was only 0.3%.
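For reference, in the simple homoscedastic Gaussian case with known blank standard deviation, Currie's limits reduce to the textbook form below (our notation; the paper's contribution concerns the harder heteroscedastic WLS setting):

```latex
% Currie's limits for a homoscedastic Gaussian blank with known sigma_0
% (textbook case; the paper treats the heteroscedastic WLS setting).
\[
  L_C = z_{1-\alpha}\,\sigma_0, \qquad
  L_D = L_C + z_{1-\beta}\,\sigma_0 = (z_{1-\alpha} + z_{1-\beta})\,\sigma_0,
\]
% so that for alpha = beta = 0.05, L_D \approx 3.29\,\sigma_0: a blank exceeds
% the decision level L_C with probability alpha (false positive), while a true
% signal at L_D exceeds L_C with probability 1 - beta (false negative rate beta).
```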
Alterations in Error-Related Brain Activity and Post-Error Behavior over Time
ERIC Educational Resources Information Center
Themanson, Jason R.; Rosen, Peter J.; Pontifex, Matthew B.; Hillman, Charles H.; McAuley, Edward
2012-01-01
This study examines the relation between the error-related negativity (ERN) and post-error behavior over time in healthy young adults (N = 61). Event-related brain potentials were collected during two sessions of an identical flanker task. Results indicated changes in ERN and post-error accuracy were related across task sessions, with more…
[Risk and risk management in aviation].
Müller, Manfred
2004-10-01
RISK MANAGEMENT: The large proportion of human error in aviation accidents suggested a solution that seems brilliant at first sight: replace the fallible human being with an "infallible" digitally operating computer. However, even after the introduction of so-called HITEC airplanes, human error still accounts for 75% of all accidents. Thus, if the computer is ruled out as the ultimate safety system, how else can complex operations involving quick and difficult decisions be controlled? OPTIMIZED TEAM INTERACTION/PARALLEL CONNECTION OF THOUGHT MACHINES: Since a single person is always highly error-prone, support and control have to be provided by a second person. The independent working of two minds results in a safety net that cushions human errors more efficiently. NON-PUNITIVE ERROR MANAGEMENT: To be able to tackle the actual problems, the open discussion of errors that have occurred must not be endangered by the threat of punishment. It has been shown in the past that progress is primarily achieved by investigating and following up mistakes, failures and catastrophes shortly after they happen. HUMAN FACTOR RESEARCH PROJECT: A comprehensive survey showed the following result: by far the most frequent safety-critical situation (37.8% of all events) consists of the following combination of risk factors: 1. A complication develops. 2. In this situation of increased stress a human error occurs. 3. The negative effects of the error cannot be corrected or eased because there are deficiencies in team interaction on the flight deck. This means, for example, that a negative social climate acts as a "turbocharger" when a human error occurs. It needs to be pointed out that a negative social climate is not the same as an open dispute. In many cases the working climate is burdened without the responsible person even noticing it: a first negative impression, too much or too little respect, contempt, misunderstandings, leaving vague concerns unspoken, etc. can considerably reduce the efficiency of a team.
NASA Astrophysics Data System (ADS)
Duan, Wansuo; Zhao, Peng
2017-04-01
Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; the second is characterized by a pattern almost opposite to the first. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it generates smaller prediction errors. Random errors cannot generate an SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors, with NFSV tendency errors causing the most significant SPB for El Niño events. In addition, NFSVs often concentrate their large-value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent the areas to which El Niño predictions are most sensitive with respect to model errors. These areas also coincide with the sensitive areas related to initial errors identified in previous studies. This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors, thereby greatly improving ENSO forecasts.
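The NFSV itself can be stated compactly as an optimization over constant tendency perturbations (a sketch in assumed notation, following the usual definition of nonlinear forcing singular vectors, not quoted from this paper):

```latex
% Sketch of the NFSV optimization (assumed notation).
\[
  f^{*} \;=\; \arg\max_{\|f\|_a \le \delta}\;
  \bigl\| M_{\tau}(f)(x_0) - M_{\tau}(0)(x_0) \bigr\|_b ,
\]
% where M_tau(f) propagates the model from initial state x_0 over lead time
% tau with a constant tendency perturbation f, delta bounds the perturbation
% amplitude, and the norms a and b measure perturbation size and forecast
% departure, respectively. The FSV is the analogous maximizer for the
% tangent linear model.
```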
Storbeck, Justin
2013-01-01
I investigated whether negative affective states enhance encoding of and memory for item-specific information, thereby reducing false memories. Positive, negative, and neutral moods were induced, and participants then completed a Deese-Roediger-McDermott (DRM) false-memory task. List items were presented in unique spatial locations or unique fonts to serve as measures of item-specific encoding. The negative mood conditions had more accurate memories for item-specific information, and they also had fewer false memories. The final experiment used a manipulation that drew attention to distinctive information, which aided learning of the DRM words but also promoted item-specific encoding. For the condition that promoted item-specific encoding, false memories were reduced for positive and neutral mood conditions to a rate similar to that of the negative mood condition. These experiments demonstrate that negative affective cues promote item-specific processing, reducing false memories. People in positive and negative moods encode events differently, creating different memories for the same event.
Wormwood, Jolie Baumann; Lynn, Spencer K; Feldman Barrett, Lisa; Quigley, Karen S
2016-01-01
We examined how the Boston Marathon bombings affected threat perception in the Boston community. In a threat perception task, participants attempted to "shoot" armed targets and avoid shooting unarmed targets. Participants viewing images of the bombings accompanied by affectively negative music and text (e.g., "Terror Strikes Boston") made more false alarms (i.e., more errors "shooting" unarmed targets) compared to participants viewing the same images accompanied by affectively positive music and text (e.g., "Boston Strong") and participants who did not view bombing images. This difference appears to be driven by decreased sensitivity (i.e., decreased ability to distinguish guns from non-guns) as opposed to a more liberal bias (i.e., favouring the "shoot" response). Additionally, the more strongly affected the participant was by the bombings, the more their sensitivity was reduced in the negatively framed condition, suggesting that this framing was particularly detrimental to the most vulnerable individuals in the affected community.
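The distinction between reduced sensitivity and a more liberal bias is a signal-detection one; a minimal sketch of how the two are separated (standard d' and criterion formulas, with illustrative rates rather than the study's data):

```python
# Minimal sketch: signal-detection sensitivity (d') and criterion (c)
# from hit and false-alarm rates (illustrative numbers, not study data).
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    z_hit, z_fa = z(hit_rate), z(fa_rate)
    d_prime = z_hit - z_fa             # ability to tell guns from non-guns
    criterion = -(z_hit + z_fa) / 2    # negative c = liberal ("shoot") bias
    return d_prime, criterion

# The same rise in false alarms can reflect two different mechanisms:
print(dprime_and_criterion(0.80, 0.20))  # baseline: d' ~ 1.68, c = 0
print(dprime_and_criterion(0.70, 0.30))  # reduced sensitivity, no bias shift
print(dprime_and_criterion(0.88, 0.30))  # similar d', more liberal criterion
```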
Attention and memory bias to facial emotions underlying negative symptoms of schizophrenia.
Jang, Seon-Kyeong; Park, Seon-Cheol; Lee, Seung-Hwan; Cho, Yang Seok; Choi, Kee-Hong
2016-01-01
This study assessed bias in selective attention to facial emotions in the negative symptoms of schizophrenia and its influence on subsequent memory for facial emotions. Thirty people with schizophrenia, with high or low levels of negative symptoms (n = 15 each), and 21 healthy controls completed a visual probe detection task investigating selective attention bias (happy, sad, and angry faces randomly presented for 50, 500, or 1000 ms). A yes/no incidental facial memory task was then completed. Attention bias scores and recognition errors were calculated. Those with high negative symptoms exhibited reduced attention to emotional faces relative to neutral faces; those with low negative symptoms showed the opposite pattern when faces were presented for 500 ms, regardless of valence. Compared to healthy controls, those with high negative symptoms made more errors for happy faces in the memory task. Reduced attention to emotional faces in the probe detection task was significantly associated with less pleasure and motivation and with more recognition errors for happy faces in the schizophrenia group only. Attention bias away from emotional information relatively early in the attentional process, and the associated diminished positive memory, may relate to pathological mechanisms for negative symptoms.
Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?
ERIC Educational Resources Information Center
Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.
2007-01-01
This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…
Error-Analysis for Correctness, Effectiveness, and Composing Procedure.
ERIC Educational Resources Information Center
Ewald, Helen Rothschild
The assumptions underpinning grammatical mistakes can often be detected by looking for patterns of errors in a student's work. Assumptions that negatively influence rhetorical effectiveness can similarly be detected through error analysis. On a smaller scale, error analysis can also reveal assumptions affecting rhetorical choice. Snags in the…
Tavares, Suelene B N; Alves de Sousa, Nadja L; Manrique, Edna J C; Pinheiro de Albuquerque, Zair B; Zeferino, Luiz C; Amaral, Rita G
2008-06-25
Rapid prescreening (RPS) is an internal quality-control (IQC) method that is used both to reduce errors in the laboratory and to measure the sensitivity of routine screening (RS). Little direct data are available comparing RPS with other more widely used IQC methods. The authors compared the performance of RPS, 10% random review of negative smears (R-10%), and directed rescreening of negative smears based on clinical risk criteria (RCRC) over 1 year in a community clinic setting. In total, 6,135 smears were evaluated. The sensitivity of RS alone was 71.3%. RPS detected significantly more false-negative (FN) cases (132 cases) than either R-10% (7 cases) or RCRC (32 cases). RPS significantly improved the overall sensitivity of the laboratory (71.3-92.2%; P = .001); neither R-10% nor RCRC significantly changed the sensitivity of RS. RPS was not as specific as the other methods, although nearly 68% of all abnormalities detected by RPS were verified as real. RPS of 100% of smears required the same amount of time as RCRC but twice as much time as R-10%. The current results demonstrate that RPS is a much more effective IQC method than either R-10% or RCRC. RPS detects significantly more errors and can improve the overall sensitivity of a laboratory with a modest increase or no increase in overall time spent on IQC. R-10% is an insensitive IQC method, and neither R-10% nor RCRC can significantly improve the overall sensitivity of a laboratory. (c) 2008 American Cancer Society.
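The reported sensitivity gain implies rough absolute case counts (a back-of-envelope reconstruction under our own assumptions, not figures taken from the paper):

```python
# Back-of-envelope reconstruction (assumptions, not figures from the paper):
# if routine screening alone had 71.3% sensitivity and adding the 132
# RPS-detected false negatives raised overall sensitivity to 92.2%, the
# implied total number of truly abnormal smears follows directly.
rs_sens, combined_sens, rps_extra = 0.713, 0.922, 132
total_abnormal = rps_extra / (combined_sens - rs_sens)
print(round(total_abnormal))               # ~632 abnormal smears in total
print(round(rs_sens * total_abnormal))     # ~450 found by routine screening
```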
Alfsen, G Cecilie; Chen, Ying; Kähler, Hanne; Bukholm, Ida Rashida Khan
2016-12-01
The Norwegian System of Patient Injury Compensation (NPE) processes compensation claims from patients who complain about malpractice in the health services. A wrong diagnosis in pathology may cause serious injury to the patient, but the incidence of compensation claims is unknown, because pathology is not specified as a separate category in NPE’s statistics. Knowledge about errors is required to assess quality-enhancing measures. We have therefore searched through the NPE records to identify cases whose background stems from errors committed in pathology departments and laboratories. We have searched through the NPE records for cases related to pathology for the years 2010 – 2015. During this period the NPE processed a total of 26 600 cases, of which 93 were related to pathology. The compensation claim was upheld in 66 cases, resulting in total compensation payments amounting to NOK 63 million. False-negative results in the form of undetected diagnoses were the most frequent grounds for compensation claims (63 cases), with an undetected malignant melanoma (n = 23) or atypia in cell samples from the cervix uteri (n = 16) as the major groups. Sixteen cases involved non-diagnostic issues such as mix-up of samples (n = 8), contamination of samples (n = 4) or delayed responses (n = 4). The number of compensation claims caused by errors in pathology diagnostics is low in relative terms. The errors may, however, be of a serious nature, especially if malignant conditions are overlooked or samples mixed up.
Hakkarainen, Elina; Pirilä, Silja; Kaartinen, Jukka; van der Meere, Jaap J
2013-06-01
This study evaluated the brain activation state during error making in youth with mild spastic cerebral palsy and a peer control group while carrying out a stimulus recognition task. The key question was whether patients were detecting their own errors and subsequently improving their performance in a future trial. Findings indicated that error responses of the group with cerebral palsy were associated with weak motor preparation, as indexed by the amplitude of the late contingent negative variation. However, patients were detecting their errors as indexed by the amplitude of the response-locked negativity and thus improved their performance in a future trial. Findings suggest that the consequence of error making on future performance is intact in a sample of youth with mild spastic cerebral palsy. Because the study group is small, the present findings need replication using a larger sample.
Is there any electrophysiological evidence for subliminal error processing?
Shalgi, Shani; Deouell, Leon Y.
2013-01-01
The role of error awareness in executive control and modification of behavior is not fully understood. In line with many recent studies showing that conscious awareness is unnecessary for numerous high-level processes such as strategic adjustments and decision making, it was suggested that error detection can also take place unconsciously. The Error Negativity (Ne) component, long established as a robust error-related component that differentiates between correct responses and errors, was a fine candidate to test this notion: if an Ne is elicited also by errors which are not consciously detected, it would imply a subliminal process involved in error monitoring that does not necessarily lead to conscious awareness of the error. Indeed, for the past decade, the repeated finding of a similar Ne for errors which became aware and errors that did not achieve awareness, compared to the smaller negativity elicited by correct responses (Correct Response Negativity; CRN), has lent the Ne the prestigious status of an index of subliminal error processing. However, there were several notable exceptions to these findings. The study in the focus of this review (Shalgi and Deouell, 2012) sheds new light on both types of previous results. We found that error detection as reflected by the Ne is correlated with subjective awareness: when awareness (or more importantly lack thereof) is more strictly determined using the wagering paradigm, no Ne is elicited without awareness. This result effectively resolves the issue of why there are many conflicting findings regarding the Ne and error awareness. The average Ne amplitude appears to be influenced by individual criteria for error reporting and therefore, studies containing different mixtures of participants who are more confident of their own performance or less confident, or paradigms that either encourage or don't encourage reporting low confidence errors will show different results. Based on this evidence, it is no longer possible to unquestioningly uphold the notion that the amplitude of the Ne is unrelated to subjective awareness, and therefore, that errors are detected without conscious awareness. PMID:24009548
ERIC Educational Resources Information Center
Mazur, Elizabeth; Wolchik, Sharlene A.; Virdin, Lynn; Sandler, Irwin N.; West, Stephen G.
1999-01-01
Examined whether children's cognitive biases moderated impact of stressful divorce-related events on adjustment in 9- to 12-year olds. Found that endorsing negative cognitive errors for hypothetical divorce events moderated relations between stressful divorce events and self- and maternal-reports of internalizing and externalizing symptoms for…
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
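The error-propagation insight can be illustrated abstractly (a toy sketch in Python, far removed from the actual compiler transformation; all names are ours): computing each index incrementally from the previous one lets a corrupted step perturb every later index, so a single check of the final index at loop exit catches the fault.

```python
# Toy sketch of PRESAGE's error-propagation idea (illustration only; the
# real technique is a compiler transformation over address computations).
def sum_strided(data, start, stride, steps, flip_at=None):
    total, idx = 0, start
    for s in range(steps):
        if s == flip_at:
            idx ^= 1 << 3          # simulate a bit-flip in the index value
        total += data[idx % len(data)]
        idx += stride              # incremental: any error propagates onward
    # Detector at loop exit: the final index has a closed-form value.
    expected_final = start + steps * stride
    if idx != expected_final:
        raise RuntimeError("soft error detected in address generation")
    return total

data = list(range(100))
print(sum_strided(data, start=0, stride=3, steps=10))        # clean run
try:
    sum_strided(data, start=0, stride=3, steps=10, flip_at=4)
except RuntimeError as e:
    print(e)                       # corruption propagates and is caught
```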
Performance monitoring and error significance in patients with obsessive-compulsive disorder.
Endrass, Tanja; Schuermann, Beate; Kaufmann, Christan; Spielberg, Rüdiger; Kniesche, Rainer; Kathmann, Norbert
2010-05-01
Performance monitoring has consistently been found to be overactive in obsessive-compulsive disorder (OCD). The present study examines whether performance monitoring in OCD is adjusted to error significance. Therefore, errors in a flanker task were followed by neutral (standard condition) or punishment feedback (punishment condition). In the standard condition patients had significantly larger error-related negativity (ERN) and correct-related negativity (CRN) amplitudes than controls. In the punishment condition, however, the groups did not differ in ERN and CRN amplitudes. While healthy controls showed an amplitude enhancement between the standard and punishment conditions, OCD patients showed no variation. In contrast, group differences were not found for the error positivity (Pe): both groups had larger Pe amplitudes in the punishment condition. Results confirm earlier findings of overactive error monitoring in OCD. The absence of a variation with error significance might indicate that OCD patients are unable to down-regulate their monitoring activity according to external requirements. Copyright 2010 Elsevier B.V. All rights reserved.
de Cueto, Marina; Ceballos, Esther; Martinez-Martinez, Luis; Perea, Evelio J.; Pascual, Alvaro
2004-01-01
In order to further decrease the time lapse between initial inoculation of blood culture media and the reporting of identification and antimicrobial susceptibility results for microorganisms causing bacteremia, we performed a prospective study in which specially processed fluid from positive aerobic blood culture bottles from the Bactec 9240 system (Becton Dickinson, Cockeysville, Md.) was directly inoculated into Vitek 2 system cards (bioMérieux, France). Organism identification and susceptibility results were compared with those obtained from cards inoculated with a standardized bacterial suspension obtained following subculture to agar; 100 consecutive positive monomicrobic blood cultures, consisting of 50 gram-negative rods and 50 gram-positive cocci, were included in the study. For gram-negative organisms, 31 of the 50 (62%) showed complete agreement with the standard method for species identification, while none of the 50 gram-positive cocci were correctly identified by the direct method. For gram-negative rods, there was 50% categorical agreement between the direct and standard methods for all drugs tested. The very major error rate was 2.4%, and the major error rate was 0.6%. The overall error rate for gram-negatives was 6.6%. Complete agreement in clinical categories for all antimicrobial agents evaluated was obtained for 19 of 50 (38%) gram-positive cocci; the overall error rate was 8.4%, with 2.8% minor errors, 2.4% major errors, and 3.2% very major errors. These findings suggest that Vitek 2 cards inoculated directly from positive Bactec 9240 bottles do not provide acceptable bacterial identification or susceptibility testing in comparison with corresponding cards tested by the standard method. PMID:15297523
Kim, Yoonsang; Huang, Jidong; Emery, Sherry
2016-02-26
Social media have transformed the communications landscape. People increasingly obtain news and health information online and via social media. Social media platforms also serve as novel sources of rich observational data for health research (including infodemiology, infoveillance, and digital disease detection). While the number of studies using social data is growing rapidly, very few of these studies transparently outline their methods for collecting, filtering, and reporting those data. Keywords and search filters applied to social data form the lens through which researchers may observe what and how people communicate about a given topic. Without a properly focused lens, research conclusions may be biased or misleading. Standards of reporting data sources and quality are needed so that data scientists and consumers of social media research can evaluate and compare methods and findings across studies. We aimed to develop and apply a framework of social media data collection and quality assessment and to propose a reporting standard, which researchers and reviewers may use to evaluate and compare the quality of social data across studies. We propose a conceptual framework consisting of three major steps in collecting social media data: develop, apply, and validate search filters. This framework is based on two criteria: retrieval precision (how much of the retrieved data is relevant) and retrieval recall (how much of the relevant data is retrieved). We then discuss two conditions that estimation of retrieval precision and recall relies on--accurate human coding and full data collection--and how to calculate these statistics in cases that deviate from the two ideal conditions. We then apply the framework to a real-world example using approximately 4 million tobacco-related tweets collected from the Twitter firehose. We developed and applied a search filter to retrieve e-cigarette-related tweets from the archive based on three keyword categories: devices, brands, and behavior. The search filter retrieved 82,205 e-cigarette-related tweets from the archive and was validated. Retrieval precision was calculated above 95% in all cases. Retrieval recall was 86% assuming ideal conditions (no human coding errors and full data collection), 75% when unretrieved messages could not be archived, 86% assuming no false negative errors by coders, and 93% allowing both false negative and false positive errors by human coders. This paper sets forth a conceptual framework for the filtering and quality evaluation of social data that addresses several common challenges and moves toward establishing a standard of reporting social data. Researchers should clearly delineate data sources, how data were accessed and collected, and the search filter building process, including how retrieval precision and recall were calculated. The proposed framework can be adapted to other public social media platforms.
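Retrieval precision and recall are computed as follows (a minimal sketch with hypothetical coded-sample counts, not the study's data):

```python
# Minimal sketch: retrieval precision and recall for a social-media search
# filter, estimated from a human-coded sample (illustrative numbers only).
def retrieval_metrics(tp, fp, fn):
    precision = tp / (tp + fp)  # share of retrieved messages that are relevant
    recall = tp / (tp + fn)     # share of relevant messages that are retrieved
    return precision, recall

# Hypothetical coded sample: 950 relevant of 1000 retrieved; 150 relevant
# messages found in the archive that the filter missed.
precision, recall = retrieval_metrics(tp=950, fp=50, fn=150)
print(f"precision={precision:.2f}, recall={recall:.2f}")  # 0.95, 0.86
```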
Blaser, Simon; Diem, Hanspeter; von Felten, Andreas; Gueuning, Morgan; Andreou, Michael; Boonham, Neil; Tomlinson, Jennifer; Müller, Pie; Utzinger, Jürg; Frey, Jürg E; Bühlmann, Andreas
2018-06-01
Rapid genetic on-site identification methods at points of entry, such as seaports and airports, have the potential to become important tools to prevent the introduction and spread of economically harmful pest species that are unintentionally transported by the global trade of plant commodities. This paper reports the development and evaluation of a loop-mediated isothermal amplification (LAMP)-based identification system to prevent introduction of the three most frequently encountered regulated quarantine insect species groups at Swiss borders: Bemisia tabaci, Thrips palmi and several regulated fruit flies of the genera Bactrocera and Zeugodacus. The LAMP primers were designed to target a fragment of the mitochondrial cytochrome c oxidase subunit I gene and were generated based on publicly available DNA sequences. Laboratory evaluations analysing 282 insect specimens suspected to be quarantine organisms revealed an overall test efficiency of 99%. Additional on-site evaluation at a point of entry using 37 specimens, performed by plant health inspectors with minimal laboratory training, resulted in an overall test efficiency of 95%. During both evaluation rounds there were no false positives, and the observed false negatives were attributable to human-induced handling errors. To address the possibility of accidental introduction of pests as a result of rare false-negative results, samples yielding negative results in the LAMP method were also subjected to DNA barcoding. Our LAMP assays reliably differentiated between the tested regulated and non-regulated insect species in under 1 h. Hence, LAMP assays represent suitable tools for rapid on-site identification of harmful pests, which might facilitate an accelerated import control process for plant commodities. © 2018 The Authors. Pest Management Science published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.
Maltha, Jessica; Gillet, Philippe; Heutmekers, Marloes; Bottieau, Emmanuel; Van Gompel, Alfons; Jacobs, Jan
2013-01-01
In the past, malaria rapid diagnostic tests (RDTs) for self-diagnosis by travelers were considered suboptimal due to poor performance. Nowadays RDTs for self-diagnosis are marketed and available through the internet. The present study assessed RDT products marketed for self-diagnosis for diagnostic accuracy and for quality of labeling, content and instructions for use (IFU). Diagnostic accuracy of eight RDT products was assessed with a panel of stored whole blood samples comprising the four Plasmodium species (n = 90) as well as Plasmodium-negative samples (n = 10). IFUs were assessed for quality of description of procedure and interpretation and for lay-out and readability level. Errors in packaging and content were recorded. Two products gave false-positive test lines in 70% and 80% of Plasmodium-negative samples, precluding their use. Of the remaining products, 4/6 had good to excellent sensitivity for the diagnosis of Plasmodium falciparum (98.2%-100.0%) and Plasmodium vivax (93.3%-100.0%). Sensitivity for Plasmodium ovale and Plasmodium malariae diagnosis was poor (6.7%-80.0%). All but one product yielded false-positive test lines after reading beyond the recommended reading time. Problems with labeling (not specifying target antigens; n = 3) and content (desiccant with no humidity indicator; n = 6) were observed. IFUs had major shortcomings in description of test procedure and interpretation, poor readability and lay-out, and user-unfriendly typography. Strategic issues (e.g. the need for repeat testing and reasons for false-negative tests) were not addressed in any of the IFUs. Diagnostic accuracy of RDTs for self-diagnosis was variable, with only 4/8 RDT products being reliable for the diagnosis of P. falciparum and P. vivax, and none for P. ovale and P. malariae. RDTs for self-diagnosis need improvements in IFUs (content and user-friendliness), labeling and content before they can be considered for self-diagnosis by the traveler. PMID:23301027
Maltha, Jessica; Gillet, Philippe; Heutmekers, Marloes; Bottieau, Emmanuel; Van Gompel, Alfons; Jacobs, Jan
2013-01-01
Introduction: In the past, malaria rapid diagnostic tests (RDTs) for self-diagnosis by travelers were considered suboptimal due to poor performance. Nowadays, RDTs for self-diagnosis are marketed and available through the internet. The present study assessed RDT products marketed for self-diagnosis for diagnostic accuracy and quality of labeling, content and instructions for use (IFU). Methods: Diagnostic accuracy of eight RDT products was assessed with a panel of stored whole blood samples comprising the four Plasmodium species (n = 90) as well as Plasmodium-negative samples (n = 10). IFUs were assessed for quality of description of procedure and interpretation and for lay-out and readability level. Errors in packaging and content were recorded. Results: Two products gave false-positive test lines in 70% and 80% of Plasmodium-negative samples, precluding their use. Of the remaining products, 4/6 had good to excellent sensitivity for the diagnosis of Plasmodium falciparum (98.2%–100.0%) and Plasmodium vivax (93.3%–100.0%). Sensitivity for Plasmodium ovale and Plasmodium malariae diagnosis was poor (6.7%–80.0%). All but one product yielded false-positive test lines after reading beyond the recommended reading time. Problems with labeling (target antigens not specified, n = 3) and content (desiccant with no humidity indicator, n = 6) were observed. IFUs had major shortcomings in description of test procedure and interpretation, poor readability and lay-out, and user-unfriendly typography. Strategic issues (e.g. the need for repeat testing and reasons for false-negative tests) were not addressed in any of the IFUs. Conclusion: Diagnostic accuracy of RDTs for self-diagnosis was variable, with only 4/8 RDT products being reliable for the diagnosis of P. falciparum and P. vivax, and none for P. ovale and P. malariae. RDTs for self-diagnosis need improvements in IFUs (content and user-friendliness), labeling and content before they can be considered for self-diagnosis by the traveler. PMID:23301027
Ingersoll, Christopher G.; Haverland, Pamela S.; Brunson, Eric L.; Canfield, Timothy J.; Dwyer, F. James; Henke, Chris; Kemble, Nile E.; Mount, David R.; Fox, Richard G.
1996-01-01
Procedures are described for calculating and evaluating sediment effect concentrations (SECs) using laboratory data on the toxicity of contaminants associated with field-collected sediment to the amphipod Hyalella azteca and the midge Chironomus riparius. SECs are defined as the concentrations of individual contaminants in sediment below which toxicity is rarely observed and above which toxicity is frequently observed. The objective of the present study was to develop SECs to classify toxicity data for Great Lakes sediment samples tested with Hyalella azteca and Chironomus riparius. The SEC database included samples from additional sites across the United States in order to make it as robust as possible. Three types of SECs were calculated from these data: (1) Effect Range Low (ERL) and Effect Range Median (ERM), (2) Threshold Effect Level (TEL) and Probable Effect Level (PEL), and (3) No Effect Concentration (NEC). We were able to calculate SECs primarily for total metals, simultaneously extracted metals, polychlorinated biphenyls (PCBs), and polycyclic aromatic hydrocarbons (PAHs). The ranges of concentrations in sediment were too narrow in our database to adequately evaluate SECs for butyltins, methyl mercury, polychlorinated dioxins and furans, or chlorinated pesticides. About 60 to 80% of the sediment samples in the database are correctly classified as toxic or not toxic, depending on the type of SEC evaluated. ERMs and ERLs are generally as reliable as paired PELs and TELs at classifying both toxic and non-toxic samples in our database. Reliability of the SECs in terms of correctly classifying sediment samples is similar between ERMs and NECs; however, ERMs minimize Type I error (false positives) relative to ERLs and minimize Type II error (false negatives) relative to NECs. Correct classification of samples can be improved by using only the most reliable individual SECs for chemicals (i.e., those with a higher percentage of correct classification). SECs calculated using sediment concentrations normalized to total organic carbon (TOC) concentrations did not improve reliability compared to SECs calculated using dry-weight concentrations. The range of TOC concentrations in our database was relatively narrow compared to the ranges of contaminant concentrations; therefore, normalizing dry-weight concentrations to a relatively narrow range of TOC concentrations had little influence on the relative concentrations of contaminants among samples. When SECs are used as a preliminary screen to predict the potential for toxicity in the absence of actual toxicity testing, a low number of SEC exceedances should be used to minimize the potential for false negatives; however, this increases the risk of accepting more false positives.
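To make the screening logic concrete, here is a hedged sketch of SEC-based classification: a sample is predicted toxic when any contaminant exceeds its ERM-type threshold, and predictions are tallied against observed toxicity. The thresholds, contaminants, and samples below are illustrative assumptions, not values from the study:

```python
# Hypothetical ERM-style thresholds (dry-weight concentration units).
erm = {"Cd": 9.6, "Pb": 218.0, "totalPAH": 44792.0}

# (measured concentrations, observed toxicity) pairs; all values invented.
samples = [
    ({"Cd": 1.2, "Pb": 30.0, "totalPAH": 4000.0}, False),
    ({"Cd": 12.0, "Pb": 500.0, "totalPAH": 60000.0}, True),
    ({"Cd": 11.0, "Pb": 100.0, "totalPAH": 10000.0}, False),  # a false positive
]

tp = fp = tn = fn = 0
for conc, toxic in samples:
    predicted = any(conc[c] > erm[c] for c in erm)  # any exceedance flags the sample
    if predicted and toxic:
        tp += 1
    elif predicted and not toxic:
        fp += 1  # Type I error
    elif not predicted and toxic:
        fn += 1  # Type II error
    else:
        tn += 1
print("correctly classified:", (tp + tn) / len(samples))
```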
Performance evaluation for screening laboratories of the Asia-Pacific region.
Hannon, W Harry
2003-01-01
The Centers for Disease Control and Prevention (CDC) has a long history of involvement in quality assurance (QA) activities in support of newborn screening laboratories. Since 1978, CDC's Newborn Screening Quality Assurance Program (NSQAP) has distributed dried-blood spot (DBS) materials for external QA and has maintained related projects to serve newborn screening laboratories. The first DBS materials were distributed for congenital hypothyroidism screening in 1978, and by 2001 NSQAP had expanded to over 30 disorders and to performance monitoring for all filter paper production lots from approved commercial sources. In 2001, there were 250 active NSQAP participants: 167 laboratories from 45 countries and 83 laboratories in the United States. Of these laboratories, 31 are from the Asia-Pacific region, representing nine countries and screening primarily for two disorders. In 1999, US laboratories had more errors on Performance Evaluation (PE) specimens than other laboratories, but in 2000 US laboratories had fewer errors. International laboratories reported 0.3% false-negative PE clinical assessments for congenital hypothyroidism and 0.5% for phenylketonuria in 2000. A paperless PE data-reporting operation using an Internet website has recently been implemented.
5 CFR 1601.34 - Error correction.
Code of Federal Regulations, 2011 CFR
2011-01-01
Title 5 (Administrative Personnel), § 1601.34, Error correction. Federal Retirement Thrift Investment Board, Participants' Choices of TSP Funds: … in the wrong investment fund, will be corrected in accordance with the error correction regulations …
ERIC Educational Resources Information Center
Flouri, Eirini; Panourgia, Constantina
2011-01-01
The aim of this study was to test whether negative cognitive errors (overgeneralizing, catastrophizing, selective abstraction, and personalizing) mediate the moderator effect of non-verbal cognitive ability on the association between adverse life events (life stress) and emotional and behavioral problems in adolescence. The sample consisted of 430…
ERIC Educational Resources Information Center
Inzlicht, Michael; Al-Khindi, Timour
2012-01-01
Performance monitoring in the anterior cingulate cortex (ACC) has largely been viewed as a cognitive, computational process devoid of emotion. A growing body of research, however, suggests that performance is moderated by motivational engagement and that a signal generated by the ACC, the error-related negativity (ERN), may partially reflect a…
ERIC Educational Resources Information Center
Clawson, Ann; South, Mikle; Baldwin, Scott A.; Larson, Michael J.
2017-01-01
We examined the error-related negativity (ERN) as an endophenotype of ASD by comparing the ERN in families of ASD probands to control families. We hypothesized that ASD probands and families would display reduced-amplitude ERN relative to controls. Participants included 148 individuals within 39 families consisting of a mother, father, sibling,…
Negative Input for Grammatical Errors: Effects after a Lag of 12 Weeks
ERIC Educational Resources Information Center
Saxton, Matthew; Backley, Phillip; Gallaway, Clare
2005-01-01
Effects of negative input for 13 categories of grammatical error were assessed in a longitudinal study of naturalistic adult-child discourse. Two-hour samples of conversational interaction were obtained at two points in time, separated by a lag of 12 weeks, for 12 children (mean age 2;0 at the start). The data were interpreted within the framework…
Sommargren, Gary E.; Campbell, Eugene W.
2004-03-09
To measure a convex mirror, a reference beam and a measurement beam are both provided through a single optical fiber. A positive auxiliary lens is placed in the system to give a converging wavefront onto the convex mirror under test. A measurement is taken that includes the aberrations of the convex mirror as well as the errors due to two transmissions through the positive auxiliary lens. A second measurement provides the information to eliminate this error. A negative lens can also be measured in a similar way. Again, there are two measurement set-ups. A reference beam is provided from a first optical fiber and a measurement beam is provided from a second optical fiber. A positive auxiliary lens is placed in the system to provide a converging wavefront from the reference beam onto the negative lens under test. The measurement beam is combined with the reference wavefront and is analyzed by standard methods. This measurement includes the aberrations of the negative lens, as well as the errors due to a single transmission through the positive auxiliary lens. A second measurement provides the information to eliminate this error.
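Read at face value, the two-measurement scheme amounts to subtracting the auxiliary-lens contribution from the combined wavefront. In symbols, with the decomposition inferred from the abstract rather than quoted from the patent:

```latex
% W_1: first measurement; W_mirror: mirror aberrations;
% W_lens: auxiliary-lens error (traversed twice for the convex mirror)
W_1 = W_{\mathrm{mirror}} + 2\,W_{\mathrm{lens}}, \qquad
W_2 \;\rightarrow\; W_{\mathrm{lens}}, \qquad
W_{\mathrm{mirror}} = W_1 - 2\,W_{\mathrm{lens}}.
```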
Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.
2014-01-01
This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
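For reference, the classical Benjamini and Hochberg (1995) linear step-up procedure that the empirical Bayes approach is benchmarked against can be sketched in a few lines; this is a generic implementation, not the authors' resampling code:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Linear step-up procedure controlling FDR = E[Vn/(Vn + Sn)] at level alpha.
    Rejects the k smallest p-values, where k is the largest index with
    p_(k) <= (k/m) * alpha."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    k = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.60]))  # rejects first two
```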
Forster, Sarah E; Zirnheld, Patrick; Shekhar, Anantha; Steinhauer, Stuart R; O'Donnell, Brian F; Hetrick, William P
2017-09-01
Signals carried by the mesencephalic dopamine system and conveyed to anterior cingulate cortex are critically implicated in probabilistic reward learning and performance monitoring. A common evaluative mechanism purportedly subserves both functions, giving rise to homologous medial frontal negativities in feedback- and response-locked event-related brain potentials (the feedback-related negativity (FRN) and the error-related negativity (ERN), respectively), reflecting dopamine-dependent prediction error signals to unexpectedly negative events. Consistent with this model, the dopamine receptor antagonist, haloperidol, attenuates the ERN, but effects on FRN have not yet been evaluated. ERN and FRN were recorded during a temporal interval learning task (TILT) following randomized, double-blind administration of haloperidol (3 mg; n = 18), diphenhydramine (an active control for haloperidol; 25 mg; n = 20), or placebo (n = 21) to healthy controls. Centroparietal positivities, the Pe and feedback-locked P300, were also measured and correlations between ERP measures and behavioral indices of learning, overall accuracy, and post-error compensatory behavior were evaluated. We hypothesized that haloperidol would reduce ERN and FRN, but that ERN would uniquely track automatic, error-related performance adjustments, while FRN would be associated with learning and overall accuracy. As predicted, ERN was reduced by haloperidol and in those exhibiting less adaptive post-error performance; however, these effects were limited to ERNs following fast timing errors. In contrast, the FRN was not affected by drug condition, although increased FRN amplitude was associated with improved accuracy. Significant drug effects on centroparietal positivities were also absent. Our results support a functional and neurobiological dissociation between the ERN and FRN.
Analyzing False Positives of Four Questions in the Force Concept Inventory
ERIC Educational Resources Information Center
Yasuda, Jun-ichro; Mae, Naohiro; Hull, Michael M.; Taniguchi, Masa-aki
2018-01-01
In this study, we analyze the systematic error from false positives of the Force Concept Inventory (FCI). We compare the systematic errors of question 6 (Q.6), Q.7, and Q.16, for which clearly erroneous reasoning has been found, with Q.5, for which clearly erroneous reasoning has not been found. We determine whether or not a correct response to a…
Blackmore, C Craig; Terasawa, Teruhiko
2006-02-01
Error in radiology can be reduced by standardizing the interpretation of imaging studies to the optimum sensitivity and specificity. In this report, the authors demonstrate how the optimal interpretation of appendiceal computed tomography (CT) can be determined and how it varies in different clinical scenarios. Utility analysis and receiver operating characteristic (ROC) curve modeling were used to determine the trade-off between false-positive and false-negative test results to determine the optimal operating point on the ROC curve for the interpretation of appendicitis CT. Modeling was based on a previous meta-analysis for the accuracy of CT and on literature estimates of the utilities of various health states. The posttest probability of appendicitis was derived using Bayes's theorem. At a low prevalence of disease (screening), appendicitis CT should be interpreted at high specificity (97.7%), even at the expense of lower sensitivity (75%). Conversely, at a high probability of disease, high sensitivity (97.4%) is preferred (specificity 77.8%). When the clinical diagnosis of appendicitis is equivocal, CT interpretation should emphasize both sensitivity and specificity (sensitivity 92.3%, specificity 91.5%). Radiologists can potentially decrease medical error and improve patient health by varying the interpretation of appendiceal CT on the basis of the clinical probability of appendicitis. This report is an example of how utility analysis can be used to guide radiologists in the interpretation of imaging studies and provide guidance on appropriate targets for the standardization of interpretation.
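The Bayes step underlying this analysis is easy to reproduce. A sketch using the screening-scenario operating point quoted above (sensitivity 75%, specificity 97.7%); the 10% pretest probability is an illustrative assumption, not a figure from the report:

```python
def post_test_probability(prior, sensitivity, specificity, positive=True):
    """Posttest probability of disease given a test result, via Bayes' theorem."""
    if positive:
        num = sensitivity * prior
        den = num + (1 - specificity) * (1 - prior)
    else:
        num = (1 - sensitivity) * prior
        den = num + specificity * (1 - prior)
    return num / den

# High-specificity screening interpretation; 0.10 prior is assumed for illustration.
print(post_test_probability(0.10, 0.75, 0.977, positive=True))   # ~0.78
print(post_test_probability(0.10, 0.75, 0.977, positive=False))  # ~0.03
```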
Prospect theory does not describe the feedback-related negativity value function.
Sambrook, Thomas D; Roser, Matthew; Goslin, Jeremy
2012-12-01
Humans handle uncertainty poorly. Prospect theory accounts for this with a value function in which possible losses are overweighted compared to possible gains, and the marginal utility of rewards decreases with size. fMRI studies have explored the neural basis of this value function. A separate body of research claims that prediction errors are calculated by midbrain dopamine neurons. We investigated whether the prospect theoretic effects shown in behavioral and fMRI studies were present in midbrain prediction error coding by using the feedback-related negativity, an ERP component believed to reflect midbrain prediction errors. Participants' stated satisfaction with outcomes followed prospect theory but their feedback-related negativity did not, instead showing no effect of marginal utility and greater sensitivity to potential gains than losses. Copyright © 2012 Society for Psychophysiological Research.
Valence and the development of immediate and long-term false memory illusions.
Howe, Mark L; Candel, Ingrid; Otgaar, Henry; Malone, Catherine; Wimmer, Marina C
2010-01-01
Across five experiments we examined the role of valence in children's and adults' true and false memories. Using the Deese/Roediger-McDermott paradigm and either neutral or negative-emotional lists, both adults' (Experiment 1) and children's (Experiment 2) true recall and recognition was better for neutral than negative items, and although false recall was also higher for neutral items, false recognition was higher for negative items. The last three experiments examined adults' (Experiment 3) and children's (Experiments 4 and 5) 1-week long-term recognition of neutral and negative-emotional information. The results replicated the immediate recall and recognition findings from the first two experiments. More important, these experiments showed that although true recognition decreased over the 1-week interval, false recognition of neutral items remained unchanged whereas false recognition of negative-emotional items increased. These findings are discussed in terms of theories of emotion and memory as well as their forensic implications.
Rath, S; Panda, M; Sahu, M C; Padhy, R N
2015-09-01
Conventional methods for the diagnosis of tinea capitis (paediatric ringworm), the microscopic and culture tests, were evaluated quantitatively with Bayes' rule. This analysis helps quantify the pervasive errors in each diagnostic method, particularly the microscopic method, because long-term treatment with a particular antifungal chemotherapy is involved in eradicating the infection. Secondly, the analysis of clinical data helps to quantify the fallibility of the microscopic test method, with the culture test taken as the gold standard. Test results of 51 paediatric patients fell into 4 categories: 21 samples were true positive (both tests positive) and 13 were true negative; the remaining samples comprised 14 false positives (microscopic test positive with culture test negative) and 3 false negatives (microscopic test negative with culture test positive). The prevalence of tinea infection was 47.01% in the population of 51 children. The microscopic test was 87.5% efficient in arriving at a positive result when the culture test was positive, and 76.4% efficient in arriving at a negative result when the culture test was negative. However, the post-test probability that both microscopic and culture tests correctly distinguish a sample from a sick versus a healthy child was 71.5%. Since the sensitivity of the analysis is 87.5%, microscopic test positivity would be easier to detect in the presence of infection. In conclusion, Trichophyton rubrum was the most prevalent species; the sensitivity and specificity of treating the infection with antifungal therapy before confirmation by the culture method remain 0.8751 and 0.7642, respectively. A more reliable diagnosis of fungal infection could be achieved by modern molecular methods (matrix-assisted laser desorption ionisation-time of flight mass spectrometry, fluorescence in situ hybridization, enzyme-linked immunosorbent assay [ELISA], restriction fragment length polymorphism, or DNA/RNA probes of known fungal taxa) in advanced laboratories. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
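The 2x2 bookkeeping behind these figures can be reproduced directly from the reported counts. Note that the specificity implied by the raw counts (13/27, about 0.48) does not match the published 0.7642, so treat this as a sketch of the standard definitions rather than a re-analysis:

```python
tp, tn, fp, fn = 21, 13, 14, 3   # counts reported in the abstract

sensitivity = tp / (tp + fn)                    # 0.875, matching the reported 87.5%
specificity = tn / (tn + fp)                    # ~0.481 from these raw counts
ppv = tp / (tp + fp)                            # positive predictive value
npv = tn / (tn + fn)                            # negative predictive value
prevalence = (tp + fn) / (tp + tn + fp + fn)    # ~0.471, close to the reported 47.01%
print(sensitivity, specificity, ppv, npv, prevalence)
```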
Foley, Mary Ann; Bays, Rebecca Brooke; Foy, Jeffrey; Woodfield, Mila
2015-01-01
In three experiments, we examine the extent to which participants' memory errors are affected by the perceptual features of an encoding series and imagery generation processes. Perceptual features were examined by manipulating the features associated with individual items as well as the relationships among items. An encoding instruction manipulation was included to examine the effects of explicit requests to generate images. In all three experiments, participants falsely claimed to have seen pictures of items presented as words, committing picture misattribution errors. These misattribution errors were exaggerated when the perceptual resemblance between pictures and images was relatively high (Experiment 1) and when explicit requests to generate images were omitted from encoding instructions (Experiments 1 and 2). When perceptual cues made the thematic relationships among items salient, the level and pattern of misattribution errors were also affected (Experiments 2 and 3). Results address alternative views about the nature of internal representations resulting in misattribution errors and refute the idea that these errors reflect only participants' general impressions or beliefs about what was seen.
Aged-related Neural Changes during Memory Conjunction Errors
Giovanello, Kelly S.; Kensinger, Elizabeth A.; Wong, Alana T.; Schacter, Daniel L.
2013-01-01
Human behavioral studies demonstrate that healthy aging is often accompanied by increases in memory distortions or errors. Here we used event-related functional MRI to examine the neural basis of age-related memory distortions. We utilized the memory conjunction error paradigm, a laboratory procedure known to elicit high levels of memory errors. For older adults, right parahippocampal gyrus showed significantly greater activity during false than during accurate retrieval. We observed no regions in which activity was greater during false than during accurate retrieval for young adults. Young adults, however, showed significantly greater activity than old adults during accurate retrieval in right hippocampus. By contrast, older adults demonstrated greater activity than young adults during accurate retrieval in right inferior and middle prefrontal cortex. These data are consistent with the notion that age-related memory conjunction errors arise from dysfunction of hippocampal system mechanisms, rather than impairments in frontally-mediated monitoring processes. PMID:19445606
De Oliveira, Gildasio S; Rahmani, Rod; Fitzgerald, Paul C; Chang, Ray; McCarthy, Robert J
2013-04-01
Poor supervision of physician trainees can be detrimental not only to resident education but also to patient care and safety. Inadequate supervision has been associated with more frequent deaths of patients under the care of junior residents. We hypothesized that residents reporting more medical errors would also report lower supervision quality scores than those reporting fewer medical errors. The primary objective of this study was to evaluate the association between the frequency of medical errors reported by residents and their perceived quality of faculty supervision. A cross-sectional nationwide survey was sent to 1000 residents randomly selected from anesthesiology training departments across the United States. Residents from 122 residency programs were invited to participate; the median (interquartile range) per institution was 7 (4-11). Participants were asked to complete a survey assessing demography, perceived quality of faculty supervision, and perceived causes of inadequate supervision. Responses to the statements "I perform procedures for which I am not properly trained," "I make mistakes that have negative consequences for the patient," and "I have made a medication error (drug or incorrect dose) in the last year" were used to assess error rates. Average supervision scores were determined using the De Oliveira Filho et al. scale and compared among the frequency of self-reported error categories using the Kruskal-Wallis test. Six hundred four residents responded to the survey (60.4%). Forty-five (7.5%) of the respondents reported performing procedures for which they were not properly trained, 24 (4%) reported having made mistakes with negative consequences to patients, and 16 (3%) reported medication errors in the last year having occurred multiple times or often. Supervision scores were inversely correlated with the frequency of reported errors for all 3 questions evaluating errors. At a cutoff value of 3, supervision scores demonstrated an overall accuracy (area under the curve [99% confidence interval]) of 0.81 (0.73-0.86), 0.89 (0.77-0.95), and 0.93 (0.77-0.98) for predicting a response of multiple times or often to the questions on performing procedures without proper training, mistakes with negative consequences to patients, and medication errors in the last year, respectively. Anesthesiology trainees who reported a greater incidence of medical errors with negative consequences to patients and drug errors also reported lower scores for supervision by faculty. Our findings suggest that further studies of the association between supervision and patient safety are warranted. (Anesth Analg 2013;116:892-7)
Dysfunctional error-related processing in incarcerated youth with elevated psychopathic traits
Maurer, J. Michael; Steele, Vaughn R.; Cope, Lora M.; Vincent, Gina M.; Stephen, Julia M.; Calhoun, Vince D.; Kiehl, Kent A.
2016-01-01
Adult psychopathic offenders show an increased propensity towards violence, impulsivity, and recidivism. A subsample of youth with elevated psychopathic traits represent a particularly severe subgroup characterized by extreme behavioral problems and comparable neurocognitive deficits as their adult counterparts, including perseveration deficits. Here, we investigate response-locked event-related potential (ERP) components (the error-related negativity [ERN/Ne] related to early error-monitoring processing and the error-related positivity [Pe] involved in later error-related processing) in a sample of incarcerated juvenile male offenders (n = 100) who performed a response inhibition Go/NoGo task. Psychopathic traits were assessed using the Hare Psychopathy Checklist: Youth Version (PCL:YV). The ERN/Ne and Pe were analyzed with classic windowed ERP components and principal component analysis (PCA). Using linear regression analyses, PCL:YV scores were unrelated to the ERN/Ne, but were negatively related to Pe mean amplitude. Specifically, the PCL:YV Facet 4 subscale reflecting antisocial traits emerged as a significant predictor of reduced amplitude of a subcomponent underlying the Pe identified with PCA. This is the first evidence to suggest a negative relationship between adolescent psychopathy scores and Pe mean amplitude. PMID:26930170
Buzzell, George A; Troller-Renfree, Sonya V; Barker, Tyson V; Bowman, Lindsay C; Chronis-Tuscano, Andrea; Henderson, Heather A; Kagan, Jerome; Pine, Daniel S; Fox, Nathan A
2017-12-01
Behavioral inhibition (BI) is a temperament identified in early childhood that is a risk factor for later social anxiety. However, mechanisms underlying the development of social anxiety remain unclear. To better understand the emergence of social anxiety, longitudinal studies investigating changes at the behavioral and neural levels are needed. BI was assessed in the laboratory at 2 and 3 years of age (N = 268). Children returned at 12 years, and an electroencephalogram was recorded while children performed a flanker task under 2 conditions: once while believing they were being observed by peers and once while not being observed. This methodology isolated changes in error monitoring (error-related negativity) and behavior (post-error reaction time slowing) as a function of social context. At 12 years, current social anxiety symptoms and lifetime diagnoses of social anxiety were obtained. Childhood BI prospectively predicted social-specific error-related negativity increases and social anxiety symptoms in adolescence; these symptoms directly related to clinical diagnoses. Serial mediation analysis showed that social error-related negativity changes explained relations between BI and social anxiety symptoms (n = 107) and diagnosis (n = 92), but only insofar as social context also led to increased post-error reaction time slowing (a measure of error preoccupation); this model was not significantly related to generalized anxiety. Results extend prior work on socially induced changes in error monitoring and error preoccupation. These measures could index a neurobehavioral mechanism linking BI to adolescent social anxiety symptoms and diagnosis. This mechanism could relate more strongly to social than to generalized anxiety in the peri-adolescent period. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. All rights reserved.
Flouri, Eirini; Panourgia, Constantina
2011-06-01
The aim of this study was to test for gender differences in how negative cognitive errors (overgeneralizing, catastrophizing, selective abstraction, and personalizing) mediate the association between adverse life events and adolescents' emotional and behavioural problems (measured with the Strengths and Difficulties Questionnaire). The sample consisted of 202 boys and 227 girls (aged 11-15 years) from three state secondary schools in disadvantaged areas in one county in the South East of England. Control variables were age, ethnicity, special educational needs, exclusion history, family structure, family socio-economic disadvantage, and verbal cognitive ability. Adverse life events were measured with Tiet et al.'s (1998) Adverse Life Events Scale. For both genders, we assumed a pathway from adverse life events to emotional and behavioural problems via cognitive errors. We found no gender differences in life adversity, cognitive errors, total difficulties, peer problems, or hyperactivity. In both boys and girls, even after adjustment for controls, cognitive errors were related to total difficulties and emotional symptoms, and life adversity was related to total difficulties and conduct problems. The life adversity/conduct problems association was not explained by negative cognitive errors in either gender. However, we found gender differences in how adversity and cognitive errors produced hyperactivity and internalizing problems. In particular, life adversity was not related, after adjustment for controls, to hyperactivity in girls and to peer problems and emotional symptoms in boys. Cognitive errors fully mediated the effect of life adversity on hyperactivity in boys and on peer and emotional problems in girls.
Multiple imputation of missing fMRI data in whole brain analysis
Vaden, Kenneth I.; Gebregziabher, Mulugeta; Kuchinsky, Stefanie E.; Eckert, Mark A.
2012-01-01
Whole brain fMRI analyses rarely include the entire brain because of missing data that result from data acquisition limits and susceptibility artifact, in particular. This missing data problem is typically addressed by omitting voxels from analysis, which may exclude brain regions that are of theoretical interest and increase the potential for Type II error at cortical boundaries or Type I error when spatial thresholds are used to establish significance. Imputation could significantly expand statistical map coverage, increase power, and enhance interpretations of fMRI results. We examined multiple imputation for group level analyses of missing fMRI data using methods that leverage the spatial information in fMRI datasets for both real and simulated data. Available case analysis, neighbor replacement, and regression based imputation approaches were compared in a general linear model framework to determine the extent to which these methods quantitatively (effect size) and qualitatively (spatial coverage) increased the sensitivity of group analyses. In both real and simulated data analysis, multiple imputation provided 1) variance that was most similar to estimates for voxels with no missing data, 2) fewer false positive errors in comparison to mean replacement, and 3) fewer false negative errors in comparison to available case analysis. Compared to the standard analysis approach of omitting voxels with missing data, imputation methods increased brain coverage in this study by 35% (from 33,323 to 45,071 voxels). In addition, multiple imputation increased the size of significant clusters by 58% and the number of significant clusters across statistical thresholds, compared to the standard voxel omission approach. While neighbor replacement produced similar results, we recommend multiple imputation because it uses an informed sampling distribution to deal with missing data across subjects that can include neighbor values and other predictors. Multiple imputation is anticipated to be particularly useful for 1) large fMRI data sets with inconsistent missing voxels across subjects and 2) addressing the problem of increased artifact at ultra-high field, which significantly limits the extent of whole brain coverage and interpretations of results. PMID:22500925
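A minimal sketch of the multiple-imputation idea, not the authors' pipeline: impute each missing voxel several times from a sampling distribution informed by observed values, then pool the estimates with Rubin's rules. The toy imputation model below (sampling around the mean of observed values) is an assumption for illustration; a real analysis would use a regression-based model:

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_once(voxels, rng):
    """Fill missing voxel values by sampling around the mean of observed values.
    A toy stand-in for a regression-based imputation model."""
    filled = voxels.copy()
    missing = np.isnan(filled)
    obs = filled[~missing]
    filled[missing] = rng.normal(obs.mean(), obs.std(ddof=1), missing.sum())
    return filled

def pool_estimates(datasets):
    """Rubin's rules: pooled mean and total variance across m imputed datasets."""
    means = np.array([d.mean() for d in datasets])
    within = np.mean([d.var(ddof=1) / d.size for d in datasets])
    between = means.var(ddof=1)
    m = len(datasets)
    return means.mean(), within + (1 + 1 / m) * between

voxels = np.array([1.2, 0.9, np.nan, 1.1, np.nan, 1.0])  # hypothetical voxel values
imputed = [impute_once(voxels, rng) for _ in range(20)]  # m = 20 imputations
print(pool_estimates(imputed))
```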
NASA Astrophysics Data System (ADS)
Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Chen, Junchao; Hu, Weigang
2015-10-01
GafChromic RTQA2 film is a type of radiochromic film designed for light field and radiation field alignment. The aim of this study is to extend the application of RTQA2 film to the measurement of patient-specific quality assurance (QA) fields as a 2D relative dosimeter. Pre-irradiated and post-irradiated RTQA2 films were scanned in reflection mode using a flatbed scanner. A plan-based calibration (PBC) method utilized the mapping information of the calculated dose image and film grayscale image to create a dose versus pixel value calibration model. This model was used to calibrate the film grayscale image to the film relative dose image. The dose agreement between calculated and film dose images was analyzed by gamma analysis. To evaluate the feasibility of this method, eight clinically approved RapidArc cases (one abdomen cancer and seven head-and-neck cancer patients) were tested using this method. Moreover, three MLC gap errors and two MLC transmission errors were introduced to the eight RapidArc cases, respectively, to test the robustness of this method. The PBC method could overcome the film lot and post-exposure time variations of RTQA2 film to get a good 2D relative dose calibration result. The mean gamma passing rate of the eight patients was 97.90% ± 1.7%, which showed good dose consistency between calculated and film dose images. In the error test, the PBC method could over-calibrate the film, meaning some dose error in the film would be falsely corrected to keep the dose in the film consistent with the dose in the calculated dose image. This would then lead to a false negative result in the gamma analysis. In these cases, the derivative of the dose calibration curve would be non-monotonic, which would expose the dose abnormality. By using the PBC method, we extended the application of the more economical RTQA2 film to patient-specific QA. The robustness of the PBC method has been improved by analyzing the monotonicity of the derivative of the calibration curve.
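The monotonicity check on the derivative of the calibration curve can be sketched as follows; the dose-versus-pixel curve here is a hypothetical stand-in for a fitted PBC relation, not data from the paper:

```python
import numpy as np

# Hypothetical dose-vs-pixel calibration samples; a real PBC curve would be
# fitted from the mapped plan-dose and film-grayscale images.
pixel = np.linspace(0.0, 1.0, 50)
dose = 2.0 * pixel + 0.3 * np.sin(6.0 * pixel)   # mildly wiggly toy curve

derivative = np.gradient(dose, pixel)            # slope of the calibration curve
second = np.gradient(derivative, pixel)          # sign changes => non-monotonic slope
monotonic = bool(np.all(second >= 0) or np.all(second <= 0))
print("derivative monotonic:", monotonic)        # False flags possible over-calibration
```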
Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas
2012-08-01
In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
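As a hedged illustration of the averaging involved, the sketch below numerically evaluates the mean bit error probability for a single IM/DD on-off-keying branch over unit-mean negative exponential fading, E_I[Q(I*sqrt(SNR))]; the Letter itself treats the MIMO case with an approximate closed-form expression:

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def average_bep(snr_db):
    """Average bit error probability over negative exponential turbulence:
    integrate the conditional BEP Q(i * sqrt(snr)) against the unit-mean
    exponential irradiance density exp(-i). Single-branch simplification."""
    snr = 10 ** (snr_db / 10)
    integrand = lambda i: norm.sf(i * np.sqrt(snr)) * np.exp(-i)
    value, _ = integrate.quad(integrand, 0, np.inf)
    return value

for db in (10, 20, 30):
    print(db, "dB:", average_bep(db))
```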
Effects of learning climate and registered nurse staffing on medication errors.
Chang, Yunkyung; Mark, Barbara
2011-01-01
Despite increasing recognition of the significance of learning from errors, little is known about how learning climate contributes to error reduction. The purpose of this study was to investigate whether learning climate moderates the relationship between error-producing conditions and medication errors. A cross-sectional descriptive study was done using data from 279 nursing units in 146 randomly selected hospitals in the United States. Error-producing conditions included work environment factors (work dynamics and nurse mix), team factors (communication with physicians and nurses' expertise), personal factors (nurses' education and experience), patient factors (age, health status, and previous hospitalization), and medication-related support services. Poisson models with random effects were used with the nursing unit as the unit of analysis. A significant negative relationship was found between learning climate and medication errors. It also moderated the relationship between nurse mix and medication errors: When learning climate was negative, having more registered nurses was associated with fewer medication errors. However, no relationship was found between nurse mix and medication errors at either positive or average levels of learning climate. Learning climate did not moderate the relationship between work dynamics and medication errors. The way nurse mix affects medication errors depends on the level of learning climate. Nursing units with fewer registered nurses and frequent medication errors should examine their learning climate. Future research should be focused on the role of learning climate as related to the relationships between nurse mix and medication errors.
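A simplified sketch of the modeling setup: a fixed-effects Poisson GLM with a climate-by-nurse-mix interaction on synthetic unit-level data. The study used Poisson models with random effects for nursing units, which this sketch omits; all variable names and data are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic unit-level data mimicking the design: 279 nursing units.
rng = np.random.default_rng(1)
n = 279
df = pd.DataFrame({
    "climate": rng.normal(size=n),              # learning climate score
    "rn_mix": rng.uniform(0.3, 0.9, size=n),    # proportion of registered nurses
})
# Toy data-generating process: more RNs help mainly when climate is negative.
rate = np.exp(0.5 - 0.3 * df.climate - 0.8 * df.rn_mix * (df.climate < 0))
df["errors"] = rng.poisson(rate)

X = sm.add_constant(pd.DataFrame({
    "climate": df.climate,
    "rn_mix": df.rn_mix,
    "interaction": df.climate * df.rn_mix,      # moderation term
}))
model = sm.GLM(df["errors"], X, family=sm.families.Poisson()).fit()
print(model.summary())
```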
Joch, Michael; Hegele, Mathias; Maurer, Heiko; Müller, Hermann; Maurer, Lisa Katharina
2017-07-01
The error-related negativity (Ne/ERN) is an event-related potential in the electroencephalogram (EEG) that correlates with error processing. Its appearance before terminal external error information suggests that the Ne/ERN is indicative of predictive processes in the evaluation of errors. The aim of the present study was to examine the Ne/ERN in a complex motor task and, in particular, to rule out explanations for the Ne/ERN other than error prediction processes. To this end, we focused on the dependency of the Ne/ERN on visual monitoring of the action outcome after movement termination but before result feedback (action effect monitoring). Participants performed a semi-virtual throwing task, using a manipulandum to throw a virtual ball displayed on a computer screen to hit a target object. Visual feedback of the ball flying to the target was masked to prevent action effect monitoring. Participants received static feedback about the action outcome (850 ms) after each trial. We found a significant negative deflection in the average EEG curves of the error trials peaking at ~250 ms after ball release, i.e., before error feedback. Furthermore, this Ne/ERN signal did not depend on visual ball-flight monitoring after release. We conclude that the Ne/ERN has the potential to indicate error prediction in motor tasks and that it exists even in the absence of action effect monitoring. NEW & NOTEWORTHY In this study, we separate different kinds of possible contributors to an EEG error correlate (Ne/ERN) in a throwing task. We tested the influence of action effect monitoring on the Ne/ERN amplitude in the EEG. We used a task that allows us to restrict movement correction and action effect monitoring and to control the onset of result feedback. We ascribe the Ne/ERN to predictive error processing in which a conscious feeling of failure is not a prerequisite. Copyright © 2017 the American Physiological Society.
Bernatowicz, K; Keall, P; Mishra, P; Knopf, A; Lomax, A; Kipritidis, J
2015-01-01
Prospective respiratory-gated 4D CT has been shown to reduce tumor image artifacts by up to 50% compared to conventional 4D CT. However, to date no studies have quantified the impact of gated 4D CT on normal lung tissue imaging, which is important in performing dose calculations based on accurate estimates of lung volume and structure. To determine the impact of gated 4D CT on thoracic image quality, the authors developed a novel simulation framework incorporating a realistic deformable digital phantom driven by patient tumor motion patterns. Based on this framework, the authors test the hypothesis that respiratory-gated 4D CT can significantly reduce lung imaging artifacts. Our simulation framework synchronizes the 4D extended cardiac torso (XCAT) phantom with tumor motion data in a quasi real-time fashion, allowing simulation of three 4D CT acquisition modes featuring different levels of respiratory feedback: (i) "conventional" 4D CT that uses a constant imaging and couch-shift frequency, (ii) "beam paused" 4D CT that interrupts imaging to avoid oversampling at a given couch position and respiratory phase, and (iii) "respiratory-gated" 4D CT that triggers acquisition only when the respiratory motion fulfills phase-specific displacement gating windows based on prescan breathing data. Our framework generates a set of ground truth comparators, representing the average XCAT anatomy during beam-on for each of ten respiratory phase bins. Based on this framework, the authors simulated conventional, beam-paused, and respiratory-gated 4D CT images using tumor motion patterns from seven lung cancer patients across 13 treatment fractions, with a simulated 5.5 cm³ spherical lesion. Normal lung tissue image quality was quantified by comparing simulated and ground truth images in terms of overall mean square error (MSE) intensity difference, threshold-based lung volume error, and fractional false positive/false negative rates. Averaged across all simulations and phase bins, respiratory gating reduced overall thoracic MSE by 46% compared to conventional 4D CT (p ≈ 10⁻¹⁹). Gating leads to small but significant (p < 0.02) reductions in lung volume errors (1.8%-1.4%), false positives (4.0%-2.6%), and false negatives (2.7%-1.3%). These percentage reductions correspond to gating reducing image artifacts by 24-90 cm³ of lung tissue. Similar to earlier studies, gating reduced patient image dose by up to 22%, but with scan time increased by up to 135%. Beam-paused 4D CT did not significantly impact normal lung tissue image quality, but did yield similar dose reductions as respiratory gating, without the added cost in scanning time. For a typical 6 L lung, respiratory-gated 4D CT can reduce image artifacts affecting up to 90 cm³ of normal lung tissue compared to conventional acquisition. This image improvement could have important implications for dose calculations based on 4D CT. Where image quality is less critical, beam-paused 4D CT is a simple strategy to reduce imaging dose without sacrificing acquisition time.
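The image-quality metrics named above (overall MSE plus threshold-based lung volume error and false positive/negative voxel rates) can be sketched as below; the intensity threshold and synthetic volumes are assumptions for illustration, not the study's parameters:

```python
import numpy as np

def lung_image_metrics(sim, truth, threshold=-500.0):
    """Compare a simulated 4D CT phase image with its ground truth:
    overall MSE plus threshold-based lung-voxel false positive/negative
    rates and fractional lung volume error."""
    mse = np.mean((sim - truth) ** 2)
    lung_sim = sim < threshold        # 'lung' = low-intensity voxels (HU-like units)
    lung_true = truth < threshold
    fp = np.mean(lung_sim & ~lung_true)   # artifact voxels wrongly counted as lung
    fn = np.mean(~lung_sim & lung_true)   # lung voxels lost to artifact
    vol_err = abs(int(lung_sim.sum()) - int(lung_true.sum())) / lung_true.sum()
    return mse, fp, fn, vol_err

rng = np.random.default_rng(2)
truth = rng.normal(-700, 150, size=(32, 32, 32))          # hypothetical phantom volume
sim = truth + rng.normal(0, 60, size=truth.shape)         # hypothetical artifact noise
print(lung_image_metrics(sim, truth))
```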
Whitson, Bryan A; Groth, Shawn S; Odell, David D; Briones, Eleazar P; Maddaus, Michael A; D'Cunha, Jonathan; Andrade, Rafael S
2013-05-01
Mediastinal staging in patients with non-small cell lung cancer (NSCLC) with endobronchial ultrasound-guided fine-needle aspiration (EBUS-FNA) requires a high negative predictive value (NPV) (ie, a low false negative rate). We provide a conservative calculation of NPV that calls for caution in the interpretation of EBUS results. We retrospectively analyzed our prospectively gathered database (January 2007 to November 2011) to include NSCLC patients who underwent EBUS-FNA for mediastinal staging. We excluded patients with metastatic NSCLC and other malignancies. We assessed FNAs with rapid on-site evaluation (ROSE). The standard calculation is NPV = true negatives/(true negatives + false negatives). However, this definition ignores nondiagnostic samples. Nondiagnostic samples should be added to the NPV denominator because decisions based on nondiagnostic samples could be flawed. We conservatively calculated NPV for EBUS-FNA as NPV = true negatives/(true negatives + false negatives + nondiagnostic). We defined false negatives as negative FNAs with an NSCLC-positive surgical biopsy of the same site. Nondiagnostic FNAs were nonrepresentative of lymphoid tissue. We compared diagnostic performance with the inclusion and exclusion of nondiagnostic procedures. We studied 120 patients with NSCLC who underwent EBUS-FNA; 5 patients had false negative findings and 10 additional patients had nondiagnostic results. The NPV with and without inclusion of nondiagnostic samples was 65.9% and 85.3%, respectively. The inclusion of nondiagnostic specimens in the conservative, worst-case-scenario calculation of NPV for EBUS-FNA in NSCLC lowers the NPV from 85.3% to 65.9%. The true NPV is likely higher than 65.9%, as few nondiagnostic specimens are false negatives. Caution is imperative for the safe application of EBUS-FNA in NSCLC staging. Copyright © 2013 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
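The arithmetic is easy to verify: the reported percentages imply 29 true negatives (29/34 ≈ 85.3% and 29/44 ≈ 65.9%). That count is back-calculated from the published NPVs, not stated in the abstract:

```python
tn, fn, nondiagnostic = 29, 5, 10   # tn back-calculated from the reported NPVs

npv_standard = tn / (tn + fn)                       # 29/34 ≈ 0.853
npv_conservative = tn / (tn + fn + nondiagnostic)   # 29/44 ≈ 0.659
print(round(npv_standard, 3), round(npv_conservative, 3))
```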
Muthukumar, Alagarraju; Alatoom, Adnan; Burns, Susan; Ashmore, Jerry; Kim, Anne; Emerson, Brian; Bannister, Edward; Ansari, M Qasim
2015-01-01
To assess the false-positive and false-negative rates of a 4th-generation human immunodeficiency virus (HIV) assay, the Abbott ARCHITECT, vs 2 3rd-generation HIV assays, the Siemens Centaur and the Ortho-Clinical Diagnostics Vitros. We examined 123 patient specimens. In the first phase of the study, we compared 99 specimens that had a positive screening result via the 3rd-generation Vitros assay (10 positive, 82 negative, and 7 indeterminate via confirmatory immunofluorescent assay [IFA]/Western blot [WB] testing). In the second phase, we assessed 24 HIV-1 RNA-positive (positive result via the nucleic acid amplification test [NAAT] and negative/indeterminate results via the WB test) specimens harboring acute HIV infection. The 4th-generation ARCHITECT assay yielded fewer false-positive results (n = 2) than the 3rd-generation Centaur (n = 9; P = .02) and Vitros (n = 82; P < .001) assays. One confirmed positive case had a false-negative result via the Centaur assay. When specimens from the 24 patients with acute HIV-1 infection were tested, the ARCHITECT assay yielded fewer false-negative results (n = 5) than the Centaur (n = 10) (P = .13) and the other 3rd-generation tests (n = 16) (P = .002). This study indicates that the 4th-generation ARCHITECT HIV assay yields fewer false-positive and false-negative results than the 3rd-generation HIV assays we tested. Copyright © by the American Society for Clinical Pathology (ASCP).
High false-negative rate of anti-HCV among Egyptian patients on regular hemodialysis.
El-Sherif, Assem; Elbahrawy, Ashraf; Aboelfotoh, Atef; Abdelkarim, Magdy; Saied Mohammad, Abdel-Gawad; Abdallah, Abdallah Mahmoud; Mostafa, Sadek; Elmestikawy, Amr; Elwassief, Ahmed; Salah, Mohamed; Abdelbaseer, Mohamed Ali; Abdelwahab, Kouka Saadeldin
2012-07-01
Routine serological testing for hepatitis C virus (HCV) infection among hemodialysis (HD) patients is currently recommended. A dilemma exists regarding the value of serology, because some investigators have reported a high rate of false-negative serologic testing. In this study, we aimed to determine the false-negative rate of anti-HCV among Egyptian HD patients. Seventy-eight HD patients, negative for anti-HCV, anti-HIV, and hepatitis B surface antigen, were tested for HCV RNA by reverse transcriptase polymerase chain reaction (RT-PCR). In the next step, the viral load was quantified by real-time PCR in RT-PCR-positive patients. Risk factors for HCV infection, as well as clinical and biochemical indicators of liver disease, were compared between false-negative and true-negative anti-HCV HD patients. The frequency of false-negative anti-HCV was 17.9%. Frequency of blood transfusion, duration of HD, dialysis at multiple centers, and diabetes mellitus were not identified as risk factors for HCV infection. The frequency of false-negative results had a linear relation to the prevalence of HCV infection in the HD units. Timely identification of HCV within dialysis units is needed in order to lower the risk of HCV spread within them. The high false-negative rate of anti-HCV among HD patients in our study justifies large-scale testing of patients to precisely assess the effectiveness of nucleic acid amplification technology testing in screening HD patients. © 2012 The Authors. Hemodialysis International © 2012 International Society for Hemodialysis.
A real-time heat strain risk classifier using heart rate and skin temperature.
Buller, Mark J; Latzka, William A; Yokota, Miyo; Tharion, William J; Moran, Daniel S
2008-12-01
Heat injury is a real concern to workers engaged in physically demanding tasks in high heat strain environments. Several real-time physiological monitoring systems exist that can provide indices of heat strain, e.g. the physiological strain index (PSI), and provide alerts to medical personnel. However, these systems depend on core temperature measurement using expensive, ingestible thermometer pills. Seeking a better solution, we suggest the use of a model which can identify the probability that individuals are 'at risk' from heat injury using non-invasive measures. The intent is for the system to identify individuals who need closer monitoring or who should apply heat strain mitigation strategies. We generated a model that can identify 'at risk' (PSI ≥ 7.5) workers from measures of heart rate and chest skin temperature. The model was built using data from six previously published exercise studies in which some subjects wore chemical protective equipment. The model has an overall classification error rate of 10% with one false negative error (2.7%), and outperforms an earlier model and a least squares regression model with classification errors of 21% and 14%, respectively. Additionally, the model allows the classification criteria to be adjusted based on the task and the acceptable level of risk. We conclude that the model could be a valuable part of a multi-faceted heat strain management system.
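One way such a classifier can expose an adjustable risk criterion is a logistic form over the two non-invasive inputs; the functional form and the coefficients below are illustrative assumptions, not the published model:

```python
import math

def at_risk_probability(heart_rate, chest_skin_temp,
                        b0=-45.0, b_hr=0.18, b_tsk=0.55):
    """Probability that PSI >= 7.5 from heart rate (bpm) and chest skin
    temperature (deg C). Logistic form and coefficients are hypothetical."""
    z = b0 + b_hr * heart_rate + b_tsk * chest_skin_temp
    return 1.0 / (1.0 + math.exp(-z))

# Task-specific risk tolerance sets the alert cutoff; lowering it trades
# false negatives for false positives.
p = at_risk_probability(165, 37.5)
print(p, "ALERT" if p > 0.5 else "ok")
```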
Do Errors on Classroom Reading Tasks Slow Growth in Reading? Technical Report No. 404.
ERIC Educational Resources Information Center
Anderson, Richard C.; And Others
A pervasive finding from research on teaching and classroom learning is that a low rate of error on classroom tasks is associated with large year to year gains in achievement, particularly for reading in the primary grades. The finding of a negative relationship between error rate, especially rate of oral reading errors, and gains in reading…
Lahat, Ayelet; Lamm, Connie; Chronis-Tuscano, Andrea; Pine, Daniel S.; Henderson, Heather A.; Fox, Nathan A.
2014-01-01
Objective Behavioral inhibition (BI) is an early childhood temperament characterized by fearful responses to novelty and avoidance of social interactions. During adolescence, a subset of children with stable childhood BI develop social anxiety disorder and concurrently exhibit increased error monitoring. The current study examines whether increased error monitoring in seven-year-old behaviorally inhibited children prospectively predicts risk for symptoms of social phobia at age 9. Method Two hundred and ninety one children were characterized on BI at 24 and 36 months of age. Children were seen again at 7 years of age, where they performed a Flanker task, and event-related-potential (ERP) indices of response monitoring were generated. At age 9, self- and maternal-report of social phobia symptoms were obtained. Results Children high in BI, compared to those low in BI, displayed increased error monitoring at age 7, as indexed by larger (i.e., more negative) error-related negativity (ERN) amplitudes. Additionally, early BI was related to later childhood social phobia symptoms at age 9 among children with a large difference in amplitude between ERN and correct-response negativity (CRN) at age 7. Conclusions Heightened error monitoring predicts risk for later social phobia symptoms in children with high BI. Research assessing response monitoring in children with BI may refine our understanding of the mechanisms underlying risk for later anxiety disorders and inform prevention efforts. PMID:24655654
Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki
2014-01-01
Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors.
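PES itself is a simple contrast: mean reaction time on trials following errors minus mean reaction time on trials following correct responses. A sketch with hypothetical data:

```python
import numpy as np

def post_error_slowing(rt, correct):
    """Mean RT after errors minus mean RT after correct responses;
    positive values indicate post-error slowing (PES)."""
    rt = np.asarray(rt, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    post_error = rt[1:][~correct[:-1]]     # trials preceded by an error
    post_correct = rt[1:][correct[:-1]]    # trials preceded by a correct response
    return post_error.mean() - post_correct.mean()

rt = [410, 395, 520, 480, 430, 455, 400]               # hypothetical RTs (ms)
correct = [True, False, True, True, False, True, True]
print(post_error_slowing(rt, correct))                 # ~61 ms of slowing
```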
Assessment of error rates in acoustic monitoring with the R package monitoR
Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese
2016-01-01
Detecting population-scale reactions to climate change and land-use change may require monitoring many sites for many years, a process that is suited to an automated system. We developed and tested monitoR, an R package for long-term, multi-taxa acoustic monitoring programs. We tested monitoR with two northeastern songbird species: black-throated green warbler (Setophaga virens) and ovenbird (Seiurus aurocapilla). We compared detection results from monitoR in 52 10-minute surveys recorded at 10 sites in Vermont and New York, USA, to a subset of songs identified by a human that were of a single song type and had visually identifiable spectrograms (e.g. a signal:noise ratio of at least 10 dB: 166 out of 439 total songs for black-throated green warbler, 502 out of 990 total songs for ovenbird). monitoR's automated detection process uses a 'score cutoff', which is the minimum match needed for an unknown event to be considered a detection and results in a true positive, true negative, false positive or false negative detection. At the chosen score cutoffs, monitoR correctly identified presence for black-throated green warbler and ovenbird in 64% and 72% of the 52 surveys using binary point matching, respectively, and 73% and 72% of the 52 surveys using spectrogram cross-correlation, respectively. Of individual songs, 72% of black-throated green warbler songs and 62% of ovenbird songs were identified by binary point matching. Spectrogram cross-correlation identified 83% of black-throated green warbler songs and 66% of ovenbird songs. False positive rates were also assessed for song event detection.
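The 'score cutoff' logic described above amounts to thresholding match scores and tallying the four detection outcomes. The following sketch is a plain Python/NumPy illustration of that logic with hypothetical inputs, not monitoR's actual R interface:

```python
import numpy as np

def score_detections(scores, truth, cutoff):
    """Classify candidate template-matching events at a given score cutoff.
    scores: match score per candidate event; truth: True where a song is present."""
    detected = np.asarray(scores) >= cutoff
    truth = np.asarray(truth, dtype=bool)
    tp = int(np.sum(detected & truth))    # songs correctly detected
    fp = int(np.sum(detected & ~truth))   # detections where no song occurred
    fn = int(np.sum(~detected & truth))   # songs the cutoff missed
    tn = int(np.sum(~detected & ~truth))
    return tp, fp, fn, tn

# e.g. score_detections([0.9, 0.4, 0.7], [True, False, False], cutoff=0.6)
```

Raising the cutoff trades false positives for false negatives, which is why the package reports results at a chosen cutoff rather than a single fixed value.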
Mekonen, Ayehu; Ayele, Yeshi; Berhan, Yifru; Woldeyohannes, Desalegn; Erku, Woldaregay; Sisay, Solomon
2018-01-01
Quality of tuberculosis (TB) microscopy diagnosis is not guaranteed despite the implementation of External Quality Assurance (EQA) services in all laboratories of health facilities. Hence, we aimed at evaluating the technical quality and the findings of sputum smear microscopy for acid fast bacilli (AFB) at health centers in Hararge Zone, Oromia Region, Ethiopia. A cross-sectional study was carried out between July 8, 2014 and July 7, 2015. A pre-tested structured questionnaire was used to collect data. The Lot Quality Assurance Sampling (LQAS) method was used to collect all necessary sample slides. Data were analyzed using SPSS (Statistical Package for Social Sciences) version 20 software. A P-value < 0.05 was considered statistically significant. Of the total 55 health center laboratories assessed during the study period, 20 (36.4%) had major technical errors; 13 (23.6%) had 15 false negative results and 17 (30.9%) had 22 false positive results. Moreover, poor specimen quality, smear size, smear thickness, staining and evenness were indicated in 40 (72.7%), 39 (70.9%), 37 (67.3%), 27 (49.1%) and 37 (67.3%) of the collected samples, respectively. False negative AFB findings were significantly associated with lack of Internal Quality Control (IQC) measures (AOR (Adjusted Odds Ratio): 2.90; 95% CI (Confidence Interval): 1.25, 6.75) and poor staining procedures (AOR: 2.16; 95% CI: 1.01, 5.11). The quality of AFB smear microscopy reading and smearing was low in most of the health center laboratories. Therefore, it is essential to strengthen the EQA program through building the capacity of laboratory professionals.
Niessen, Maurice A J; van der Hoeven, Niels V; van den Born, Bert-Jan H; van Kalken, Coen K; Kraaijenhagen, Roderik A
2014-10-01
Guidelines on home blood pressure measurement (HBPM) recommend taking at least 12 measurements. For screening purposes, however, it is preferable to reduce this number. We therefore derived and validated cut-off values to determine hypertension status after the first duplicate reading of a HBPM series in a web-based worksite health promotion programme. Nine hundred forty-five employees were included in the derivation and 528 in the validation cohort, which was divided into a normal (n = 297) and increased cardiometabolic risk subgroup (n = 231), and a subgroup with a history of hypertension (n = 98). Six duplicate home measurements were collected during three consecutive days. Systolic and diastolic readings at the first duplicate measurement were used as predictors for hypertension in a multivariate logistic model. Cut-off values were determined using receiver operating characteristics analysis. Upper (≥ 150 or ≥ 95 mmHg) and lower limit (<135 and <80 mmHg) cut-off values were derived to confirm or reject the presence of hypertension after one duplicate reading. The area under the curve was 0.94 (standard error 0.01, 95% confidence interval 0.93-0.95). In 62.5% of participants, hypertension status was determined, with 1.1% false positives and 4.7% false negatives. Performance was similar in participants with high and low cardiometabolic risk, but worse in participants with a history of hypertension (10.4% false negatives). One duplicate home reading is sufficient to accurately assess hypertension status in 62.5% of participants, leaving 37.5% in whom the whole HBPM series needs to be completed. HBPM can thus be reliably used as a screening tool for hypertension in a working population. © The Author 2013. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
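The derived rule classifies a first duplicate reading three ways; a minimal sketch using the cut-offs reported above (the function name and return strings are ours):

```python
def classify_first_duplicate(sys_mmhg, dia_mmhg):
    """Hypertension decision after the first duplicate home reading, using the
    derived upper (>=150 or >=95 mmHg) and lower (<135 and <80 mmHg) cut-offs."""
    if sys_mmhg >= 150 or dia_mmhg >= 95:
        return "hypertension confirmed"
    if sys_mmhg < 135 and dia_mmhg < 80:
        return "hypertension rejected"
    return "indeterminate: complete the full HBPM series"

# e.g. classify_first_duplicate(142, 88) -> "indeterminate: complete the full HBPM series"
```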
Schmid, Katrina L; Strang, Niall C
2015-11-01
To provide a summary of the classic paper "Differences in the accommodation stimulus response curves of adult myopes and emmetropes" published in Ophthalmic and Physiological Optics in 1998 and to provide an update on the topic of accommodation errors in myopia. The accommodation responses of 33 participants (10 emmetropes, 11 early onset myopes and 12 late onset myopes) aged 18-31 years were measured using the Canon Autoref R-1 free space autorefractor using three methods to vary the accommodation demand: decreasing distance (4 m to 0.25 m), negative lenses (0 to -4 D at 4 m) and positive lenses (+4 to 0 D at 0.25 m). We observed that the greatest accommodation errors occurred for the negative lens method whereas minimal errors were observed using positive lenses. Adult progressing myopes had greater lags of accommodation than stable myopes at higher demands induced by negative lenses. Progressing myopes had shallower response gradients than the emmetropes and stable myopes; however, the reduced gradient was much less than that observed in children using similar methods. This paper has often been cited as evidence that accommodation responses at near may be primarily reduced in adults with progressing myopia and not in stable myopes and/or that challenging accommodation stimuli (negative lenses with monocular viewing) are required to generate larger accommodation errors. As an analogy, animals reared with hyperopic errors develop axial elongation and myopia. Retinal defocus signals are presumably passed to the retinal pigment epithelium and choroid and then ultimately the sclera to modify eye length. A number of lens treatments that act to slow myopia progression may partially work through reducing accommodation errors. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.
Why does cervical cancer occur in a state-of-the-art screening program?
Castle, Philip E; Kinney, Walter K; Cheung, Li C; Gage, Julia C; Fetterman, Barbara; Poitras, Nancy E; Lorey, Thomas S; Wentzensen, Nicolas; Befano, Brian; Schussler, John; Katki, Hormuzd A; Schiffman, Mark
2017-09-01
The goal of cervical screening is to detect and treat precancers before some become cancer. We wanted to understand why, despite state-of-the-art methods, cervical cancers occurred in relationship to programmatic performance at Kaiser Permanente Northern California (KPNC), where >1,000,000 women aged ≥30 years have undergone cervical cancer screening by triennial HPV and cytology cotesting since 2003. We reviewed clinical histories preceding cervical cancer diagnoses to assign "causes" of cancer. We calculated surrogate measures of programmatic effectiveness (precancers/(precancers and cancers)) and diagnostic yield (precancers and cancers per 1000 cotests), overall and by age at cotest (30-39, 40-49, and ≥50 years). Cancer was rare and found mainly in a localized (treatable) stage. Of 623 cervical cancers with at least one preceding or concurrent cotest, 360 (57.8%) were judged to be prevalent (diagnosed at a localized stage within one year or regional/distant stage within two years of the first cotest). Non-compliance with recommended screening and management preceded 9.0% of all cancers. False-negative cotests/sampling errors (HPV and cytology negative), false-negative histologic diagnoses, and treatment failures preceded 11.2%, 9.0%, and 4.3%, respectively, of all cancers. There was significant heterogeneity in the causes of cancer by histologic category (p<0.001 for all; p=0.002 excluding prevalent cases). Programmatic effectiveness (95.3%) and diagnostic yield were greater for squamous cell versus adenocarcinoma histology (p<0.0001) and both decreased with older ages (p trend <0.0001). A state-of-the-art intensive screening program results in very few cervical cancers, most of which are detected early by screening. Screening may become less efficient at older ages. Copyright © 2017 Elsevier Inc. All rights reserved.
Neural evidence for description dependent reward processing in the framing effect.
Yu, Rongjun; Zhang, Ping
2014-01-01
Human decision making can be influenced by emotionally valenced contexts, known as the framing effect. We used event-related brain potentials to investigate how framing influences the encoding of reward. We found that the feedback related negativity (FRN), which indexes the "worse than expected" negative prediction error in the anterior cingulate cortex (ACC), was more negative for the negative frame than for the positive frame in the win domain. Consistent with previous findings that the FRN is not sensitive to "better than expected" positive prediction error, the FRN did not differentiate the positive and negative frame in the loss domain. Our results provide neural evidence that the description invariance principle which states that reward representation and decision making are not influenced by how options are presented is violated in the framing effect.
Taylor, Darlene; Durigon, Monica; Davis, Heather; Archibald, Chris; Konrad, Bernhard; Coombs, Daniel; Gilbert, Mark; Cook, Darrel; Krajden, Mel; Wong, Tom; Ogilvie, Gina
2015-03-01
Failure to understand the risk of false-negative HIV test results during the window period results in anxiety. Patients typically want accurate test results as soon as possible, while clinicians prefer to wait until the probability of a false negative is virtually nil. This review summarizes the median window periods for third-generation antibody and fourth-generation HIV tests and provides the probability of a false-negative result for various days post-exposure. Data were extracted from published seroconversion panels. A 10-day eclipse period was used to estimate days from infection to first detection of HIV RNA. Median (interquartile range) days to seroconversion were calculated and probabilities of a false-negative result at various time periods post-exposure are reported. The median (interquartile range) window period for third-generation tests was 22 days (19-25) and 18 days (16-24) for fourth-generation tests. The probability of a false-negative result is 0.01 at 80 days post-exposure for third-generation tests and at 42 days for fourth-generation tests. The table of probabilities of false-negative HIV test results may be useful during pre- and post-test HIV counselling to inform co-decision making regarding the ideal time to test for HIV. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
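As an illustration of how such a probability table could be generated, the sketch below assumes a lognormal window-period distribution matched to the reported median and interquartile range; the review itself derived probabilities from empirical seroconversion panels, so the parametric form and the resulting numbers are only approximations:

```python
from math import exp, log

from scipy.stats import lognorm, norm

def p_false_negative(day, median=22.0, q1=19.0, q3=25.0):
    """P(test still negative) at `day` post-infection under a lognormal
    window-period distribution matched to the reported median and IQR
    (third-generation defaults; fourth-generation would use 18, 16, 24)."""
    mu = log(median)                                   # lognormal median = exp(mu)
    sigma = (log(q3) - log(q1)) / (2 * norm.ppf(0.75)) # spread from the quartiles
    return lognorm.sf(day, s=sigma, scale=exp(mu))     # survival = still negative

# e.g. p_false_negative(80) is small for 3rd-gen tests;
#      p_false_negative(42, 18, 16, 24) approximates the 4th-gen case
```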
Shiferaw, Melashu Balew; Hailu, Hiwot Amare; Fola, Abebe Alemu; Derebe, Mulatu Melese; Kebede, Aimro Tadese; Kebede, Abayneh Admas; Emiru, Manamnot Agegne; Gelaw, Zelalem Dessie
2015-01-01
Reliable smear microscopy is an important component of the Directly Observed Treatment, Short-course (DOTS) strategy for TB control programs in countries with limited resources. Although external quality assessment is established in Ethiopia, the TB detection rate in the Amhara region (48%) is lower than the World Health Organization (WHO) estimate (70%). This highlights that the quality of smear microscopy needs to be evaluated. Therefore, the aim of this study was to assess the quality of sputum smear microscopy performance among health center laboratories in the West Amhara region, Ethiopia. A cross-sectional study was conducted from July 08, 2013 to July 07, 2014. Data were collected from 201 public health center laboratories using a structured questionnaire. Slides were collected based on the Lot Quality Assurance Sampling (LQAS) method and rechecked blindly by trained laboratory technologists. The data were entered into EPI Info V.7, and smear quality indicators and AFB results were analyzed with SPSS version 20. Among the 201 laboratories enrolled in this study, 47 (23.4%) had major errors. Forty-one (20.4%) laboratories had a total of 67 false negative results and 29 (14.4%) laboratories had a total of 68 false positive results. Specimen quality, smear thickness and evenness were found poor in 134 (66.7%), 133 (66.2%) and 126 (62.7%) laboratories, respectively. Unavailability of microscope lens cleaning solution (AOR: 2.90; 95% CI: 1.25-6.75; P: 0.013) and dirty smears (AOR: 2.65; 95% CI: 1.14-6.18; P: 0.024) were correlated with false negative results, whereas no previous EQA participation (AOR: 3.43; 95% CI: 1.39-8.45; P: 0.007) was associated with false positive results. The performance of health facilities in sputum smear microscopy was relatively poor in the West Amhara region. Hence, strengthening the EQA program and technical support for sputum smear microscopy are recommended to ensure quality tuberculosis diagnostic services.
Evaluation of exome variants using the Ion Proton Platform to sequence error-prone regions.
Seo, Heewon; Park, Yoomi; Min, Byung Joo; Seo, Myung Eui; Kim, Ju Han
2017-01-01
The Ion Proton sequencer from Thermo Fisher accurately determines sequence variants from target regions with a rapid turnaround time at a low cost. However, misleading variant-calling errors can occur. We performed a systematic evaluation and manual curation of read-level alignments for the 675 ultrarare variants reported by the Ion Proton sequencer from 27 whole-exome sequencing datasets but absent from both the 1000 Genomes Project and the Exome Aggregation Consortium. We classified positive variant calls into 393 highly likely false positives, 126 likely false positives, and 156 likely true positives, which comprised 58.2%, 18.7%, and 23.1% of the variants, respectively. We identified four distinct error patterns of variant calling that may be bioinformatically corrected using different strategies: simplicity region, SNV cluster, peripheral sequence read, and base inversion. Local de novo assembly successfully corrected 201 (38.7%) of the 519 highly likely or likely false positives. We also demonstrate that the two sequencing kits from Thermo Fisher (the Ion PI Sequencing 200 kit V3 and the Ion PI Hi-Q kit) exhibit different error profiles across different error types. A refined calling algorithm with a better polymerase may improve the performance of the Ion Proton sequencing platform.
Schlain, Brian; Amaravadi, Lakshmi; Donley, Jean; Wickramasekera, Ananda; Bennett, Donald; Subramanyam, Meena
2010-01-31
In recent years there has been growing recognition of the impact of anti-drug or anti-therapeutic antibodies (ADAs, ATAs) on the pharmacokinetic and pharmacodynamic behavior of the drug, which ultimately affects drug exposure and activity. These anti-drug antibodies can also impact the safety of the therapeutic by inducing a range of reactions from hypersensitivity to neutralization of the activity of an endogenous protein. Assessments of immunogenicity, therefore, are critically dependent on the bioanalytical method used to test samples, in which positive versus negative reactivity is determined by a statistically derived cut point based on the distribution of drug-naïve samples. For non-normally distributed data, a novel gamma-fitting method for obtaining assay cut points is presented. Non-normal immunogenicity data distributions, which tend to be unimodal and positively skewed, can often be modeled by 3-parameter gamma fits. Under a gamma regime, gamma-based cut points were found to be more accurate (closer to their targeted false positive rates) compared to normal or log-normal methods and more precise (smaller standard errors of cut point estimators) compared with the nonparametric percentile method. Under a gamma regime, normal-theory-based methods for estimating cut points targeting a 5% false positive rate were found in computer simulation experiments to have, on average, false positive rates ranging from 6.2 to 8.3% (or positive biases between +1.2 and +3.3%), with bias decreasing with the magnitude of the gamma shape parameter. The log-normal fits tended, on average, to underestimate false positive rates, with negative biases as large as -2.3%, with absolute bias decreasing with the shape parameter. These results are consistent with the well-known fact that gamma distributions become less skewed and closer to a normal distribution as their shape parameters increase. Inflated false positive rates, especially in a screening assay, shift the emphasis to confirming test results in a subsequent test (confirmatory assay). On the other hand, deflated false positive rates in screening immunogenicity assays will not meet the minimum 5% false positive target proposed in the immunogenicity assay guidance white papers. Copyright 2009 Elsevier B.V. All rights reserved.
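A minimal sketch of the gamma-based cut point, assuming SciPy's three-parameter (shape, location, scale) gamma fit stands in for the authors' fitting procedure; the data and target rate below are illustrative:

```python
import numpy as np
from scipy.stats import gamma

def gamma_cut_point(naive_signals, fp_target=0.05):
    """Screening cut point from a 3-parameter gamma fit to drug-naive sample
    signals; readings above the cut point are called ADA-positive."""
    shape, loc, scale = gamma.fit(naive_signals)   # MLE of shape, location, scale
    return gamma.ppf(1.0 - fp_target, shape, loc=loc, scale=scale)

# e.g. with positively skewed synthetic data:
# rng = np.random.default_rng(0)
# cut = gamma_cut_point(rng.gamma(2.0, 0.5, size=200) + 0.1)
```

If the true distribution is skewed, a normal-theory cut point at mean + 1.645 SD sits too low and inflates the false positive rate, which is the bias the gamma fit is meant to remove.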
A Markerless 3D Computerized Motion Capture System Incorporating a Skeleton Model for Monkeys.
Nakamura, Tomoya; Matsumoto, Jumpei; Nishimaru, Hiroshi; Bretas, Rafael Vieira; Takamura, Yusaku; Hori, Etsuro; Ono, Taketoshi; Nishijo, Hisao
2016-01-01
In this study, we propose a novel markerless motion capture system (MCS) for monkeys, in which 3D surface images of monkeys were reconstructed by integrating data from four depth cameras, and a skeleton model of the monkey was fitted onto the 3D images in each frame of the video. To validate the MCS, first, estimated 3D positions of body parts were compared between the 3D MCS-assisted estimation and manual estimation based on visual inspection when a monkey performed a shuttling behavior in which it had to avoid obstacles in various positions. The mean estimation errors of the positions of body parts (3-14 cm) and of head rotation (35-43°) between the 3D MCS-assisted and manual estimation were comparable to the errors between two different experimenters performing manual estimation. Furthermore, the MCS could identify specific monkey actions, and there were no false positive or false negative detections of actions compared with manual estimation. Second, to check the reproducibility of MCS-assisted estimation, the same analyses of the above experiments were repeated by a different user. The estimation errors of the positions of most body parts between the two experimenters were significantly smaller in the MCS-assisted estimation than in the manual estimation. Third, the effects of methamphetamine (MAP) administration on the spontaneous behaviors of four monkeys were analyzed using the MCS. MAP significantly increased head movements, tended to decrease locomotion speed, and had no significant effect on total path length. The results were comparable to previous human clinical data. Furthermore, estimated data following MAP injection (total path length, walking speed, and speed of head rotation) correlated significantly between the two experimenters in the MCS-assisted estimation (r = 0.863 to 0.999). The results suggest that the presented MCS in monkeys is useful in investigating neural mechanisms underlying various psychiatric disorders and developing pharmacological interventions.
ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.
2010-08-10
A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
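For a simple counting detector, the recipe reduces to two steps: fix the detection threshold from the acceptable Type I error rate, then find the smallest source intensity whose detection probability reaches the desired power. A Poisson sketch of this logic (our simplification; the paper's recipe applies to any detection algorithm):

```python
import numpy as np
from scipy.stats import poisson

def detection_threshold(b, alpha=0.003):
    """Smallest count c with P(N >= c | background b) <= alpha (Type I error)."""
    c = 0
    while poisson.sf(c - 1, b) > alpha:   # sf(c - 1) = P(N >= c)
        c += 1
    return c

def upper_limit(b, alpha=0.003, beta=0.5, step=0.01, s_max=100.0):
    """Smallest intensity s detected with probability >= 1 - beta (Type II error beta)."""
    c = detection_threshold(b, alpha)
    for s in np.arange(0.0, s_max, step):
        if poisson.sf(c - 1, b + s) >= 1.0 - beta:
            return s
    return float("inf")

# e.g. upper_limit(b=3.0) calibrates the detection process for a background of 3 counts
```

Note that the result depends only on the background and the chosen error rates, matching the paper's point that an upper limit characterizes the procedure, not any particular source.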
Negative Expertise: Comparing Differently Tenured Elder Care Nurses' Negative Knowledge
ERIC Educational Resources Information Center
Gartmeier, Martin; Lehtinen, Erno; Gruber, Hans; Heid, Helmut
2011-01-01
Negative expertise is conceptualised as the professional's ability to avoid errors during practice due to certain cognitive agencies. In this study, negative knowledge (i.e. knowledge about what is wrong in a certain context and situation) is conceptualised as one such agency. This study compares and investigates the negative knowledge of elder…
Motivational state controls the prediction error in Pavlovian appetitive-aversive interactions.
Laurent, Vincent; Balleine, Bernard W; Westbrook, R Frederick
2018-01-01
Contemporary theories of learning emphasize the role of a prediction error signal in driving learning, but the nature of this signal remains hotly debated. Here, we used Pavlovian conditioning in rats to investigate whether primary motivational and emotional states interact to control prediction error. We initially generated cues that positively or negatively predicted an appetitive food outcome. We then assessed how these cues modulated aversive conditioning when a novel cue was paired with a foot shock. We found that a positive predictor of food enhances, whereas a negative predictor of that same food impairs, aversive conditioning. Critically, we also showed that the enhancement produced by the positive predictor is removed by reducing the value of its associated food. In contrast, the impairment triggered by the negative predictor remains insensitive to devaluation of its associated food. These findings provide compelling evidence that the motivational value attributed to a predicted food outcome can directly control appetitive-aversive interactions and, therefore, that motivational processes can modulate emotional processes to generate the final error term on which subsequent learning is based. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik
2008-01-01
The detection of errors is known to be associated with two successive neurophysiological components in EEG, with an early time-course following motor execution: the error-related negativity (ERN/Ne) and late positivity (Pe). The exact cognitive and physiological processes contributing to these two EEG components, as well as their functional…
Analyzing false positives of four questions in the Force Concept Inventory
NASA Astrophysics Data System (ADS)
Yasuda, Jun-ichiro; Mae, Naohiro; Hull, Michael M.; Taniguchi, Masa-aki
2018-06-01
In this study, we analyze the systematic error from false positives of the Force Concept Inventory (FCI). We compare the systematic errors of question 6 (Q.6), Q.7, and Q.16, for which clearly erroneous reasoning has been found, with Q.5, for which clearly erroneous reasoning has not been found. We determine whether or not a correct response to a given FCI question is a false positive using subquestions. In addition to the 30 original questions, subquestions were introduced for Q.5, Q.6, Q.7, and Q.16. This modified version of the FCI was administered to 1145 university students in Japan from 2015 to 2017. In this paper, we discuss our finding that the systematic errors of Q.6, Q.7, and Q.16 are much larger than that of Q.5 for students with mid-level FCI scores. Furthermore, we find that, averaged over the data sample, the sum of the false positives from Q.5, Q.6, Q.7, and Q.16 is about 10% of the FCI score of a mid-level student.
48 CFR 22.1015 - Discovery of errors by the Department of Labor.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Discovery of errors by the... REGULATION SOCIOECONOMIC PROGRAMS APPLICATION OF LABOR LAWS TO GOVERNMENT ACQUISITIONS Service Contract Act of 1965, as Amended 22.1015 Discovery of errors by the Department of Labor. If the Department of...
12 CFR 205.11 - Procedures for resolving errors.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 2 2011-01-01 2011-01-01 false Procedures for resolving errors. 205.11 Section 205.11 Banks and Banking FEDERAL RESERVE SYSTEM BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM ELECTRONIC FUND TRANSFERS (REGULATION E) § 205.11 Procedures for resolving errors. (a) Definition of error—(1...
Toppi, J; Petti, M; Vecchiato, G; Cincotti, F; Salinari, S; Mattia, D; Babiloni, F; Astolfi, L
2013-01-01
Partial Directed Coherence (PDC) is a spectral multivariate estimator of effective connectivity, relying on the concept of Granger causality. Although its original definition derived directly from information theory, two modifications were introduced to provide better physiological interpretations of the estimated networks: i) normalization of the estimator by rows, and ii) a squared transformation. In the present paper we investigated the effect of PDC normalization on the performance of the statistical validation process applied to connectivity patterns under different conditions of signal-to-noise ratio (SNR) and amount of data available for the analysis. Results of the statistical analysis revealed an effect of PDC normalization only on the percentages of type I and type II errors committed when using the shuffling procedure for the assessment of connectivity patterns. The PDC formulation had no effect on the performance of the validation process executed by means of the asymptotic statistic approach. Moreover, the percentages of both false positives and false negatives committed by the asymptotic statistic approach were always lower than those of the shuffling procedure for each type of normalization.
Laboratory tests for identification or exclusion of heparin induced thrombocytopenia: HIT or miss?
Favaloro, Emmanuel J
2018-02-01
Heparin induced thrombocytopenia (HIT) is a potentially fatal condition that arises subsequent to formation of antibodies against complexes containing heparin, usually platelet-factor 4-heparin ("anti-PF4-heparin"). Assessment for HIT involves both clinical evaluation and, if indicated, laboratory testing for confirmation or exclusion, typically using an initial immunological assay ("screening"), and only if positive, a secondary functional assay for confirmation. Many different immunological and functional assays have been developed. The most common contemporary immunological assays comprise enzyme-linked immunosorbent assay [ELISA], chemiluminescence, lateral flow, and particle gel techniques. The most common functional assays measure platelet aggregation or platelet activation events (e.g., serotonin release assay; heparin-induced platelet activation (HIPA); flow cytometry). All assays have some sensitivity and specificity to HIT antibodies, but differ in terms of relative sensitivity and specificity for pathological HIT, as well as false negative and false positive error rate. This brief article overviews the different available laboratory methods, as well as providing a suggested approach to diagnosis or exclusion of HIT. © 2017 Wiley Periodicals, Inc.
Confirmation bias in human reinforcement learning: Evidence from counterfactual feedback processing.
Palminteri, Stefano; Lefebvre, Germain; Kilford, Emma J; Blakemore, Sarah-Jayne
2017-08-01
Previous studies suggest that factual learning, that is, learning from obtained outcomes, is biased, such that participants preferentially take into account positive, as compared to negative, prediction errors. However, whether or not the prediction error valence also affects counterfactual learning, that is, learning from forgone outcomes, is unknown. To address this question, we analysed the performance of two groups of participants on reinforcement learning tasks using a computational model that was adapted to test if prediction error valence influences learning. We carried out two experiments: in the factual learning experiment, participants learned from partial feedback (i.e., the outcome of the chosen option only); in the counterfactual learning experiment, participants learned from complete feedback information (i.e., the outcomes of both the chosen and unchosen option were displayed). In the factual learning experiment, we replicated previous findings of a valence-induced bias, whereby participants learned preferentially from positive, relative to negative, prediction errors. In contrast, for counterfactual learning, we found the opposite valence-induced bias: negative prediction errors were preferentially taken into account, relative to positive ones. When considering valence-induced bias in the context of both factual and counterfactual learning, it appears that people tend to preferentially take into account information that confirms their current choice.
Sosic-Vasic, Zrinka; Ulrich, Martin; Ruchsow, Martin; Vasic, Nenad; Grön, Georg
2012-01-01
The present study investigated the association between traits of the Five Factor Model of Personality (Neuroticism, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness) and neural correlates of error monitoring obtained from a combined Eriksen-Flanker-Go/NoGo task during event-related functional magnetic resonance imaging in 27 healthy subjects. Individual expressions of personality traits were measured using the NEO-PI-R questionnaire. Conscientiousness correlated positively with error signaling in the left inferior frontal gyrus and adjacent anterior insula (IFG/aI). A second strong positive correlation was observed in the anterior cingulate gyrus (ACC). Neuroticism was negatively correlated with error signaling in the inferior frontal cortex, possibly reflecting the negative inter-correlation between both scales observed at the behavioral level. Under the present statistical thresholds, no significant results were obtained for the remaining scales. Aligning the personality trait of Conscientiousness with task-accomplishment striving, the correlation in the left IFG/aI possibly reflects inter-individually different involvement whenever task-set-related memory representations are violated by the occurrence of errors. The strong correlations in the ACC may indicate that more conscientious subjects were more strongly affected by these violations of a given task-set, expressed by individually different, negatively valenced signals conveyed by the ACC upon occurrence of an error. The present results illustrate that underlying personality traits should be taken into account when predicting individual responses to errors, and they lend external validity to the personality trait approach, suggesting that personality constructs reflect more than mere descriptive taxonomies.
ADEPT, a dynamic next generation sequencing data error-detection program with trimming
Feng, Shihai; Lo, Chien-Chi; Li, Po-E; ...
2016-02-29
Illumina is the most widely used next generation sequencing technology and produces millions of short reads that contain errors. These sequencing errors constitute a major problem in applications such as de novo genome assembly, metagenomics analysis and single nucleotide polymorphism discovery. In this study, we present ADEPT, a dynamic error detection method that uses the quality scores of each nucleotide and its neighboring nucleotides, together with their positions within the read, and compares them to the position-specific quality score distribution of all bases within the sequencing run. This method greatly improves upon other available methods in terms of the true positive rate of error discovery without affecting the false positive rate, particularly within the middle of reads. We conclude that ADEPT is the only tool to date that dynamically assesses errors within reads by comparing position-specific and neighboring base quality scores with the distribution of quality scores for the dataset being analyzed. The result is a method that is less prone to position-dependent under-prediction, which is one of the most prominent issues in error prediction. The outcome is that ADEPT improves upon prior efforts in identifying true errors, primarily within the middle of reads, while reducing the false positive rate.
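A toy version of the position-aware quality check ADEPT describes might look like the following; the smoothing window, z-score cut-off, and function name are illustrative choices of ours, not ADEPT's actual implementation:

```python
import numpy as np

def flag_suspect_bases(qual, run_mean, run_std, z_cut=-2.0, window=1):
    """Flag read positions whose neighbour-smoothed quality falls well below
    the run-wide, position-specific quality distribution.
    qual: (L,) qualities of one read; run_mean/run_std: (L,) run statistics."""
    padded = np.pad(qual.astype(float), window, mode="edge")
    kernel = np.ones(2 * window + 1) / (2 * window + 1)
    smooth = np.convolve(padded, kernel, mode="valid")   # length L again
    z = (smooth - run_mean) / np.where(run_std > 0, run_std, 1.0)
    return np.where(z < z_cut)[0]                        # candidate error positions
```

Comparing each base against the distribution at its own read position, rather than a single global threshold, is what counters the position-dependent under-prediction the abstract mentions.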
Discrete emotion-congruent false memories in the DRM paradigm.
Bland, Cassandra E; Howe, Mark L; Knott, Lauren
2016-08-01
Research has shown that false-memory production is enhanced for material that is emotionally congruent with the mood of the participant at the time of encoding. So far this research has only been conducted to examine the influence of generic negative affective mood states and generic negative stimuli on false-memory production. In addition, much of the research is limited as it focuses on valence and arousal dimensions, and fails to take into account the more comprehensive nature of emotions. The current study demonstrates that this effect goes beyond general negative or positive moods and acts at a more discrete emotional level. Participants underwent a standard emotion-induction procedure before listening to negative emotional or neutral associative word lists. The emotions induced, negative word lists, and associated nonpresented critical lures, were related to either fear or anger, 2 negative valence emotions that are also both high in arousal. Results showed that when valence and arousal are controlled for, false memories are more likely to be produced for discrete emotionally congruent compared with incongruent materials. These results support spreading activation theories of false remembering and add to our understanding of the adaptive nature of false-memory production. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Stirling, Paul; Faroug, Radwane; Amanat, Suheil; Ahmed, Abdulkhaled; Armstrong, Malcolm; Sharma, Pankaj; Qamruddin, Ahmed
2014-01-01
We quantify the false-negative diagnostic rate of septic arthritis using Gram-stain microscopy of synovial fluid and compare this to values reported in the peer-reviewed literature. We propose a method of improving the diagnostic value of Gram-stain microscopy using Lithium Heparin containers that prevent synovial fluid coagulation. A retrospective study of the Manchester Royal Infirmary microbiology database of patients undergoing synovial fluid Gram-stain and culture between December 2003 and March 2012 was undertaken. The initial cohort of 1896 synovial fluid analyses for suspected septic arthritis was reduced to 143 after exclusion criteria were applied. Analysis of our Gram-stain microscopy yielded 111 false-negative results from a cohort of 143 positive synovial fluid cultures, giving a false-negative rate of 78%. We report a false-negative rate of Gram-stain microscopy for septic arthritis of 78%. Clinicians should therefore avoid the investigation until a statistically significant data set confirms its efficacy. The investigation's value could be improved by using Lithium Heparin containers to collect homogeneous synovial fluid samples. Ongoing research aims to establish how much this could reduce the false-negative rate.
Tso, Kai-Yuen; Lee, Sau Dan; Lo, Kwok-Wai; Yip, Kevin Y
2014-12-23
Patient-derived tumor xenografts in mice are widely used in cancer research and have become important in developing personalized therapies. When these xenografts are subject to DNA sequencing, the samples could contain various amounts of mouse DNA. It has been unclear how the mouse reads would affect data analyses. We conducted comprehensive simulations to compare three alignment strategies at different mutation rates, read lengths, sequencing error rates, human-mouse mixing ratios and sequenced regions. We also sequenced a nasopharyngeal carcinoma xenograft and a cell line to test how the strategies work on real data. We found the "filtering" and "combined reference" strategies performed better than aligning reads directly to human reference in terms of alignment and variant calling accuracies. The combined reference strategy was particularly good at reducing false negative variants calls without significantly increasing the false positive rate. In some scenarios the performance gain of these two special handling strategies was too small for special handling to be cost-effective, but it was found crucial when false non-synonymous SNVs should be minimized, especially in exome sequencing. Our study systematically analyzes the effects of mouse contamination in the sequencing data of human-in-mouse xenografts. Our findings provide information for designing data analysis pipelines for these data.
Doñamayor, Nuria; Dinani, Jakob; Römisch, Manuel; Ye, Zheng; Münte, Thomas F
2014-10-01
Neural responses to performance errors and external feedback have been suggested to be altered in obsessive-compulsive disorder. In the current study, an associative learning task was used in healthy participants assessed for obsessive-compulsive symptoms by the OCI-R questionnaire. The task included a condition with equivocal feedback that did not inform about the participants' performance. Following incorrect responses, an error-related negativity and an error positivity were observed. In the feedback phase, the largest feedback-related negativity was observed following equivocal feedback. Theta and beta oscillatory components were found following incorrect and correct responses, respectively, and an increase in theta power was associated with negative and equivocal feedback. Changes over time were also explored as an indicator for possible learning effects. Finally, event-related potentials and oscillatory components were found to be uncorrelated with OCI-R scores in the current non-clinical sample. Copyright © 2014 Elsevier B.V. All rights reserved.
Negative input for grammatical errors: effects after a lag of 12 weeks.
Saxton, Matthew; Backley, Phillip; Gallaway, Clare
2005-08-01
Effects of negative input for 13 categories of grammatical error were assessed in a longitudinal study of naturalistic adult-child discourse. Two-hour samples of conversational interaction were obtained at two points in time, separated by a lag of 12 weeks, for 12 children (mean age 2;0 at the start). The data were interpreted within the framework offered by Saxton's (1997, 2000) contrast theory of negative input. Corrective input was associated with subsequent improvements in the grammaticality of child speech for three of the target structures. No effects were found for two forms of positive input: non-contingent models, where the adult produces target structures in non-error-contingent contexts; and contingent models, where grammatical forms follow grammatical child usages. The findings lend support to the view that, in some cases at least, the structure of adult-child discourse yields information on the bounds of grammaticality for the language-learning child.
A Robust False Matching Points Detection Method for Remote Sensing Image Registration
NASA Astrophysics Data System (ADS)
Shan, X. J.; Tang, P.
2015-04-01
Given the influences of illumination, imaging angle, and geometric distortion, among others, false matching points still occur in all image registration algorithms. Therefore, false matching point detection is an important step in remote sensing image registration. Random Sample Consensus (RANSAC) is typically used to detect false matching points. However, the RANSAC method cannot detect all false matching points in some remote sensing images. Therefore, a robust false matching point detection method based on the K-nearest-neighbour (K-NN) graph (KGD) is proposed to obtain robust and highly accurate results. The KGD method starts with the construction of the K-NN graph in one image: a K-NN graph is first generated for each matching point and its K nearest matching points. A local transformation model for each matching point is then obtained using its K nearest matching points. The error of each matching point is computed using its transformation model. Last, the L matching points with the largest errors are identified as false matching points and removed. This process iterates until all errors are smaller than the given threshold. In addition, the KGD method can be used in combination with other methods, such as RANSAC. Several remote sensing images with different resolutions and terrains are used in the experiment. We evaluate the performance of the KGD method, the RANSAC + KGD method, RANSAC, and Graph Transformation Matching (GTM). The experimental results demonstrate the superior performance of the KGD and RANSAC + KGD methods.
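A compact sketch of the KGD iteration, assuming a local affine model fitted by least squares to each match's K nearest neighbours stands in for the paper's local transformation model; the parameter defaults and function name are ours:

```python
import numpy as np

def kgd_filter(src, dst, k=8, n_remove=1, tol=1.0, max_iter=200):
    """Iteratively drop the matches that disagree most with a local affine
    model fitted to their K nearest matches.
    src, dst: (N, 2) arrays of matched point coordinates in the two images."""
    idx = np.arange(len(src))
    for _ in range(max_iter):
        kk = min(k, len(idx) - 1)
        errors = np.empty(len(idx))
        for i, j in enumerate(idx):
            d = np.linalg.norm(src[idx] - src[j], axis=1)
            nn = idx[np.argsort(d)[1:kk + 1]]            # K nearest matches
            A = np.hstack([src[nn], np.ones((kk, 1))])   # affine design matrix
            coef, *_ = np.linalg.lstsq(A, dst[nn], rcond=None)
            pred = np.array([src[j, 0], src[j, 1], 1.0]) @ coef
            errors[i] = np.linalg.norm(pred - dst[j])    # local model disagreement
        if errors.max() < tol:                           # all matches consistent
            break
        idx = np.delete(idx, np.argsort(errors)[-n_remove:])
    return idx                                           # surviving match indices
```

Because the model is purely local, the filter tolerates smoothly varying terrain distortion that a single global transformation (as in plain RANSAC) cannot absorb.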
Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K
2016-11-25
Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
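The mechanism (attenuation of the main-pollutant effect plus spurious copollutant associations under correlated error) can be reproduced in a few lines; in the sketch below, the covariances and effect sizes are made up for illustration and are not the study's empirical values:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1460                                                       # four years of daily data
true_x = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n)
err = rng.multivariate_normal([0, 0], [[0.5, 0.2], [0.2, 0.5]], size=n)
obs_x = true_x + err                                           # correlated exposure errors
lam = np.exp(3.0 + 0.05 * true_x[:, 0])                        # copollutant is truly null
y = rng.poisson(lam)                                           # simulated daily ED visits
fit = sm.GLM(y, sm.add_constant(obs_x), family=sm.families.Poisson()).fit()
print(fit.params[1:])  # main effect attenuated below 0.05; copollutant can drift from 0
```

Repeating the fit over many simulated datasets (the Monte Carlo step) gives the attenuation factors and the empirical type I error rate for the null copollutant.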
Yoon, Jung Hyun; Jung, Hae Kyoung; Lee, Jong Tae; Ko, Kyung Hee
2013-09-01
To investigate the factors that have an effect on false-positive or false-negative shear-wave elastography (SWE) results in solid breast masses. From June to December 2012, 222 breast lesions of 199 consecutive women (mean age: 45.3 ± 10.1 years; range, 21 to 88 years) who had been scheduled for biopsy or surgical excision were included. Greyscale ultrasound and SWE were performed in all women before biopsy. Final ultrasound assessments and SWE parameters (pattern classification and maximum elasticity) were recorded and compared with histopathology results. Patient and lesion factors in the 'true' and 'false' groups were compared. Of the 222 masses, 175 (78.8%) were benign, and 47 (21.2%) were malignant. False-positive rates of benign masses were significantly higher than false-negative rates of malignancy in SWE patterns, 36.6% versus 6.4% (P < 0.001). Among both benign and malignant masses, factors showing significance among false SWE features were lesion size, breast thickness and lesion depth (all P < 0.05). All 47 malignant breast masses had SWE images of good quality. False SWE features were significantly more common in benign masses. Lesion size, breast thickness and lesion depth significantly influence false results, and this needs consideration in SWE image acquisition. • Shear-wave elastography (SWE) is widely used during breast imaging • At SWE, false-positive rates were significantly higher than false-negative rates • Larger size, breast thickness, depth and fair quality influence false-positive SWE features • Smaller size, larger breast thickness and depth influence false-negative SWE features.
Lexical and phonological variability in preschool children with speech sound disorder.
Macrae, Toby; Tyler, Ann A; Lewis, Kerry E
2014-02-01
The authors of this study examined relationships between measures of word and speech error variability and between these and other speech and language measures in preschool children with speech sound disorder (SSD). In this correlational study, 18 preschool children with SSD, age-appropriate receptive vocabulary, and normal oral motor functioning and hearing were assessed across 2 sessions. Experimental measures included word and speech error variability, receptive vocabulary, nonword repetition (NWR), and expressive language. Pearson product–moment correlation coefficients were calculated among the experimental measures. The correlation between word and speech error variability was slight and nonsignificant. The correlation between word variability and receptive vocabulary was moderate and negative, although nonsignificant. High word variability was associated with small receptive vocabularies. The correlations between speech error variability and NWR and between speech error variability and the mean length of children's utterances were moderate and negative, although both were nonsignificant. High speech error variability was associated with poor NWR and language scores. High word variability may reflect unstable lexical representations, whereas high speech error variability may reflect indistinct phonological representations. Preschool children with SSD who show abnormally high levels of different types of speech variability may require slightly different approaches to intervention.
Acetaminophen attenuates error evaluation in cortex
Kam, Julia W.Y.; Heine, Steven J.; Inzlicht, Michael; Handy, Todd C.
2016-01-01
Acetaminophen has recently been recognized as having impacts that extend into the affective domain. In particular, double blind placebo controlled trials have revealed that acetaminophen reduces the magnitude of reactivity to social rejection, frustration, dissonance and to both negatively and positively valenced attitude objects. Given this diversity of consequences, it has been proposed that the psychological effects of acetaminophen may reflect a widespread blunting of evaluative processing. We tested this hypothesis using event-related potentials (ERPs). Sixty-two participants received acetaminophen or a placebo in a double-blind protocol and completed the Go/NoGo task. Participants’ ERPs were observed following errors on the Go/NoGo task, in particular the error-related negativity (ERN; measured at FCz) and error-related positivity (Pe; measured at Pz and CPz). Results show that acetaminophen inhibits the Pe, but not the ERN, and the magnitude of an individual’s Pe correlates positively with omission errors, partially mediating the effects of acetaminophen on the error rate. These results suggest that recently documented affective blunting caused by acetaminophen may best be described as an inhibition of evaluative processing. They also contribute to the growing work suggesting that the Pe is more strongly associated with conscious awareness of errors relative to the ERN. PMID:26892161
Addressing the Problem of Negative Lexical Transfer Errors in Chilean University Students
ERIC Educational Resources Information Center
Dissington, Paul Anthony
2018-01-01
Studies of second language learning have revealed a connection between first language transfer and errors in second language production. This paper describes an action research study carried out among Chilean university students studying English as part of their degree programmes. The study focuses on common lexical errors made by Chilean…
Cohesive Errors in Writing among ESL Pre-Service Teachers
ERIC Educational Resources Information Center
Kwan, Lisa S. L.; Yunus, Melor Md
2014-01-01
Writing is a complex skill and one of the most difficult to master. A teacher's weak writing skills may negatively influence their students. Therefore, reinforcing teacher education by first determining pre-service teachers' writing weaknesses is imperative. This mixed-methods error analysis study aims to examine the cohesive errors in the writing…
Huang, Jidong; Emery, Sherry
2016-01-01
Background Social media have transformed the communications landscape. People increasingly obtain news and health information online and via social media. Social media platforms also serve as novel sources of rich observational data for health research (including infodemiology, infoveillance, and digital disease detection). While the number of studies using social data is growing rapidly, very few of these studies transparently outline their methods for collecting, filtering, and reporting those data. Keywords and search filters applied to social data form the lens through which researchers may observe what and how people communicate about a given topic. Without a properly focused lens, research conclusions may be biased or misleading. Standards of reporting data sources and quality are needed so that data scientists and consumers of social media research can evaluate and compare methods and findings across studies. Objective We aimed to develop and apply a framework of social media data collection and quality assessment and to propose a reporting standard, which researchers and reviewers may use to evaluate and compare the quality of social data across studies. Methods We propose a conceptual framework consisting of three major steps in collecting social media data: develop, apply, and validate search filters. This framework is based on two criteria: retrieval precision (how much of retrieved data is relevant) and retrieval recall (how much of the relevant data is retrieved). We then discuss two conditions that estimation of retrieval precision and recall rely on—accurate human coding and full data collection—and how to calculate these statistics in cases that deviate from the two ideal conditions. We then apply the framework to a real-world example using approximately 4 million tobacco-related tweets collected from the Twitter firehose. Results We developed and applied a search filter to retrieve e-cigarette–related tweets from the archive based on three keyword categories: devices, brands, and behavior. The search filter retrieved 82,205 e-cigarette–related tweets from the archive and was validated. Retrieval precision was calculated above 95% in all cases. Retrieval recall was 86% assuming ideal conditions (no human coding errors and full data collection), 75% when unretrieved messages could not be archived, 86% assuming no false negative errors by coders, and 93% allowing both false negative and false positive errors by human coders. Conclusions This paper sets forth a conceptual framework for the filtering and quality evaluation of social data that addresses several common challenges and moves toward establishing a standard of reporting social data. Researchers should clearly delineate data sources, how data were accessed and collected, and the search filter building process and how retrieval precision and recall were calculated. The proposed framework can be adapted to other public social media platforms. PMID:26920122
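Under the two ideal conditions (error-free human coding and full data collection), retrieval precision and recall follow directly from coded random samples of the retrieved and unretrieved pools; a minimal sketch, with all input numbers hypothetical:

```python
def retrieval_metrics(n_retrieved, n_unretrieved,
                      p_relevant_retrieved, p_relevant_unretrieved):
    """Estimate retrieval precision and recall from human-coded random samples
    of the retrieved and unretrieved message pools (ideal conditions assumed).
    p_relevant_* are the coded proportions of relevant messages in each pool."""
    tp = n_retrieved * p_relevant_retrieved        # relevant and retrieved
    fn = n_unretrieved * p_relevant_unretrieved    # relevant but missed by the filter
    precision = p_relevant_retrieved
    recall = tp / (tp + fn)
    return precision, recall

# e.g. retrieval_metrics(82205, 3.9e6, 0.95, 3.5e-3) -> precision 0.95, recall ~0.85
```

The paper's 75-93% recall range comes from relaxing these ideal conditions, that is, adjusting tp and fn for coder false negatives/positives and for unarchived messages.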
Tembuyser, Lien; Ligtenberg, Marjolijn J L; Normanno, Nicola; Delen, Sofie; van Krieken, J Han; Dequeker, Elisabeth M C
2014-05-01
Precision medicine is now a key element in clinical oncology. RAS mutational status is a crucial predictor of responsiveness to anti-epidermal growth factor receptor agents in metastatic colorectal cancer. In an effort to guarantee high-quality testing services in molecular pathology, the European Society of Pathology has been organizing an annual KRAS external quality assessment program since 2009. In 2012, 10 formalin-fixed, paraffin-embedded samples, of which 8 were from invasive metastatic colorectal cancer tissue and 2 were artificial samples of cell line material, were sent to more than 100 laboratories from 26 countries with a request for routine KRAS testing. Both genotyping and clinical reports were assessed independently. Twenty-seven percent of the participants genotyped at least 1 of 10 samples incorrectly. In total, less than 5% of the distributed specimens were genotyped incorrectly. Genotyping errors consisted of false negatives, false positives, and incorrectly genotyped mutations. Twenty percent of the laboratories reported a technical error for one or more samples. A review of the written reports showed that several essential elements were missing, most notably a clinical interpretation of the test result, the method sensitivity, and the use of a reference sequence. External quality assessment serves as a valuable educational tool in assessing and improving molecular testing quality and is an important asset for monitoring quality assurance upon incorporation of new biomarkers in diagnostic services. Copyright © 2014 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Repeatability and Reproducibility of Decisions by Latent Fingerprint Examiners
Ulery, Bradford T.; Hicklin, R. Austin; Buscaglia, JoAnn; Roberts, Maria Antonia
2012-01-01
The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. We tested latent print examiners on the extent to which they reached consistent decisions. This study assessed intra-examiner repeatability by retesting 72 examiners on comparisons of latent and exemplar fingerprints, after an interval of approximately seven months; each examiner was reassigned 25 image pairs for comparison, out of a total pool of 744 image pairs. We compare these repeatability results with reproducibility (inter-examiner) results derived from our previous study. Examiners repeated 89.1% of their individualization decisions, and 90.1% of their exclusion decisions; most of the changed decisions resulted in inconclusive decisions. Repeatability of comparison decisions (individualization, exclusion, inconclusive) was 90.0% for mated pairs, and 85.9% for nonmated pairs. Repeatability and reproducibility were notably lower for comparisons assessed by the examiners as “difficult” than for “easy” or “moderate” comparisons, indicating that examiners' assessments of difficulty may be useful for quality assurance. No false positive errors were repeated (n = 4); 30% of false negative errors were repeated. One percent of latent value decisions were completely reversed (no value even for exclusion vs. of value for individualization). Most of the inter- and intra-examiner variability concerned whether the examiners considered the information available to be sufficient to reach a conclusion; this variability was concentrated on specific image pairs such that repeatability and reproducibility were very high on some comparisons and very low on others. Much of the variability appears to be due to making categorical decisions in borderline cases. PMID:22427888
Localized-atlas-based segmentation of breast MRI in a decision-making framework.
Fooladivanda, Aida; Shokouhi, Shahriar B; Ahmadinejad, Nasrin
2017-03-01
Breast-region segmentation is an important step for density estimation and Computer-Aided Diagnosis (CAD) systems in Magnetic Resonance Imaging (MRI). Detection of the breast-chest wall boundary is often difficult due to the similarity between gray-level values of fibroglandular tissue and pectoral muscle. This paper proposes a robust breast-region segmentation method that is applicable to both complex cases, with fibroglandular tissue connected to the pectoral muscle, and simple cases with high contrast boundaries. We present a decision-making framework based on geometric features and a support vector machine (SVM) to classify breasts into two main groups, complex and simple. For complex cases, breast segmentation is done using a combination of intensity-based and atlas-based techniques; only the intensity-based operation is employed for simple cases. A novel atlas-based method, called localized-atlas, accomplishes the processes of atlas construction and registration based on the region of interest (ROI). Atlas-based segmentation is performed by relying on the chest wall template. Our approach is validated using a dataset of 210 cases. Based on similarity between automatic and manual segmentation results, the proposed method achieves Dice similarity coefficient, Jaccard coefficient, total overlap, false negative, and false positive values of 96.3, 92.9, 97.4, 2.61, and 4.77%, respectively. The localization error of the breast-chest wall boundary is 1.97 mm, in terms of averaged deviation distance. These results demonstrate that the suggested framework performs breast segmentation with negligible error and efficient computational time for breasts of different sizes, shapes, and density patterns.
Hu, Bifeng; Zhao, Ruiying; Chen, Songchao; Zhou, Yue; Jin, Bin; Li, Yan; Shi, Zhou
2018-04-10
Assessing heavy metal pollution and delineating polluted areas are the basis for evaluating pollution risk and determining a cost-effective remediation plan. Most existing studies are based on the spatial distribution of pollutants but ignore the related uncertainty. In this study, concentrations of eight heavy metals (Cr, Pb, Cd, Hg, Zn, Cu, Ni, and As) were measured at 1040 sampling sites in a coastal industrial city in the Yangtze River Delta, China. The single pollution index (PI) and Nemerow integrated pollution index (NIPI) were calculated for every surface sample (0-20 cm) to assess the degree of heavy metal pollution. Ordinary kriging (OK) was used to map the spatial distribution of heavy metal content and NIPI. Then, we delineated composite heavy metal contamination based on the uncertainty produced by indicator kriging (IK). The results showed that mean values of all PIs and NIPIs were at safe levels. Heavy metals were most accumulated in the central portion of the study area. Based on IK, the spatial probability of composite heavy metal pollution was computed; the probability of composite contamination was highest in the central core urban area. A probability of 0.6 was found to be the optimum threshold for separating polluted from unpolluted areas for integrative heavy metal contamination. Pollution delineation based on uncertainty showed that the proportion of false negative error areas was 6.34%, while the proportion of false positive error areas was 0.86%. The accuracy of the classification was 92.80%. This indicates that the method we developed is a valuable tool for delineating heavy metal pollution.
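Both indices used here have standard closed forms: the single pollution index divides a measured concentration by a reference standard, and the Nemerow index combines the mean and maximum PI across metals. A minimal sketch follows, with illustrative concentrations and reference standards (the study's actual standards are not given in the abstract):

import math

def pollution_index(concentration, standard):
    # Single pollution index: PI = C_i / S_i.
    return concentration / standard

def nemerow_index(pis):
    # Nemerow integrated pollution index:
    # NIPI = sqrt((mean(PI)^2 + max(PI)^2) / 2).
    mean_pi = sum(pis) / len(pis)
    return math.sqrt((mean_pi ** 2 + max(pis) ** 2) / 2)

# Hypothetical surface sample: (measured, standard) in mg/kg.
sample = {"Cr": (60.0, 90.0), "Pb": (30.0, 35.0), "Cd": (0.25, 0.20),
          "Hg": (0.10, 0.15), "Zn": (110.0, 100.0)}
pis = [pollution_index(c, s) for c, s in sample.values()]
print(f"max PI = {max(pis):.2f}, NIPI = {nemerow_index(pis):.2f}")

Because the Nemerow index squares the maximum PI, a single strongly polluting metal can push a sample above a safety threshold even when the mean PI is low.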
Clinical decision support alert malfunctions: analysis and empirically derived taxonomy.
Wright, Adam; Ai, Angela; Ash, Joan; Wiesen, Jane F; Hickman, Thu-Trang T; Aaron, Skye; McEvoy, Dustin; Borkowsky, Shane; Dissanayake, Pavithra I; Embi, Peter; Galanter, William; Harper, Jeremy; Kassakian, Steve Z; Ramoni, Rachel; Schreiber, Richard; Sirajuddin, Anwar; Bates, David W; Sittig, Dean F
2018-05-01
To develop an empirically derived taxonomy of clinical decision support (CDS) alert malfunctions. We identified CDS alert malfunctions using a mix of qualitative and quantitative methods: (1) site visits with interviews of chief medical informatics officers, CDS developers, clinical leaders, and CDS end users; (2) surveys of chief medical informatics officers; (3) analysis of CDS firing rates; and (4) analysis of CDS overrides. We used a multi-round, manual, iterative card sort to develop a multi-axial, empirically derived taxonomy of CDS malfunctions. We analyzed 68 CDS alert malfunction cases from 14 sites across the United States with diverse electronic health record systems. Four primary axes emerged: the cause of the malfunction, its mode of discovery, when it began, and how it affected rule firing. Build errors, conceptualization errors, and the introduction of new concepts or terms were the most frequent causes. User reports were the predominant mode of discovery. Many malfunctions within our database caused rules to fire for patients for whom they should not have (false positives), but the reverse (false negatives) was also common. Across organizations and electronic health record systems, similar malfunction patterns recurred. Challenges included updates to code sets and values, software issues at the time of system upgrades, difficulties with migration of CDS content between computing environments, and the challenge of correctly conceptualizing and building CDS. CDS alert malfunctions are frequent. The empirically derived taxonomy formalizes the common recurring issues that cause these malfunctions, helping CDS developers anticipate and prevent CDS malfunctions before they occur or detect and resolve them expediently.
Lahat, Ayelet; Lamm, Connie; Chronis-Tuscano, Andrea; Pine, Daniel S; Henderson, Heather A; Fox, Nathan A
2014-04-01
Behavioral inhibition (BI) is an early childhood temperament characterized by fearful responses to novelty and avoidance of social interactions. During adolescence, a subset of children with stable childhood BI develop social anxiety disorder and concurrently exhibit increased error monitoring. The current study examines whether increased error monitoring in 7-year-old, behaviorally inhibited children prospectively predicts risk for symptoms of social phobia at age 9 years. A total of 291 children were characterized on BI at 24 and 36 months of age. Children were seen again at 7 years of age, when they performed a Flanker task, and event-related potential (ERP) indices of response monitoring were generated. At age 9, self- and maternal-report of social phobia symptoms were obtained. Children high in BI, compared to those low in BI, displayed increased error monitoring at age 7, as indexed by larger (i.e., more negative) error-related negativity (ERN) amplitudes. In addition, early BI was related to later childhood social phobia symptoms at age 9 among children with a large difference in amplitude between ERN and correct-response negativity (CRN) at age 7. Heightened error monitoring predicts risk for later social phobia symptoms in children with high BI. Research assessing response monitoring in children with BI may refine our understanding of the mechanisms underlying risk for later anxiety disorders and inform prevention efforts. Copyright © 2014 American Academy of Child and Adolescent Psychiatry. All rights reserved.
The problem of false positives and false negatives in violent video game experiments.
Ferguson, Christopher J
The problem of false positives and negatives has received considerable attention in behavioral research in recent years. The current paper uses video game violence research as an example of how such issues may develop in a field. Despite decades of research, evidence on whether violent video games (VVGs) contribute to aggression in players has remained mixed. Concerns have been raised in recent years that experiments regarding VVGs may suffer from both "false positives" and "false negatives." The current paper examines this issue in three sets of video game experiments: two sets, on aggression and on prosocial behaviors, identified in meta-analysis, and a third group of recent null studies. Results indicated that studies of VVGs and aggression appear to be particularly prone to false positive results. Studies of VVGs and prosocial behavior, by contrast, are heterogeneous and did not demonstrate any indication of false positive results; however, their heterogeneous nature made it difficult to base solid conclusions on them. By contrast, evidence for false negatives in null studies was limited, and little evidence emerged that null studies lacked power in comparison to those highlighted in past meta-analyses as evidence for effects. These results are considered in light of issues related to false positives and negatives in behavioral science more broadly. Copyright © 2017 Elsevier Ltd. All rights reserved.
van der Meulen, Miriam P; Lansdorp-Vogelaar, Iris; van Heijningen, Else-Mariëtte B; Kuipers, Ernst J; van Ballegooijen, Marjolein
2016-06-01
If some adenomas do not bleed over several years, they will cause systematic false-negative fecal immunochemical test (FIT) results. The long-term effectiveness of FIT screening has been estimated without accounting for such systematic false-negativity. There are now data with which to evaluate this issue. The authors developed one microsimulation model (MISCAN [MIcrosimulation SCreening ANalysis]-Colon) without systematic false-negative FIT results and one model that allowed a percentage of adenomas to be systematically missed in successive FIT screening rounds. Both variants were adjusted to reproduce the first-round findings of the Dutch CORERO FIT screening trial. The authors then compared simulated detection rates in the second screening round with those observed, and adjusted the simulated percentage of systematically missed adenomas to those data. Finally, the authors calculated the impact of systematic false-negative FIT results on the effectiveness of repeated FIT screening. The model without systematic false-negativity simulated higher detection rates in the second screening round than observed. These observed rates could be reproduced when assuming that FIT systematically missed 26% of advanced and 73% of nonadvanced adenomas. To reduce the false-positive rate in the second round to the observed level, the authors also had to assume that 30% of false-positive findings were systematically false-positive. Systematic false-negativity limits the long-term reduction that biennial FIT screening achieves in colorectal cancer incidence (35.6% vs 40.9%) and mortality (55.2% vs 59.0%) among participants. The results of the current study provide convincing evidence, based on the combination of real-life and modeling data, that a percentage of adenomas are systematically missed by repeat FIT screening. This impairs the efficacy of FIT screening. Cancer 2016;122:1680-8. © 2016 American Cancer Society.
Retrieval Failure Contributes to Gist-Based False Recognition
Guerin, Scott A.; Robbins, Clifford A.; Gilmore, Adrian W.; Schacter, Daniel L.
2011-01-01
People often falsely recognize items that are similar to previously encountered items. This robust memory error is referred to as gist-based false recognition. A widely held view is that this error occurs because the details fade rapidly from our memory. Contrary to this view, an initial experiment revealed that, following the same encoding conditions that produce high rates of gist-based false recognition, participants overwhelmingly chose the correct target rather than its related foil when given the option to do so. A second experiment showed that this result is due to increased access to stored details provided by reinstatement of the originally encoded photograph, rather than to increased attention to the details. Collectively, these results suggest that details needed for accurate recognition are, to a large extent, still stored in memory and that a critical factor determining whether false recognition will occur is whether these details can be accessed during retrieval. PMID:22125357
Flanagan, Emma C; Wong, Stephanie; Dutt, Aparna; Tu, Sicong; Bertoux, Maxime; Irish, Muireann; Piguet, Olivier; Rao, Sulakshana; Hodges, John R; Ghosh, Amitabha; Hornberger, Michael
2016-01-01
Episodic memory recall processes in Alzheimer's disease (AD) and behavioral variant frontotemporal dementia (bvFTD) can be similarly impaired, whereas recognition performance is more variable. A potential reason for this variability could be false-positive errors made on recognition trials and whether these errors are due to amnesia per se or a general over-endorsement of recognition items regardless of memory. The current study addressed this issue by analysing recognition performance on the Rey Auditory Verbal Learning Test (RAVLT) in 39 bvFTD, 77 AD and 61 control participants from two centers (India, Australia), as well as disinhibition assessed using the Hayling test. Whereas both AD and bvFTD patients were comparably impaired on delayed recall, bvFTD patients showed intact recognition performance in terms of the number of correct hits. However, both patient groups endorsed significantly more false-positives than controls, and bvFTD and AD patients scored equally poorly on a sensitivity index (correct hits minus false-positives). Furthermore, measures of disinhibition were significantly associated with false positives in both groups, with a stronger relationship with false-positives in bvFTD. Voxel-based morphometry analyses revealed similar neural correlates of false positive endorsement across bvFTD and AD, with both patient groups showing involvement of prefrontal and Papez circuitry regions, such as medial temporal and thalamic regions, and a DTI analysis detected a non-significant trend toward an association between false positives and decreased fornix integrity in bvFTD only. These findings suggest that false-positive errors on recognition tests relate to similar mechanisms in bvFTD and AD, reflecting deficits in episodic memory processes and disinhibition. These findings highlight that current memory tests are not sufficient to accurately distinguish between bvFTD and AD patients.
A revision of the gamma-evaluation concept for the comparison of dose distributions.
Bakai, Annemarie; Alber, Markus; Nüsslin, Fridtjof
2003-11-07
A method for the quantitative four-dimensional (4D) evaluation of discrete dose data based on gradient-dependent local acceptance thresholds is presented. The method takes into account the local dose gradients of a reference distribution for critical appraisal of misalignment and collimation errors. These contribute to the maximum tolerable dose error at each evaluation point to which the local dose differences between comparison and reference data are compared. As shown, the presented concept is analogous to the gamma-concept of Low et al (1998a Med. Phys. 25 656-61) if extended to (3+1) dimensions. The pointwise dose comparisons of the reformulated concept are easier to perform and speed up the evaluation process considerably, especially for fine-grid evaluations of 3D dose distributions. The occurrences of false negative indications due to the discrete nature of the data are reduced with the method. The presented method was applied to film-measured, clinical data and compared with gamma-evaluations. 4D and 3D evaluations were performed. Comparisons prove that 4D evaluations have to be given priority, especially if complex treatment situations are verified, e.g., non-coplanar beam configurations.
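For reference, the gamma concept of Low et al. that this method revises combines, at each point of the reference distribution, a normalized spatial offset and a normalized dose difference, minimized over the comparison distribution. This is the standard formulation; the symbols follow common usage rather than this paper's notation:

\gamma(\mathbf{r}_r) = \min_{\mathbf{r}_e} \sqrt{ \frac{\lVert \mathbf{r}_e - \mathbf{r}_r \rVert^2}{\Delta d_M^2} + \frac{\left[ D_e(\mathbf{r}_e) - D_r(\mathbf{r}_r) \right]^2}{\Delta D_M^2} } , \qquad \gamma(\mathbf{r}_r) \le 1 \Rightarrow \text{pass},

where D_r and D_e are the reference and evaluated dose distributions, \Delta d_M is the distance-to-agreement criterion (e.g., 3 mm), and \Delta D_M is the dose-difference criterion (e.g., 3% of prescribed dose). The gradient-dependent local acceptance thresholds proposed here replace this global minimum search with pointwise dose comparisons.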
Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C
2014-03-01
Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors-discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the brain ERP technique to demonstrate that not only do rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity that increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward prediction errors and the changes in amplitude of these prediction errors at the time of choice presentation and reward delivery. Our results provide further support that the computations that underlie human learning and decision-making follow reinforcement learning principles.
31 CFR 306.55 - Signatures, minor errors and change of name.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false Signatures, minor errors and change... GOVERNING U.S. SECURITIES Assignments by or in Behalf of Individuals § 306.55 Signatures, minor errors and change of name. The owner's signature to an assignment should be in the form in which the security is...
12 CFR 205.8 - Change in terms notice; error resolution notice.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 2 2011-01-01 2011-01-01 false Change in terms notice; error resolution notice. 205.8 Section 205.8 Banks and Banking FEDERAL RESERVE SYSTEM BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM ELECTRONIC FUND TRANSFERS (REGULATION E) § 205.8 Change in terms notice; error resolution notice...
Detecting genotyping errors and describing black bear movement in northern Idaho
Michael K. Schwartz; Samuel A. Cushman; Kevin S. McKelvey; Jim Hayden; Cory Engkjer
2006-01-01
Non-invasive genetic sampling has become a favored tool to enumerate wildlife. Genetic errors, caused by poor quality samples, can lead to substantial biases in numerical estimates of individuals. We demonstrate how the computer program DROPOUT can detect amplification errors (false alleles and allelic dropout) in a black bear (Ursus americanus) dataset collected in...
31 CFR 306.55 - Signatures, minor errors and change of name.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 31 Money and Finance: Treasury 2 2011-07-01 2011-07-01 false Signatures, minor errors and change of name. 306.55 Section 306.55 Money and Finance: Treasury Regulations Relating to Money and Finance... GOVERNING U.S. SECURITIES Assignments by or in Behalf of Individuals § 306.55 Signatures, minor errors and...
12 CFR 205.8 - Change in terms notice; error resolution notice.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 2 2010-01-01 2010-01-01 false Change in terms notice; error resolution notice. 205.8 Section 205.8 Banks and Banking FEDERAL RESERVE SYSTEM BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM ELECTRONIC FUND TRANSFERS (REGULATION E) § 205.8 Change in terms notice; error resolution notice...
Regulation of error-prone translesion synthesis by Spartan/C1orf124
Kim, Myoung Shin; Machida, Yuka; Vashisht, Ajay A.; Wohlschlegel, James A.; Pang, Yuan-Ping; Machida, Yuichi J.
2013-01-01
Translesion synthesis (TLS) employs low fidelity polymerases to replicate past damaged DNA in a potentially error-prone process. Regulatory mechanisms that prevent TLS-associated mutagenesis are unknown; however, our recent studies suggest that the PCNA-binding protein Spartan plays a role in suppression of damage-induced mutagenesis. Here, we show that Spartan negatively regulates error-prone TLS that is dependent on POLD3, the accessory subunit of the replicative DNA polymerase Pol δ. We demonstrate that the putative zinc metalloprotease domain SprT in Spartan directly interacts with POLD3 and contributes to suppression of damage-induced mutagenesis. Depletion of Spartan induces complex formation of POLD3 with Rev1 and the error-prone TLS polymerase Pol ζ, and elevates mutagenesis that relies on POLD3, Rev1 and Pol ζ. These results suggest that Spartan negatively regulates POLD3 function in Rev1/Pol ζ-dependent TLS, revealing a previously unrecognized regulatory step in error-prone TLS. PMID:23254330
Bruijn, Merel M C; Hermans, Frederik J R; Vis, Jolande Y; Wilms, Femke F; Oudijk, Martijn A; Kwee, Anneke; Porath, Martina M; Oei, Guid; Scheepers, Hubertina C J; Spaanderman, Marc E A; Bloemenkamp, Kitty W M; Haak, Monique C; Bolte, Antoinette C; Vandenbussche, Frank P H A; Woiski, Mallory D; Bax, Caroline J; Cornette, Jérôme M J; Duvekot, Johannes J; Bijvank, Bas W A N I J; van Eyck, Jim; Franssen, Maureen T M; Sollie, Krystyna M; van der Post, Joris A M; Bossuyt, Patrick M M; Kok, Marjolein; Mol, Ben W J; van Baaren, Gert-Jan
2017-02-01
Objective We assessed the influence of external factors on false-positive, false-negative, and invalid fibronectin results in the prediction of spontaneous delivery within 7 days. Methods We studied symptomatic women between 24 and 34 weeks' gestational age. We performed uni- and multivariable logistic regression to estimate the effect of external factors (vaginal soap, digital examination, transvaginal sonography, sexual intercourse, vaginal bleeding) on the risk of false-positive, false-negative, and invalid results, using spontaneous delivery within 7 days as the outcome. Results Out of 708 women, 237 (33%) had a false-positive result; none of the factors showed a significant association. Vaginal bleeding increased the proportion of positive fetal fibronectin (fFN) results, but was significantly associated with a lower risk of false-positive test results (odds ratio [OR], 0.22; 95% confidence interval [CI], 0.12-0.39). Ten women (1%) had a false-negative result; none of the investigated factors was significantly associated with a higher risk of false-negative results. Twenty-one tests (3%) were invalid; only vaginal bleeding showed a significant association (OR, 4.5; 95% CI, 1.7-12). Conclusion The effect of external factors on the performance of qualitative fFN testing is limited, with vaginal bleeding as the only factor that reduces its validity.
From feedback- to response-based performance monitoring in active and observational learning.
Bellebaum, Christian; Colosio, Marco
2014-09-01
Humans can adapt their behavior by learning from the consequences of their own actions or by observing others. Gradual active learning of action-outcome contingencies is accompanied by a shift from feedback- to response-based performance monitoring. This shift is reflected by complementary learning-related changes in two ACC-driven ERP components, the feedback-related negativity (FRN) and the error-related negativity (ERN), which have both been suggested to signal events "worse than expected," that is, a negative prediction error. Although recent research has identified comparable components for observed behavior and outcomes (observational ERN and FRN), it is as yet unknown whether these components are similarly modulated by prediction errors and thus also reflect behavioral adaptation. In this study, two groups of 15 participants learned action-outcome contingencies either actively or by observation. In active learners, FRN amplitude for negative feedback decreased and ERN amplitude in response to erroneous actions increased with learning, whereas the observational ERN and FRN in observational learners did not exhibit learning-related changes. Learning performance, assessed in test trials without feedback, was comparable between groups, as was the ERN following actively performed errors during test trials. In summary, the results show that action-outcome associations can be learned similarly well actively and by observation. The mechanisms involved appear to differ, with the FRN in active learning reflecting the integration of information about own actions and the accompanying outcomes.
Evaluation of the importance of time-frequency contributions to speech intelligibility in noise
Yu, Chengzhu; Wójcicki, Kamil K.; Loizou, Philipos C.; Hansen, John H. L.; Johnson, Michael T.
2014-01-01
Recent studies on binary masking techniques assume that each time-frequency (T-F) unit contributes an equal amount to the overall intelligibility of speech. The present study demonstrated that the importance of each T-F unit to speech intelligibility varies in accordance with speech content. Specifically, T-F units are categorized into two classes, speech-present T-F units and speech-absent T-F units. Results indicate that the importance of each speech-present T-F unit to speech intelligibility is highly related to the loudness of its target component, while the importance of each speech-absent T-F unit varies according to the loudness of its masker component. Two types of mask errors are also considered: miss and false alarm errors. Consistent with previous work, false alarm errors are shown to be more harmful to speech intelligibility than miss errors when the mixture signal-to-noise ratio (SNR) is below 0 dB. However, the relative importance of the two types of error depends on the SNR level of the input speech signal. Based on these observations, a mask-based objective measure, the loudness weighted hit-false, is proposed for predicting speech intelligibility. The proposed objective measure shows significantly higher correlation with intelligibility compared to two existing mask-based objective measures. PMID:24815280
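The bookkeeping behind such mask-based measures reduces to hit and false-alarm rates over the time-frequency grid. The sketch below computes a weighted HIT minus FA score from an ideal and an estimated binary mask; the uniform default weights are a placeholder, not the specific loudness weighting proposed in the paper.

import numpy as np

def weighted_hit_minus_fa(ideal_mask, est_mask, weights=None):
    # ideal_mask: 1 where the T-F unit is speech-present (target-dominant).
    # est_mask:   the binary mask being evaluated.
    # weights:    per-unit importance (e.g., loudness-based); uniform if None.
    ideal = np.asarray(ideal_mask, bool)
    est = np.asarray(est_mask, bool)
    w = np.ones(ideal.shape) if weights is None else np.asarray(weights, float)
    hit = w[ideal & est].sum() / w[ideal].sum()    # speech units kept
    fa = w[~ideal & est].sum() / w[~ideal].sum()   # noise units falsely kept
    return hit - fa

rng = np.random.default_rng(0)
ideal = rng.random((64, 100)) < 0.3                # toy 64-band, 100-frame mask
est = ideal ^ (rng.random(ideal.shape) < 0.1)      # flip 10% of decisions
print(f"HIT - FA = {weighted_hit_minus_fa(ideal, est):.3f}")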
Saingam, Prakit; Li, Bo; Yan, Tao
2018-06-01
DNA-based molecular detection of microbial pathogens in complex environments is still plagued by sensitivity, specificity and robustness issues. We propose to address these issues by viewing them as inadvertent consequences of requiring specific and adequate amplification (SAA) of target DNA molecules by current PCR methods. Using the invA gene of Salmonella as the model system, we investigated whether next generation sequencing (NGS) can be used to directly detect target sequences in false-negative PCR reactions (PCR-NGS) in order to remove the SAA requirement from PCR. False-negative PCR and qPCR reactions were first created using serial dilutions of laboratory-prepared Salmonella genomic DNA and then analyzed directly by NGS. Target invA sequences were detected in all false-negative PCR and qPCR reactions, which lowered the method detection limit to near the theoretical minimum of single gene copy detection. The capability of the PCR-NGS approach in correcting false negativity was further tested and confirmed under more environmentally relevant conditions using Salmonella-spiked stream water and sediment samples. Finally, the PCR-NGS approach was applied to ten urban stream water samples and detected invA sequences in eight samples that would otherwise be deemed Salmonella negative. Analysis of the non-target sequences in the false-negative reactions helped to identify primer dimer-like short sequences as the main cause of the false negativity. Together, the results demonstrated that the PCR-NGS approach can significantly improve method sensitivity, correct false-negative detections, and enable sequence-based analysis for failure diagnostics in complex environmental samples. Copyright © 2018 Elsevier B.V. All rights reserved.
Reconsolidation from negative emotional pictures: is successful retrieval required?
Finn, Bridgid; Roediger, Henry L; Rosenzweig, Emily
2012-10-01
Finn and Roediger (Psychological Science 22:781-786, 2011) found that when a negative emotional picture was presented immediately after a successful retrieval, later test performance was enhanced as compared to when a neutral picture or a blank screen had been shown. This finding implicates the period immediately following retrieval as playing an important role in determining later retention via reconsolidation. In two new experiments, we investigated whether successful retrieval was required to show the enhancing effect of negative emotion on later recall. In both experiments, the participants studied Swahili-English vocabulary pairs, took an intervening cued-recall test, and were given a final cued-recall test on all items. In Experiment 1, we tested a distinctiveness explanation of the effect. The results showed that neither presentation of a negative picture just prior to successful retrieval nor presentation of a positive picture after successful retrieval produced the enhancing effect that was seen when negative pictures were presented after successful retrieval. In Experiment 2, we tested whether the enhancing effect would occur when a negative picture followed an unsuccessful retrieval attempt with feedback, and a larger enhancement effect occurred after errors of commission than after errors of omission. These results indicate that effort in retrieving is critical to the enhancing effect shown with negative pictures; whether the target is produced by the participant or given by an external source following a commission error does not matter. We interpret these results as support for semantic enrichment as a key element in producing the enhancing effect of negative pictures that are presented after a retrieval attempt.
Singh, Gurmukh
2017-01-01
Background The serum free light chain assay (SFLCA) with its κ/λ ratio, and protein electrophoretic methods, are used in the diagnosis and monitoring of monoclonal gammopathies. Methods Results for serum free light chains, serum and urine protein electrophoreses and immunofixation electrophoreses in 468 patients with a diagnosis of monoclonal gammopathy were compared. The results of the two methods were graded as concordant, non-concordant or discordant with the established diagnoses to assess the relative performance of the methods. Results of the κ/λ ratio in samples with monoclonal protein detectable by electrophoretic methods were also analyzed. Results Protein electrophoresis results were concordant with the established diagnoses significantly more often than the κ/λ ratio. The false negative rate for the κ/λ ratio was higher than that for electrophoretic methods. The κ/λ ratio was falsely negative in about 27% of the 1,860 samples with detectable monoclonal immunoglobulin. The false negative rate was higher in lesions with lambda chains (32%) than in those with kappa chains (24%). The false negative rate for the κ/λ ratio was over 55% in samples with monoclonal gammopathy of undetermined significance. Even at first encounter, the false negative rates for κ/λ ratios for monoclonal gammopathy of undetermined significance, smoldering myeloma and multiple myeloma were 66.98%, 23.08%, and 30.15%, respectively, with the false negative rate for lambda chain lesions being higher. Conclusions Electrophoretic studies of serum and urine are superior to the SFLCA and κ/λ ratio. An abnormal κ/λ ratio, per se, is not diagnostic of monoclonal gammopathy, and a normal κ/λ ratio does not exclude monoclonal gammopathy. False negative rates for lesions with lambda chains are higher than those for lesions with kappa chains. Electrophoretic studies of urine are underutilized. The clinical usefulness and medical necessity of the SFLCA and κ/λ ratio in routine clinical testing are therefore questionable. PMID:27924175
Emotion blocks the path to learning under stereotype threat.
Mangels, Jennifer A; Good, Catherine; Whiteman, Ronald C; Maniscalco, Brian; Dweck, Carol S
2012-02-01
Gender-based stereotypes undermine females' performance on challenging math tests, but how do they influence their ability to learn from the errors they make? Females under stereotype threat or non-threat were presented with accuracy feedback after each problem on a GRE-like math test, followed by an optional interactive tutorial that provided step-wise problem-solving instruction. Event-related potentials tracked the initial detection of the negative feedback following errors [feedback related negativity (FRN), P3a], as well as any subsequent sustained attention/arousal to that information [late positive potential (LPP)]. Learning was defined as success in applying tutorial information to correction of initial test errors on a surprise retest 24-h later. Under non-threat conditions, emotional responses to negative feedback did not curtail exploration of the tutor, and the amount of tutor exploration predicted learning success. In the stereotype threat condition, however, greater initial salience of the failure (FRN) predicted less exploration of the tutor, and sustained attention to the negative feedback (LPP) predicted poor learning from what was explored. Thus, under stereotype threat, emotional responses to negative feedback predicted both disengagement from learning and interference with learning attempts. We discuss the importance of emotion regulation in successful rebound from failure for stigmatized groups in stereotype-salient environments.
Holmes, Avram J; Pizzagalli, Diego A
2007-02-01
Emerging evidence suggests that depression is associated with executive dysfunction, particularly after committing errors or receiving negative performance feedback. To test this hypothesis, 57 participants performed two executive tasks known to elicit errors (the Simon and Stroop Tasks) during positive or negative performance feedback. Participants with elevated depressive symptoms (Beck Depression Inventory scores >or= 13) were characterized by impaired posterror and postconflict performance adjustments, especially during emotionally negative task-related feedback. Additionally, for both tasks, depressive symptoms were inversely related to postconflict reaction time adjustments following negative, but not positive, feedback. These findings suggest that subclinical depression is associated with impairments in behavioral adjustments after internal (perceived failure) and external feedback about deficient task performance. (c) 2007 APA, all rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven
The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias) and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides: generic skill score; percent better. Robust measures of scale, including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
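The 2x2 scores named above have standard textbook definitions in forecast verification. The snippet below sketches those formulas independently; it is not the package's actual class interface, and the function name and example counts are hypothetical.

def binary_verification_metrics(hits, misses, false_alarms, correct_negatives):
    # Standard 2x2 contingency-table scores (a = hits, b = false alarms,
    # c = misses, d = correct negatives).
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    return {
        "POD": a / (a + c),                # probability of detection
        "POFD": b / (b + d),               # probability of false detection
        "FAR": b / (a + b),                # false alarm ratio
        "TS": a / (a + b + c),             # threat score
        "HSS": 2 * (a * d - b * c)         # Heidke skill score
               / ((a + c) * (c + d) + (a + b) * (b + d)),
        "PSS": a / (a + c) - b / (b + d),  # Peirce skill score (POD - POFD)
    }

print(binary_verification_metrics(hits=82, misses=38,
                                  false_alarms=23, correct_negatives=222))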
[Roaming through methodology. XXXII. False test results].
van der Weijden, T; van den Akker, M
2001-05-12
The number of requests for diagnostic tests is rising. This leads to a higher chance of false test results. The false-negative proportion of a test is the proportion of negative test results among the diseased subjects. The false-positive proportion is the proportion of positive test results among the healthy subjects. The calculation of the false-positive proportion is often incorrect. For example, instead of 1 minus the specificity it is calculated as 1 minus the positive predictive value. This can lead to incorrect decision-making with respect to the application of the test. Physicians must apply diagnostic tests in such a way that the risk of false test results is minimal. The patient should be aware that a perfectly conclusive diagnostic test is rare in medical practice, and should more often be informed of the implications of false-positive and false-negative test results.
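The confusion the authors describe is easy to make concrete. With sensitivity 0.90, specificity 0.95, and 10% disease prevalence, the false-positive proportion is 1 - specificity = 0.05, whereas 1 minus the positive predictive value is about 0.33, more than six times larger. A minimal sketch with these illustrative numbers:

def false_positive_proportion(specificity):
    # Positive test results among healthy subjects: 1 - specificity.
    return 1 - specificity

def one_minus_ppv(sensitivity, specificity, prevalence):
    # The commonly confused quantity: disease-free subjects among
    # all positive test results (1 - positive predictive value).
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return false_pos / (true_pos + false_pos)

print(false_positive_proportion(0.95))            # 0.05
print(round(one_minus_ppv(0.90, 0.95, 0.10), 2))  # 0.33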
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Will OPM compute the lost earnings if my... compute the lost earnings if my qualifying retirement coverage error was previously corrected and I made... coverage error was previously corrected, OPM will compute the lost earnings on your make-up contributions...
Huff, Mark J; Umanath, Sharda
2018-06-01
In 2 experiments, we assessed age-related suggestibility to additive and contradictory misinformation (i.e., remembering of false details from an external source). After reading a fictional story, participants answered questions containing misleading details that were either additive (misleading details that supplemented an original event) or contradictory (errors that changed original details). On a final test, suggestibility was greater for additive than for contradictory misinformation, and older adults endorsed fewer false contradictory details than younger adults. To mitigate suggestibility in Experiment 2, participants were warned about potential errors, instructed to detect errors, or instructed to detect errors after exposure to examples of additive and contradictory details. Again, suggestibility to additive misinformation was greater than to contradictory, and older adults endorsed less contradictory misinformation. Only after detection instructions with misinformation examples were younger adults able to reduce contradictory misinformation effects, and they reduced these effects to the level of older adults. Additive misinformation, however, was immune to all warning and detection instructions. Thus, older adults were less susceptible to contradictory misinformation errors, and younger adults could match this misinformation rate when warning/detection instructions were strong. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Modulation of the error-related negativity by response conflict.
Danielmeier, Claudia; Wessel, Jan R; Steinhauser, Marco; Ullsperger, Markus
2009-11-01
An arrow version of the Eriksen flanker task was employed to investigate the influence of conflict on the error-related negativity (ERN). The degree of conflict was modulated by varying the distance between flankers and the target arrow (CLOSE and FAR conditions). Error rates and reaction time data from a behavioral experiment were used to adapt a connectionist model of this task. This model was based on the conflict monitoring theory and simulated behavioral and event-related potential data. The computational model predicted an increased ERN amplitude in FAR incompatible (the low-conflict condition) compared to CLOSE incompatible errors (the high-conflict condition). A subsequent ERP experiment confirmed the model predictions. The computational model explains this finding with larger post-response conflict in far trials. In addition, data and model predictions of the N2 and the LRP support the conflict interpretation of the ERN.
Pailing, Patricia E; Segalowitz, Sidney J
2004-01-01
This study examines changes in the error-related negativity (ERN/Ne) related to motivational incentives and personality traits. ERPs were gathered while adults completed a four-choice letter task during four motivational conditions. Monetary incentives for finger and hand accuracy were altered across motivation conditions to either be equal or favor one type of accuracy over the other in a 3:1 ratio. Larger ERN/Ne amplitudes were predicted with increased incentives, with personality moderating this effect. Results were as expected: Individuals higher on conscientiousness displayed smaller motivation-related changes in the ERN/Ne. Similarly, those low on neuroticism had smaller effects, with the effect of Conscientiousness absent after accounting for Neuroticism. These results emphasize an emotional/evaluative function for the ERN/Ne, and suggest that the ability to selectively invest in error monitoring is moderated by underlying personality.
Dong, YiJie; Mao, MinJing; Zhan, WeiWei; Zhou, JianQiao; Zhou, Wei; Yao, JieJie; Hu, YunYun; Wang, Yan; Ye, TingJun
2018-06-01
Our goal was to assess the diagnostic efficacy of ultrasound (US)-guided fine-needle aspiration (FNA) of thyroid nodules according to size and US features. A retrospective correlation was made with 1745 whole thyroidectomy and hemithyroidectomy specimens with preoperative US-guided FNA results. All cases were divided into 5 groups according to nodule size (≤5, 5.1-10, 10.1-15, 15.1-20, and >20 mm). For target nodules, static images and cine clips of conventional US and color Doppler were obtained. Ultrasound images were reviewed and evaluated by two radiologists, each with at least 5 years of US experience, who were blinded to the pathology results; disagreements were resolved by consensus. The Bethesda category I rate was higher in nodules larger than 15 mm (P < .05). The diagnostic accuracy was best in nodules of 5 to 10 mm in diameter. The sensitivity, accuracy, positive predictive value (PPV), and likelihood ratio (LR) for negative US-guided FNA results were better in nodules with a size range of 5 to 15 mm. The specificity, negative predictive value (NPV), and LR for positive results and the Youden index rose with increasing nodule size. Seventeen false-positive and 60 false-negative results were found in this study. The false-negative rate rose with increasing nodule size; however, the false-positive rate was highest in the group containing the smallest nodules. Nodules with circumscribed margins and those that were nonsolid and nonhypoechoic and had no microcalcifications correlated with Bethesda I FNA results. Nodules with circumscribed margins and those that were nonsolid, heterogeneous, and nonhypoechoic and had increased vascularity correlated with false-negative FNA results. Borders correlated with Bethesda I, false-negative, and false-positive FNA results. Tiny nodules (≤5 mm) with obscure borders tended to yield false-positive FNA results. Large nodules (>20 mm) with several US features tended to yield false-negative FNA results. © 2017 by the American Institute of Ultrasound in Medicine.
Recognition errors suggest fast familiarity and slow recollection in rhesus monkeys
Basile, Benjamin M.; Hampton, Robert R.
2013-01-01
One influential model of recognition posits two underlying memory processes: recollection, which is detailed but relatively slow, and familiarity, which is quick but lacks detail. Most of the evidence for this dual-process model in nonhumans has come from analyses of receiver operating characteristic (ROC) curves in rats, but whether ROC analyses can demonstrate dual processes has been repeatedly challenged. Here, we present independent converging evidence for the dual-process model from analyses of recognition errors made by rhesus monkeys. Recognition choices were made in three different ways depending on processing duration. Short-latency errors were disproportionately false alarms to familiar lures, suggesting control by familiarity. Medium-latency responses were less likely to be false alarms and were more accurate, suggesting onset of a recollective process that could correctly reject familiar lures. Long-latency responses were guesses. A response deadline increased false alarms, suggesting that limiting processing time weakened the contribution of recollection and strengthened the contribution of familiarity. Together, these findings suggest fast familiarity and slow recollection in monkeys, that monkeys use a “recollect to reject” strategy to countermand false familiarity, and that primate recognition performance is well-characterized by a dual-process model consisting of recollection and familiarity. PMID:23864646
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernatowicz, K., E-mail: kingab@student.ethz.ch; Knopf, A.; Lomax, A.
Purpose: Prospective respiratory-gated 4D CT has been shown to reduce tumor image artifacts by up to 50% compared to conventional 4D CT. However, to date no studies have quantified the impact of gated 4D CT on normal lung tissue imaging, which is important in performing dose calculations based on accurate estimates of lung volume and structure. To determine the impact of gated 4D CT on thoracic image quality, the authors developed a novel simulation framework incorporating a realistic deformable digital phantom driven by patient tumor motion patterns. Based on this framework, the authors test the hypothesis that respiratory-gated 4D CT can significantly reduce lung imaging artifacts. Methods: Our simulation framework synchronizes the 4D extended cardiac torso (XCAT) phantom with tumor motion data in a quasi real-time fashion, allowing simulation of three 4D CT acquisition modes featuring different levels of respiratory feedback: (i) “conventional” 4D CT that uses a constant imaging and couch-shift frequency, (ii) “beam paused” 4D CT that interrupts imaging to avoid oversampling at a given couch position and respiratory phase, and (iii) “respiratory-gated” 4D CT that triggers acquisition only when the respiratory motion fulfills phase-specific displacement gating windows based on prescan breathing data. Our framework generates a set of ground truth comparators, representing the average XCAT anatomy during beam-on for each of ten respiratory phase bins. Based on this framework, the authors simulated conventional, beam-paused, and respiratory-gated 4D CT images using tumor motion patterns from seven lung cancer patients across 13 treatment fractions, with a simulated 5.5 cm³ spherical lesion. Normal lung tissue image quality was quantified by comparing simulated and ground truth images in terms of overall mean square error (MSE) intensity difference, threshold-based lung volume error, and fractional false positive/false negative rates. Results: Averaged across all simulations and phase bins, respiratory-gating reduced overall thoracic MSE by 46% compared to conventional 4D CT (p ∼ 10⁻¹⁹). Gating leads to small but significant (p < 0.02) reductions in lung volume errors (1.8%–1.4%), false positives (4.0%–2.6%), and false negatives (2.7%–1.3%). These percentage reductions correspond to gating reducing image artifacts by 24–90 cm³ of lung tissue. Similar to earlier studies, gating reduced patient image dose by up to 22%, but with scan time increased by up to 135%. Beam paused 4D CT did not significantly impact normal lung tissue image quality, but did yield similar dose reductions as for respiratory-gating, without the added cost in scanning time. Conclusions: For a typical 6 L lung, respiratory-gated 4D CT can reduce image artifacts affecting up to 90 cm³ of normal lung tissue compared to conventional acquisition. This image improvement could have important implications for dose calculations based on 4D CT. Where image quality is less critical, beam paused 4D CT is a simple strategy to reduce imaging dose without sacrificing acquisition time.
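The image-quality statistics used in this study are simple to reproduce in outline: a voxelwise mean square error against the ground-truth phantom plus a confusion-style comparison of thresholded lung masks. A sketch under assumed inputs follows; the Hounsfield-unit threshold, array sizes, and noise model are placeholders, not the study's actual settings.

import numpy as np

def lung_image_metrics(simulated, ground_truth, lung_threshold=-500.0):
    # Voxels below lung_threshold (assumed Hounsfield units) count as lung.
    sim = np.asarray(simulated, float)
    ref = np.asarray(ground_truth, float)
    mse = np.mean((sim - ref) ** 2)
    sim_lung, ref_lung = sim < lung_threshold, ref < lung_threshold
    ref_volume = ref_lung.sum()
    volume_error = abs(int(sim_lung.sum()) - int(ref_volume)) / ref_volume
    false_pos = (sim_lung & ~ref_lung).sum() / ref_volume  # spurious lung voxels
    false_neg = (~sim_lung & ref_lung).sum() / ref_volume  # missing lung voxels
    return mse, volume_error, false_pos, false_neg

rng = np.random.default_rng(1)
ref = rng.uniform(-1000.0, 100.0, size=(32, 32, 32))  # toy CT volume
sim = ref + rng.normal(0.0, 60.0, size=ref.shape)     # artifact-corrupted copy
print(lung_image_metrics(sim, ref))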
Understanding the Effect of Workload on Automation Use for Younger and Older Adults
McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.
2018-01-01
Objective This study examined how individuals, younger and older, interacted with an imperfect automated system. The impact of workload on performance and automation use was also investigated. Background Automation is used in situations characterized by varying levels of workload. As automated systems spread to domains such as transportation and the home, a diverse population of users will interact with automation. Research is needed to understand how different segments of the population use automation. Method Workload was systematically manipulated to create three levels (low, moderate, high) in a dual-task scenario in which participants interacted with a 70% reliable automated aid. Two experiments were conducted to assess automation use for younger and older adults. Results Both younger and older adults relied on the automation more than they complied with it. Among younger adults, high workload led to poorer performance and higher compliance, even when that compliance was detrimental. Older adults’ performance was negatively affected by workload, but their compliance and reliance were unaffected. Conclusion Younger and older adults were both able to use and double-check an imperfect automated system. Workload affected how younger adults complied with automation, particularly with regard to detecting automation false alarms. Older adults tended to comply and rely at fairly high rates overall, and this did not change with increased workload. Application Training programs for imperfect automated systems should vary workload and provide feedback about error types, and strategies for identifying errors. The ability to identify automation errors varies across individuals, thereby necessitating training. PMID:22235529
Gómez Palacios, Angel; Gómez Zábala, Jesús; Gutiérrez, María Teresa; Expósito, Amaya; Barrios, Borja; Zorraquino, Angel; Taibo, Miguel Angel; Iturburu, Ignacio
2006-12-01
1. To assess the sensitivity of scintigraphy using methoxy isobutyl isonitrile (MIBI). 2. To compare its resolution with that of ultrasound (US) and computerized axial tomography (CAT). 3. To use its diagnostic reliability to determine whether selective approaches can be used to treat hyperparathyroidism (HPT). A study of 76 patients who underwent surgery for HPT between 1996 and 2005 was performed. MIBI scintigraphy and cervical US were used for whole-body scanning in all patients; CAT was used in 47 patients. Intraoperative and postoperative biopsies were used for final evaluation of the tests, after visualization and surgical extirpation. The results of scintigraphy were positive in 65 patients (85.52%). The diagnosis was correct in all of the single images. Multiple images were due to hyperplasia and parathyroid adenomas with thyroid disease (5.2%). Three images, incorrectly classified as negative (3.94%), were positive. The sensitivity of US was 63% and allowed detection of three MIBI-negative adenomas (4%). CAT was less sensitive (55%), but detected a further three MIBI-negative adenomas (4%). 1. The sensitivity of MIBI reached 89.46%. In the absence of thyroid nodules, MIBI diagnosed 100% of single lesions. Pathological thyroid processes produced false-positive results (5.2%) and there were diagnostic errors (4%). 2. MIBI scintigraphy was more sensitive than US and CAT. 3. Positive, single image scintigraphy allows a selective cervical approach. US and CAT may help to save a further 8% of patients (with negative scintigraphy).
"Lost in a shopping mall" -- a breach of professional ethics.
Crook, Lynn S; Dean, Martha C
1999-01-01
The "lost in a shopping mall" study has been cited to support claims that psychotherapists can implant memories of false autobiographical information of childhood trauma in their patients. The mall study originated in 1991 as 5 pilot experiments involving 3 children and 2 adult participants. The University of Washington Human Subjects Committee granted approval for the mall study on August 10, 1992. The preliminary results with the 5 pilot subjects were announced 4 days laters. An analysis of the mall study shows that beyond the external misrepresentions, internal scientific methodological errors cast doubt on the validity of the claims that have been attributed to the mall study within scholarly and legal arenas. The minimal involvement -- or, in some cases, negative impact -- of collegial consultation, acadmic supervision, and peer review throughout the evolution of the mall study are reviewed.
Larrabee, Glenn J
2014-01-01
Bilder, Sugar, and Hellemann (2014 this issue) contend that empirical support is lacking for use of multiple performance validity tests (PVTs) in evaluation of the individual case, differing from the conclusions of Davis and Millis (2014), and Larrabee (2014), who found no substantial increase in false positive rates using a criterion of failure of ≥ 2 PVTs and/or Symptom Validity Tests (SVTs) out of multiple tests administered. Reconsideration of data presented in Larrabee (2014) supports a criterion of ≥ 2 out of up to 7 PVTs/SVTs, as keeping false positive rates close to and in most cases below 10% in cases with bona fide neurologic, psychiatric, and developmental disorders. Strategies to minimize risk of false positive error are discussed, including (1) adjusting individual PVT cutoffs or criterion for number of PVTs failed, for examinees who have clinical histories placing them at risk for false positive identification (e.g., severe TBI, schizophrenia), (2) using the history of the individual case to rule out conditions known to result in false positive errors, (3) using normal performance in domains mimicked by PVTs to show that sufficient native ability exists for valid performance on the PVT(s) that have been failed, and (4) recognizing that as the number of PVTs/SVTs failed increases, the likelihood of valid clinical presentation decreases, with a corresponding increase in the likelihood of invalid test performance and symptom report.
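The logic behind a ≥2-of-7 criterion can be illustrated with a simple binomial calculation. This is only a sketch under an independence assumption that real PVT batteries do not satisfy (failures are correlated, which is partly why empirical false positive rates stay low), and the 5% per-test rate is an invented illustration, not Larrabee's data.

```python
from scipy.stats import binom

def p_at_least_k_failures(n_tests, k, per_test_fpr):
    """P(a valid examinee fails >= k of n independent validity tests)."""
    return float(binom.sf(k - 1, n_tests, per_test_fpr))

# failing any single test out of 7 is common; failing two is much rarer:
print(p_at_least_k_failures(7, 1, 0.05))  # ~0.30
print(p_at_least_k_failures(7, 2, 0.05))  # ~0.044, i.e. below 10%
```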
Schaeffel, Frank; Mathis, Ute; Brüggemann, Gunther
2007-07-01
To provide a framework for typical refractive development, as measured without cycloplegia with a commercial infrared photorefractor, and to evaluate the usefulness of screening for refractive errors, we retrospectively analyzed the data of a large number of unselected children of different ages in a pediatric practice in Tuebingen, Germany. During the standard regular preventive examinations that are performed in 80% to 90% of young children in Germany by a pediatrician (the German "U1 to U9" system), 736 children were also measured with the first-generation PowerRefractor (made by MCS, Reutlingen, Germany, but no longer available in this version). Of those, 172 were also measured with +3 D spectacles to find out whether this helps detect hyperopia. Children with more than +2 D of hyperopia or astigmatism, more than 1.5 D of anisometropia, or more than 1 D of myopia in the second year of life were referred to an eye care specialist. The actions taken by the eye care specialist were used to evaluate the merits of the screening. The average noncycloplegic spherical refractive errors in the right eyes declined linearly from +0.93 to +0.62 D over the first 6 years (p < 0.001), between 1.5 and 0.5 D less hyperopic than in published studies with cycloplegic retinoscopy. As expected, +3 D spectacle lenses moved the refractions in the myopic direction, but this shift was not smaller in hyperopic children. The average negative cylinder magnitudes declined from -0.89 to -0.48 D (linear regression: p < 0.001). The J0 components displayed high correlations in both eyes (p < 0.001) but the J45 components did not. The average absolute anisometropia (difference of spheres) declined from 0.37 to 0.23 D (linear regression: p < 0.001). Of the 736 children, 85 (11.5%) were referred to an eye care specialist. Of these, 52 (61.2%) received spectacles, 14 (16.4%) were identified as "at risk" and remained under observation, and 18 (21.2%) were considered "false-positive." Noncycloplegic photorefraction provides considerably less hyperopic readings than retinoscopy under cycloplegia. Additional refractions performed through binocular +3 D lenses did not facilitate detection of hyperopia. With the referral criteria above, 11% of the children were referred to an eye care specialist, but with a 20% false-positive rate. The screening had some power to identify children at risk, but the number of false negatives remained uncertain.
True detection limits in an experimental linearly heteroscedastic system. Part 2
NASA Astrophysics Data System (ADS)
Voigtman, Edward; Abraham, Kevin T.
2011-11-01
Despite markedly different processing of the experimental fluorescence detection data presented in Part 1, essentially the same estimates were obtained for the true theoretical Currie decision levels (YC and XC) and true Currie detection limits (YD and XD). The obtained experimental values, for 5% probability of false positives and 5% probability of false negatives, were YC = 56.0 mV, YD = 125 mV, XC = 0.132 μg/mL and XD = 0.293 μg/mL. For 5% probability of false positives and 1% probability of false negatives, the obtained detection limits were YD = 158 mV and XD = 0.371 μg/mL. Furthermore, by using bootstrapping methodology on the experimental data for the standards and the analytical blank, it was possible to validate previously published experimental domain expressions for the decision levels (yC and xC) and detection limits (yD and xD). This was demonstrated by testing the generated decision levels and detection limits for their performance in regard to false positives and false negatives. In every case, the obtained numbers of false negatives and false positives were as specified a priori.
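For orientation, Currie's scheme in its simplest homoscedastic, known-parameter form places the decision level at the blank mean plus z(1-α) blank standard deviations and the detection limit a further z(1-β) standard deviations up, with content-domain values obtained through the calibration slope. A minimal sketch of that textbook case only, not the heteroscedastic treatment of this paper; all inputs are placeholders.

```python
from scipy.stats import norm

def currie_limits(mu_blank, sigma_blank, slope, alpha=0.05, beta=0.05):
    """Currie decision level and detection limit, homoscedastic case with
    known blank parameters (a simplification of the paper's model)."""
    y_C = mu_blank + norm.ppf(1 - alpha) * sigma_blank  # decision level
    y_D = y_C + norm.ppf(1 - beta) * sigma_blank        # detection limit
    x_C = (y_C - mu_blank) / slope                      # content domain
    x_D = (y_D - mu_blank) / slope
    return y_C, y_D, x_C, x_D
```

Lowering β from 5% to 1% raises only the detection limit, not the decision level, mirroring the pattern in the reported values.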
Joo, Yeon Kyoung; Lee-Won, Roselyn J
2016-10-01
For members of a group negatively stereotyped in a domain, making mistakes can aggravate the influence of stereotype threat because negative stereotypes often blame target individuals and attribute the outcome to their lack of ability. Virtual agents offering real-time error feedback may influence performance under stereotype threat by shaping the performers' attributional perception of errors they commit. We explored this possibility with female drivers, considering the prevalence of the "women-are-bad-drivers" stereotype. Specifically, we investigated how in-vehicle voice agents offering error feedback based on responsibility attribution (internal vs. external) and outcome attribution (ability vs. effort) influence female drivers' performance under stereotype threat. In addressing this question, we conducted an experiment in a virtual driving simulation environment that provided moment-to-moment error feedback messages. Participants performed a challenging driving task and made mistakes preprogrammed to occur. Results showed that the agent's error feedback with outcome attribution moderated the stereotype threat effect on driving performance. Participants under stereotype threat had a smaller number of collisions when the errors were attributed to effort than to ability. In addition, outcome attribution feedback moderated the effect of responsibility attribution on driving performance. Implications of these findings are discussed.
Acetaminophen attenuates error evaluation in cortex.
Randles, Daniel; Kam, Julia W Y; Heine, Steven J; Inzlicht, Michael; Handy, Todd C
2016-06-01
Acetaminophen has recently been recognized as having impacts that extend into the affective domain. In particular, double blind placebo controlled trials have revealed that acetaminophen reduces the magnitude of reactivity to social rejection, frustration, dissonance and to both negatively and positively valenced attitude objects. Given this diversity of consequences, it has been proposed that the psychological effects of acetaminophen may reflect a widespread blunting of evaluative processing. We tested this hypothesis using event-related potentials (ERPs). Sixty-two participants received acetaminophen or a placebo in a double-blind protocol and completed the Go/NoGo task. Participants' ERPs were observed following errors on the Go/NoGo task, in particular the error-related negativity (ERN; measured at FCz) and error-related positivity (Pe; measured at Pz and CPz). Results show that acetaminophen inhibits the Pe, but not the ERN, and the magnitude of an individual's Pe correlates positively with omission errors, partially mediating the effects of acetaminophen on the error rate. These results suggest that recently documented affective blunting caused by acetaminophen may best be described as an inhibition of evaluative processing. They also contribute to the growing work suggesting that the Pe is more strongly associated with conscious awareness of errors relative to the ERN.
Asymmetric affective forecasting errors and their correlation with subjective well-being
2018-01-01
Aims Social scientists have postulated that the discrepancy between achievements and expectations affects individuals' subjective well-being. Still, little has been done to qualify and quantify such a psychological effect. Our empirical analysis assesses the consequences of positive and negative affective forecasting errors—the difference between realized and expected subjective well-being—on the subsequent level of subjective well-being. Data We use longitudinal data on a representative sample of 13,431 individuals from the German Socio-Economic Panel. In our sample, 52% of individuals are females, average age is 43 years, average years of education is 11.4 and 27% of our sample lives in East Germany. Subjective well-being (measured by self-reported life satisfaction) is assessed on a 0–10 discrete scale and its sample average is equal to 6.75 points. Methods We develop a simple theoretical framework to assess the consequences of positive and negative affective forecasting errors—the difference between realized and expected subjective well-being—on the subsequent level of subjective well-being, properly accounting for the endogenous adjustment of expectations to positive and negative affective forecasting errors, and use it to derive testable predictions. Given the theoretical framework, we estimate two panel-data equations, the first depicting the association between positive and negative affective forecasting errors and the successive level of subjective well-being and the second describing the correlation between subjective well-being expectations for the future and hedonic failures and successes. Our models control for individual fixed effects and a large battery of time-varying demographic characteristics, health and socio-economic status. Results and conclusions While surpassing expectations is uncorrelated with subjective well-being, failing to match expectations is negatively associated with subsequent realizations of subjective well-being. Expectations are positively (negatively) correlated to positive (negative) forecasting errors. We speculate that in the first case the positive adjustment in expectations is strong enough to cancel out the potential positive effects on subjective well-being of beaten expectations, while in the second case it is not, and individuals persistently bear the negative emotional consequences of not achieving expectations. PMID:29513685
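The two estimated equations can be summarized schematically as a fixed-effects pair. This is a hedged reconstruction from the description above (PE and NE denote positive and negative forecasting errors, x the controls), not the authors' exact specification.

```latex
% Eq. 1: asymmetric response of well-being to last period's errors
SWB_{it} = \alpha_i + \beta^{+}\,PE_{i,t-1} + \beta^{-}\,NE_{i,t-1}
         + \gamma' x_{it} + \varepsilon_{it}

% Eq. 2: expectations adjust to realized hedonic successes/failures
\mathbb{E}_{it}\!\left[SWB_{i,t+1}\right]
         = \mu_i + \delta^{+}\,PE_{it} + \delta^{-}\,NE_{it}
         + \theta' x_{it} + u_{it}
```

In this notation, the reported pattern corresponds to a beta-plus near zero, a negative beta-minus, a positive delta-plus and a negative delta-minus.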
5 CFR 891.105 - Correction of errors.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Correction of errors. 891.105 Section 891.105 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) RETIRED FEDERAL EMPLOYEES HEALTH BENEFITS Administration and General Provisions § 891.105...
Barnett, Adrian G; Zardo, Pauline; Graves, Nicholas
2018-01-01
The "publish or perish" incentive drives many researchers to increase the quantity of their papers at the cost of quality. Lowering quality increases the number of false positive errors which is a key cause of the reproducibility crisis. We adapted a previously published simulation of the research world where labs that produce many papers are more likely to have "child" labs that inherit their characteristics. This selection creates a competitive spiral that favours quantity over quality. To try to halt the competitive spiral we added random audits that could detect and remove labs with a high proportion of false positives, and also improved the behaviour of "child" and "parent" labs who increased their effort and so lowered their probability of making a false positive error. Without auditing, only 0.2% of simulations did not experience the competitive spiral, defined by a convergence to the highest possible false positive probability. Auditing 1.35% of papers avoided the competitive spiral in 71% of simulations, and auditing 1.94% of papers in 95% of simulations. Audits worked best when they were only applied to established labs with 50 or more papers compared with labs with 25 or more papers. Adding a ±20% random error to the number of false positives to simulate peer reviewer error did not reduce the audits' efficacy. The main benefit of the audits was via the increase in effort in "child" and "parent" labs. Audits improved the literature by reducing the number of false positives from 30.2 per 100 papers to 12.3 per 100 papers. Auditing 1.94% of papers would cost an estimated $15.9 million per year if applied to papers produced by National Institutes of Health funding. Our simulation greatly simplifies the research world and there are many unanswered questions about if and how audits would work that can only be addressed by a trial of an audit.
Appraisals of Negative Divorce Events and Children's Psychological Adjustment.
ERIC Educational Resources Information Center
Mazur, Elizabeth; And Others
Adding to prior literature on adults' and children's appraisals of stressors, this study examined relationships among children's negative cognitive errors regarding hypothetical negative divorce events, positive illusions about those same events, the actual divorce events, and children's post-divorce psychological adjustment. Subjects were 38…
Narita, Kazuto; Ishii, Yuuki; Vo, Phuc Thi Hong; Nakagawa, Fumiko; Ogata, Shinichi; Yamashita, Kunihiko; Kojima, Hajime; Itagaki, Hiroshi
2018-01-01
Recently, animal testing has been affected by increasing ethical, social, and political concerns regarding animal welfare. Several in vitro safety tests for evaluating skin sensitization, such as the human cell line activation test (h-CLAT), have been proposed. However, similar to other tests, the h-CLAT has produced false-negative results, including in tests for acid anhydride and water-insoluble chemicals. In a previous study, we demonstrated that the cause of false-negative results from phthalic anhydride was hydrolysis by an aqueous vehicle, with IL-8 release from THP-1 cells, and that short-time exposure to liquid paraffin (LP) dispersion medium could reduce false-negative results from acid anhydrides. In the present study, we modified the h-CLAT by applying this exposure method. We found that the modified h-CLAT is a promising method for reducing false-negative results obtained from acid anhydrides and chemicals with octanol-water partition coefficients (LogKow) greater than 3.5. Based on the outcomes from the present study, a combination of the original and the modified h-CLAT is suggested for reducing false-negative results. Notably, the combination method provided a sensitivity of 95% (overall chemicals) or 93% (chemicals with LogKow > 2.0), and an accuracy of 88% (overall chemicals) or 81% (chemicals with LogKow > 2.0). We found that the combined method is a promising evaluation scheme for reducing false-negative results seen in existing in vitro skin-sensitization tests. In the future, we expect a combination of original and modified h-CLAT to be applied in a newly developed in vitro test for evaluating skin sensitization.
False negative cytology in large thyroid nodules.
Giles, Wesley H; Maclellan, Reid A; Gawande, Atul A; Ruan, Daniel T; Alexander, Erik K; Moore, Francis D; Cho, Nancy L
2015-01-01
Controversy exists regarding the accuracy of fine-needle aspiration (FNA) in large thyroid nodules. Recent surgical series have documented false-negative rates ranging from 0.7% to 13%. We examined the accuracy of benign FNA cytology in patients with thyroid nodules ≥3 cm who underwent surgical resection and identified features characteristic of false-negative results. We retrospectively studied all thyroidectomy specimens between January 2009 and October 2011 and identified nodules ≥3 cm with corresponding benign preoperative FNA cytology. We collected clinical information regarding patient demographics, nodule size, symptoms, sonographic features, FNA results, and final surgical pathology. For comparison, we analyzed nodules <3 cm from this cohort also with benign FNA cytology. A total of 323 nodules with benign preoperative cytology were identified. Eighty-three nodules were <3 cm, 94 nodules were 3-3.9 cm, and 146 nodules were ≥4 cm in size. The false-negative rate was 11.7% for all nodules ≥3 cm and 4.8% for nodules <3 cm (p = 0.03). Subgroup analysis of nodules ≥3 cm revealed a false-negative rate of 12.8% for nodules 3-3.9 cm and 11% for nodules ≥4 cm. Age ≥55 years and asymptomatic clinical status were the only patient characteristics that reached statistical significance as risk factors. Final pathology of the false-negative specimens consisted mainly of follicular variant of papillary thyroid cancer and follicular thyroid cancer. When referred for thyroidectomy, patients with large thyroid nodules demonstrate a modest, yet significant, false-negative rate despite initial benign aspiration cytology. Therefore, thyroid nodules ≥3 cm may be considered for removal even when referred with benign preoperative cytology.
Can missed breast cancer be recognized by regular peer auditing on screening mammography?
Pan, Huay-Ben; Yang, Tsung-Lung; Hsu, Giu-Cheng; Chiang, Chia-Ling; Huang, Jer-Shyung; Chou, Chen-Pin; Wang, Yen-Chi; Liang, Huei-Lung; Lee, San-Kan; Chou, Yi-Hong; Wong, Kam-Fai
2012-09-01
This study was conducted to investigate whether detectable missed breast cancers could be distinguished from truly false negative images in a mammographic screening by regular peer auditing. Between 2004 and 2007, a total of 311,193 free nationwide biennial mammographic screenings were performed for 50- to 69-year-old women in Taiwan. Retrospectively comparing the records in Taiwan's Cancer Registry, 1283 cancers were detected (4.1 per 1000). Of the total, 176 (0.6 per 1000) initial mammographic negative assessments were reported to have cancers (128 traditional films and 48 laser-printed digital images). We selected 186 true negative films (138 traditional films and 48 laser-printed ones) as a control group. These were seeded into 4815 films of 2008 images to be audited in 2009. Thirty-four auditors interpreted all the films in a single-blind, randomized, pair-control study. The performance of the 34 auditors was analyzed by chi-square test. A p value of <0.05 was considered significant. Eight (6 traditional and 2 digital films) of the 176 false negative films were not reported by the auditors (missing rate of 4.5%). Of this total, 87 false negatives were reassessed as positive, while 29 of the 186 true negatives were reassessed as positive, making the overall performance of the 34 auditors in interpreting the false negatives and true negatives a specificity of 84.4% and a sensitivity of 51.8%. The specificity and sensitivity for traditional films and laser-printed films were 98.6% versus 43.8% and 41.8% versus 78.3%, respectively. Almost 42% of the traditional false negative films had positive reassessment by the auditors, showing a significant difference from the initial screeners (p < 0.001). The specificity of their reinterpretation of laser-printed films was obviously low. Almost 42% of the false negative traditional films were judged as missed cancers in this study. Peer auditing should reduce the probability of missed cancers.
Medical errors; causes, consequences, emotional response and resulting behavioral change
Bari, Attia; Khan, Rehan Ahmed; Rathore, Ahsan Waheed
2016-01-01
Objective: To determine the causes of medical errors, the emotional and behavioral responses of pediatric medicine residents to their medical errors, and the resulting behavior changes affecting their future training. Methods: One hundred thirty postgraduate residents were included in the study. Residents were asked to complete a questionnaire about their errors and responses to their errors in three domains: emotional response, learning behavior and disclosure of the error. The names of the participants were kept confidential. Data were analyzed using SPSS version 20. Results: A total of 130 residents were included. The majority, 128 (98.5%), described some form of error: 24 (19%) serious errors, 63 (48%) minor errors, and 24 (19%) near misses; 2 (2%) had never encountered an error, and 17 (12%) did not mention the type of error but mentioned causes and consequences. Only 73 (57%) residents disclosed medical errors to their senior physician, and disclosure to the patient's family was negligible (15, 11%). Fatigue due to long duty hours (85, 65%), inadequate experience (66, 52%), inadequate supervision (58, 48%) and complex cases (58, 45%) were common causes of medical errors. Negative emotions were common and were significantly associated with lack of knowledge (p = 0.001), missing warning signs (p < 0.001), not seeking advice (p = 0.003) and procedural complications (p = 0.001). Medical errors had a significant impact on residents' behavior: 119 (93%) residents became more careful, 109 (86%) increased advice seeking from seniors, and 109 (86%) started paying more attention to details. Intrinsic causes of errors were significantly associated with increased information-seeking behavior and vigilance (p = 0.003 and p = 0.01, respectively). Conclusion: Medical errors committed by residents are inadequately disclosed to senior physicians and result in negative emotions, but there was a positive change in the residents' behavior, which resulted in improvement in their future training and patient care. PMID:27375682
A statistical model of false negative and false positive detection of phase singularities.
Jacquemet, Vincent
2017-10-01
The complexity of cardiac fibrillation dynamics can be assessed by analyzing the distribution of phase singularities (PSs) observed using mapping systems. Interelectrode distance, however, limits the accuracy of PS detection. To investigate in a theoretical framework the PS false negative and false positive rates in relation to the characteristics of the mapping system and fibrillation dynamics, we propose a statistical model of phase maps with controllable number and locations of PSs. In this model, phase maps are generated from randomly distributed PSs with physiologically-plausible directions of rotation. Noise and distortion of the phase are added. PSs are detected using topological charge contour integrals on regular grids of varying resolutions. Over 100 × 10⁶ realizations of the random field process are used to estimate average false negative and false positive rates using a Monte-Carlo approach. The false detection rates are shown to depend on the average distance between neighboring PSs expressed in units of interelectrode distance, following approximately a power law with exponents in the range of 1.14 to 2 for false negatives and around 2.8 for false positives. In the presence of noise or distortion of phase, false detection rates at high resolution tend to a non-zero noise-dependent lower bound. This model provides an easy-to-implement tool for benchmarking PS detection algorithms over a broad range of configurations with multiple PSs.
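The detection step is easy to reproduce. A small illustrative sketch, not the authors' statistical model: it plants two singularities of opposite chirality in a synthetic phase map, adds phase noise, and detects PSs with the same topological-charge idea (sum of wrapped phase differences around each elementary grid loop); all values are arbitrary.

```python
import numpy as np

def wrap(a):
    """Wrap angles to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def phase_map(n, singularities):
    """Phase field generated by point singularities with charges +/-1."""
    y, x = np.mgrid[0:n, 0:n]
    theta = np.zeros((n, n))
    for sx, sy, q in singularities:
        theta += q * np.arctan2(y - sy, x - sx)
    return wrap(theta)

def detect_ps(theta):
    """Topological-charge detection: the wrapped phase differences summed
    around a 2x2 loop of grid points equal +/-2*pi iff a PS is enclosed."""
    d1 = wrap(theta[:-1, 1:] - theta[:-1, :-1])   # along top edge
    d2 = wrap(theta[1:, 1:] - theta[:-1, 1:])     # down right edge
    d3 = wrap(theta[1:, :-1] - theta[1:, 1:])     # back along bottom edge
    d4 = wrap(theta[:-1, :-1] - theta[1:, :-1])   # up left edge
    charge = (d1 + d2 + d3 + d4) / (2 * np.pi)
    return np.argwhere(np.abs(charge) > 0.5)      # plaquettes holding a PS

rng = np.random.default_rng(0)
true_ps = [(20.3, 40.7, +1), (60.2, 30.5, -1)]
theta = phase_map(100, true_ps)
print(detect_ps(theta + rng.normal(0, 0.3, theta.shape)))  # noisy phase
```

Coarsening the grid or raising the noise level reproduces the qualitative behavior described above: planted PSs start to be missed and spurious detections appear.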
Giese, Sven H; Zickmann, Franziska; Renard, Bernhard Y
2014-01-01
Accurate estimation, comparison and evaluation of read mapping error rates is a crucial step in the processing of next-generation sequencing data, as further analysis steps and interpretation assume the correctness of the mapping results. Current approaches are either focused on sensitivity estimation and thereby disregard specificity or are based on read simulations. Although continuously improving, read simulations are still prone to introduce a bias into the mapping error quantitation and cannot capture all characteristics of an individual dataset. We introduce ARDEN (artificial reference driven estimation of false positives in next-generation sequencing data), a novel benchmark method that estimates error rates of read mappers based on real experimental reads, using an additionally generated artificial reference genome. It allows a dataset-specific computation of error rates and the construction of a receiver operating characteristic curve. Thereby, it can be used for optimization of parameters for read mappers, selection of read mappers for a specific problem or for filtering alignments based on quality estimation. The use of ARDEN is demonstrated in a general read mapper comparison, a parameter optimization for one read mapper and an application example in single-nucleotide polymorphism discovery with a significant reduction in the number of false positive identifications. The ARDEN source code is freely available at http://sourceforge.net/projects/arden/.
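The central trick, using alignments that land on an artificial reference (which cannot be biologically correct) as an empirical handle on false positives, can be caricatured in a few lines. A hypothetical sketch with an invented data layout; it is not ARDEN's actual algorithm or interface.

```python
def decoy_based_fp_estimate(alignments, artificial_refs):
    """`alignments` is an iterable of (read_id, reference_name) pairs;
    `artificial_refs` is the set of artificial-reference sequence names.
    The rate of hits to the artificial reference estimates the rate of
    spurious mappings in the dataset."""
    alignments = list(alignments)
    decoy_hits = sum(ref in artificial_refs for _, ref in alignments)
    return decoy_hits / len(alignments) if alignments else 0.0
```

Sweeping an alignment-score cutoff and recomputing this estimate at each point gives the kind of dataset-specific ROC-style curve described above.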
Paek, Se Hyun; Kim, Byung Seup; Kang, Kyung Ho; Kim, Hee Sung
2017-11-13
The BRAF V600E mutation is highly specific for papillary thyroid carcinoma (PTC). A test for this mutation can increase the diagnostic accuracy of fine-needle aspiration cytology (FNAC), but a considerably high false-negative rate for the BRAF V600E mutation on FNAC has been reported. In this study, we investigated the risk factors associated with false-negative BRAF V600E mutation results on FNAC. BRAF V600E mutation results of 221 PTC nodules between December 2011 and June 2013 were retrospectively reviewed. BRAF V600E mutation results on both preoperative FNAC and postoperative formalin-fixed, paraffin-embedded (FFPE) samples were compared. We investigated the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of BRAF V600E mutation results on FNAC, and we identified the risk factors associated with false-negative results. Of 221 PTC nodules, 150 (67.9%) on FNAC and 185 (83.7%) on FFPE samples were BRAF V600E mutation positive. The sensitivity, specificity, PPV, and NPV for BRAF V600E mutation testing with FNAC were 80.5, 97.2, 99.3, and 49.3%, respectively. Thirty-six (16.3%) BRAF V600E mutation-negative nodules on FNAC were mutation positive on FFPE sample analysis. Risk factors for these false-negative results were age, indeterminate FNAC results (nondiagnostic, atypia of undetermined significance (AUS), and findings suspicious for PTC), and PTC subtype. The false-negative rate of BRAF mutation testing with FNAC for thyroid nodules is increased in cases of old age, indeterminate FNAC pathology results, and certain PTC subtypes. Therapeutic surgery can be considered for these cases. A well-designed prospective study with informed consent of patients will be essential for more informative results.
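The reported metrics follow from a 2x2 table that can be reconstructed from the counts in the abstract: 149 true positives (185 FFPE-positive minus 36 FNAC false negatives), 1 false positive, 35 true negatives, and 36 false negatives. A quick check, assuming that reconstruction is right:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 diagnostic accuracy measures."""
    sens = tp / (tp + fn)   # sensitivity
    spec = tn / (tn + fp)   # specificity
    ppv = tp / (tp + fp)    # positive predictive value
    npv = tn / (tn + fn)    # negative predictive value
    return sens, spec, ppv, npv

print([round(100 * m, 1) for m in diagnostic_metrics(149, 1, 35, 36)])
# -> [80.5, 97.2, 99.3, 49.3], matching the reported values
```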
Sherlock Holmes and child psychopathology assessment approaches: the case of the false-positive.
Jensen, P S; Watanabe, H
1999-02-01
To explore the relative value of various methods of assessing childhood psychopathology, the authors compared 4 groups of children: those who met criteria for one or more DSM diagnoses and scored high on parent symptom checklists, those who met psychopathology criteria on either one of these two assessment approaches alone, and those who met no psychopathology assessment criterion. Parents of 201 children completed the Child Behavior Checklist (CBCL), after which children and parents were administered the Diagnostic Interview Schedule for Children (version 2.1). Children and parents also completed other survey measures and symptom report inventories. The 4 groups of children were compared against "external validators" to examine the merits of "false-positive" and "false-negative" cases. True-positive cases (those that met DSM criteria and scored high on the CBCL) differed significantly from the true-negative cases on most external validators. "False-positive" and "false-negative" cases had intermediate levels of most risk factors and external validators. "False-positive" cases were not normal per se because they scored significantly above the true-negative group on a number of risk factors and external validators. A similar but less marked pattern was noted for "false-negatives." Findings call into question whether cases with high symptom checklist scores despite no formal diagnoses should be considered "false-positive." Pending the availability of robust markers for mental illness, researchers and clinicians must resist the tendency to reify diagnostic categories or to engage in arcane debates about the superiority of one assessment approach over another.
Pediatric Anesthesiology Fellows' Perception of Quality of Attending Supervision and Medical Errors.
Benzon, Hubert A; Hajduk, John; De Oliveira, Gildasio; Suresh, Santhanam; Nizamuddin, Sarah L; McCarthy, Robert; Jagannathan, Narasimhan
2018-02-01
Appropriate supervision has been shown to reduce medical errors in anesthesiology residents and other trainees across various specialties. Nonetheless, supervision of pediatric anesthesiology fellows has yet to be evaluated. The main objective of this survey investigation was to evaluate supervision of pediatric anesthesiology fellows in the United States. We hypothesized that there was an indirect association between perceived quality of faculty supervision of pediatric anesthesiology fellow trainees and the frequency of medical errors reported. A survey of pediatric fellows from 53 pediatric anesthesiology fellowship programs in the United States was performed. The primary outcome was the frequency of self-reported errors by fellows, and the primary independent variable was supervision scores. Questions also assessed barriers to effective faculty supervision. One hundred seventy-six pediatric anesthesiology fellows were invited to participate, and 104 (59%) responded to the survey. Nine of 103 (9%; 95% confidence interval [CI], 4%-16%) respondents reported performing procedures, on >1 occasion, for which they were not properly trained. Thirteen of 101 (13%; 95% CI, 7%-21%) reported making >1 mistake with negative consequences to patients, and 23 of 104 (22%; 95% CI, 15%-31%) reported >1 medication error in the last year. There were no differences in median (interquartile range) supervision scores between fellows who reported >1 medication error and those reporting ≤1 error (3.4 [3.0-3.7] vs 3.4 [3.1-3.7]; median difference, 0; 99% CI, -0.3 to 0.3; P = .96). Similarly, there were no differences between those who reported >1 mistake with negative patient consequences, 3.3 (3.0-3.7), and those who did not, 3.4 (3.3-3.7) (median difference, 0.1; 99% CI, -0.2 to 0.6; P = .35). We detected a high rate of self-reported medication errors in pediatric anesthesiology fellows in the United States. Interestingly, fellows' perception of the quality of faculty supervision was not associated with the frequency of reported errors. The current results with a narrow CI suggest the need to evaluate other potential factors that can be associated with the high frequency of reported errors by pediatric fellows (eg, fatigue, burnout). The identification of factors that lead to medical errors by pediatric anesthesiology fellows should be a main research priority to improve both trainee education and best practices of pediatric anesthesia.
Rui, Y; Han, M; Zhou, W; He, Q; Li, H; Li, P; Zhang, F; Shi, Y; Su, X
2018-06-06
To determine true negatives and characterise the variables associated with false-negative results when interpreting non-malignant results of computed tomography (CT)-guided lung biopsy. Nine hundred and fifty patients with initial non-malignant findings on their first transthoracic CT-guided core-needle biopsy (TTNB) were included in the study. Initial biopsy results were compared to definitive diagnoses established later. The negative predictive value (NPV) of non-malignant diseases upon initial TTNB was 83.6%. When the biopsy results indicated specific infection or benign tumour (n=225, 26.1%), all were confirmed true negative for malignancy later. Only one inconclusive "granuloma" diagnosis was false negative. All 141 patients (141/861, 16.4%) who were false negative for malignancy were from the "infection not otherwise specified (NOS)", "inflammatory diseases", or "inconclusive" groups. Age (p=0.002), cancer history (p<0.001), target size (p=0.003), and pneumothorax during lung biopsy (p=0.003) were found to be significant predictors of false-negative results; 47.6% (410/861) of patients underwent additional invasive examinations to reach a final diagnosis. Ultimately, 52.7% (216/410) were successfully diagnosed. Specific infection, benign tumour, and granulomatous inflammation on first TTNB were mostly true negative. Older age, history of cancer, larger target size, and pneumothorax were highly predictive of false-negative results for malignancies. In such cases, additional invasive examinations were frequently necessary to obtain final diagnoses.
Yang, Chi; Zhang, Shaojun; Yao, Lan; Fan, Lin
2018-05-01
Objective To investigate the diagnostic efficacy of an interferon-γ release assay, T-SPOT®.TB, for diagnosing active tuberculosis (TB) and to identify risk factors for false-negative results. Methods This retrospective study enrolled consecutive patients with active TB and with non-TB respiratory diseases to evaluate the risk factors for false-negative results when using the T-SPOT®.TB assay for the diagnosis of active TB. Patients with active TB were categorized as having confirmed pulmonary TB, clinically diagnosed pulmonary TB or extrapulmonary TB (EPTB). Results This study analysed 4964 consecutive patients; 2425 with active TB and 2539 with non-TB respiratory diseases. Multivariate logistic regression analyses identified the following five factors that were all associated with an increased false-negative rate with the T-SPOT®.TB assay: increased age (odds ratio [OR] 1.018; 95% confidence interval [CI] 1.013, 1.024); decreased CD8+ count (OR 0.307; 95% CI 0.117, 0.803); negative sputum acid-fast bacilli (AFB) smear staining (OR 1.821; 95% CI 1.338, 2.477); negative mycobacterial cultures (OR 1.379; 95% CI 1.043, 1.824); and absence of EPTB (OR 1.291; 95% CI 1.026, 1.623). Conclusions Increased age, decreased CD8+ count, negative sputum AFB smear results, negative sputum mycobacterial cultures and absence of EPTB might lead to an increased false-negative rate when using the T-SPOT®.TB assay.
Where Have All the Interactions Gone? Estimating the Coverage of Two-Hybrid Protein Interaction Maps
Huang, Hailiang; Jedynak, Bruno M; Bader, Joel S
2007-01-01
Yeast two-hybrid screens are an important method for mapping pairwise physical interactions between proteins. The fraction of interactions detected in independent screens can be very small, and an outstanding challenge is to determine the reason for the low overlap. Low overlap can arise from either a high false-discovery rate (interaction sets have low overlap because each set is contaminated by a large number of stochastic false-positive interactions) or a high false-negative rate (interaction sets have low overlap because each misses many true interactions). We extend capture–recapture theory to provide the first unified model for false-positive and false-negative rates for two-hybrid screens. Analysis of yeast, worm, and fly data indicates that 25% to 45% of the reported interactions are likely false positives. Membrane proteins have higher false-discovery rates on average, and signal transduction proteins have lower rates. The overall false-negative rate ranges from 75% for worm to 90% for fly, which arises from a roughly 50% false-negative rate due to statistical undersampling and a 55% to 85% false-negative rate due to proteins that appear to be systematically lost from the assays. Finally, statistical model selection conclusively rejects the Erdős–Rényi network model in favor of the power law model for yeast and the truncated power law for worm and fly degree distributions. Much as genome sequencing coverage estimates were essential for planning the human genome sequencing project, the coverage estimates developed here will be valuable for guiding future proteomic screens. All software and datasets are available in Datasets S1 and S2, Figures S1–S5, and Tables S1–S6, and are also available from our Web site, http://www.baderzone.org. PMID:18039026
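The capture-recapture intuition can be seen with the classical Lincoln-Petersen estimator. The paper's unified model is more elaborate (it adds false-positive contamination and systematic protein loss), so this sketch only shows how low overlap between two screens translates into high implied false-negative rates.

```python
def lincoln_petersen(n1, n2, overlap):
    """Two-sample capture-recapture estimate of the true interaction
    count, assuming independent sampling and no false positives."""
    if overlap == 0:
        raise ValueError("no overlap: the estimate is unbounded")
    n_true = n1 * n2 / overlap
    # implied false negative rate of each screen = fraction it missed
    return n_true, 1 - n1 / n_true, 1 - n2 / n_true

# invented example: two screens of 1,000 interactions sharing only 100
print(lincoln_petersen(1000, 1000, 100))  # N ~ 10,000; ~90% missed by each
```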
Wu, Shan; Zhang, Xiaofeng; Shuai, Jiangbing; Li, Ke; Yu, Huizhen; Jin, Chenchen
2016-07-04
To simplify the PNA-FISH (peptide nucleic acid-fluorescence in situ hybridization) test, a molecular beacon-based PNA probe combined with fluorescence scanning detection technology was applied to replace the original microscope observation to detect Listeria monocytogenes. The 5′ end and 3′ end of the L. monocytogenes-specific PNA probes were labeled with a fluorescent group and a quenching group, respectively, to form a molecular beacon-based PNA probe. When the PNA probe was used for fluorescence scanning with the N1 treatment as the control, the false positive rate was 11.4% and the false negative rate was 0; with the N2 treatment as the control, the false positive rate decreased to 4.3%, but the false negative rate rose to 18.6%. When the beacon-based PNA probe was used for fluorescence scanning, with the N1 treatment as the blank control, the false positive rate was 8.6% and the false negative rate was 1.4%; with the N2 treatment as the blank control, the false positive rate was 5.7% and the false negative rate was 1.4%. Compared with the PNA probe, the molecular beacon-based PNA probe can effectively reduce false positives and false negatives. The success rates of hybridization of the two PNA probes were 83.3% and 95.2%, respectively, and the rates of the two beacon-based PNA probes were 91.7% and 90.5%, respectively, which indicates that labeling both ends of the PNA probe does not decrease the hybridization rate with the target bacteria. The combination of liquid-phase PNA-FISH and the fluorescence scanning method can significantly improve the detection efficiency.
Otgaar, Henry; Howe, Mark L; Muris, Peter
2017-09-01
We examined the creation of spontaneous and suggestion-induced false memories in maltreated and non-maltreated children. Maltreated and non-maltreated children were involved in a Deese-Roediger-McDermott false memory paradigm where they studied and remembered negative and neutral word lists. Suggestion-induced false memories were created using a misinformation procedure during which both maltreated and non-maltreated children viewed a negative video (i.e., bank robbery) and later received suggestive misinformation concerning the event. Our results showed that maltreated children had higher levels of spontaneous negative false memories but lower levels of suggestion-induced false memories as compared to non-maltreated children. Collectively, our study demonstrates that maltreatment both increases and decreases susceptibility to memory illusions depending on the type of false memory being induced. Statement of contribution What is already known on this subject? Trauma affects memory. It is unclear how trauma affects false memory. What does this study add? This study focuses on two types of false memories.
An extended sequential goodness-of-fit multiple testing method for discrete data.
Castro-Conde, Irene; Döhler, Sebastian; de Uña-Álvarez, Jacobo
2017-10-01
The sequential goodness-of-fit (SGoF) multiple testing method has recently been proposed as an alternative to the familywise error rate- and the false discovery rate-controlling procedures in high-dimensional problems. For discrete data, the SGoF method may be very conservative. In this paper, we introduce an alternative SGoF-type procedure that takes into account the discreteness of the test statistics. Like the original SGoF, our new method provides weak control of the false discovery rate/familywise error rate but attains false discovery rate levels closer to the desired nominal level, and thus it is more powerful. We study the performance of this method in a simulation study and illustrate its application to a real pharmacovigilance data set.
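For readers unfamiliar with the original (continuous) SGoF metatest, the idea can be sketched in a few lines: count the p-values at or below γ (conventionally γ = α) and declare as discoveries the excess over what a binomial null would allow. A rough sketch only; the discrete extension proposed in this paper replaces the binomial reference with the exact null distributions of the discrete statistics, which is not implemented here.

```python
from scipy.stats import binom

def sgof(pvals, alpha=0.05, gamma=0.05):
    """Rough sketch of the original SGoF metatest: the number of declared
    effects is the observed count of p-values <= gamma minus the largest
    count still compatible with the global null at level alpha."""
    n = len(pvals)
    observed = sum(p <= gamma for p in pvals)
    null_bound = int(binom.ppf(1 - alpha, n, gamma))
    n_effects = max(0, observed - null_bound)
    return sorted(pvals)[:n_effects]   # reject the smallest p-values
```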
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herberger, Sarah M.; Boring, Ronald L.
Objectives: This paper discusses the differences between classical human reliability analysis (HRA) dependence and the full spectrum of probabilistic dependence. Positive influence suggests an error increases the likelihood of subsequent errors or success increases the likelihood of subsequent success. Currently the typical method for dependence in HRA implements the Technique for Human Error Rate Prediction (THERP) positive dependence equations. This assumes that the dependence between two human failure events varies at discrete levels between zero and complete dependence (as defined by THERP). Dependence in THERP does not consistently span dependence values between 0 and 1. In contrast, probabilistic dependence employs Bayes' Law and addresses a continuous range of dependence. Methods: Using the laws of probability, complete dependence and maximum positive dependence do not always agree. Maximum dependence is when two events overlap to their fullest amount. Maximum negative dependence is the smallest amount that two events can overlap. When the minimum probability of two events overlapping is less than independence, negative dependence occurs. For example, negative dependence is when an operator fails to actuate Pump A, thereby increasing his or her chance of actuating Pump B. The initial error actually increases the chance of subsequent success. Results: Comparing THERP and probability theory yields different results in certain scenarios, with the latter addressing negative dependence. Given that most human failure events are rare, the minimum overlap is typically 0, and when the second event is smaller than the first, the maximum dependence is less than 1, as defined by Bayes' Law. As such, alternative dependence equations are provided, along with a look-up table defining the maximum and maximum negative dependence given the probabilities of two events. Conclusions: THERP dependence has been used ubiquitously for decades and has provided approximations of the dependencies between two events. Since its inception, computational abilities have increased exponentially, and alternative approaches that follow the laws of probability dependence need to be implemented. These new approaches need to consider negative dependence and identify when THERP output is not appropriate.
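The contrast can be made concrete by placing THERP's standard conditional-probability equations next to the bounds that probability theory (the Fréchet inequalities) actually permits. A sketch; the look-up table mentioned above is not reproduced.

```python
def therp_conditional(p, level):
    """THERP conditional probability of a second failure given a first,
    where p is the unconditional probability of the second failure."""
    return {"zero": p,
            "low": (1 + 19 * p) / 20,
            "moderate": (1 + 6 * p) / 7,
            "high": (1 + p) / 2,
            "complete": 1.0}[level]

def bayes_conditional_bounds(p_a, p_b):
    """Range of P(B|A) permitted by probability theory. The joint
    probability must lie in [max(0, P(A)+P(B)-1), min(P(A), P(B))]
    (the Frechet bounds); dividing by P(A) bounds the conditional."""
    joint_min = max(0.0, p_a + p_b - 1.0)
    joint_max = min(p_a, p_b)
    return joint_min / p_a, joint_max / p_a

print(therp_conditional(1e-3, "low"))        # ~0.051
print(bayes_conditional_bounds(0.1, 1e-3))   # (0.0, 0.01)
```

The second call shows both points at once: the permitted range dips below independence all the way to 0 (the negative-dependence region THERP cannot represent) and tops out at 0.01 rather than 1, because the second event is rarer than the first.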
Empirical Validation of Pooled Whole Genome Population Re-Sequencing in Drosophila melanogaster
Zhu, Yuan; Bergland, Alan O.; González, Josefa; Petrov, Dmitri A.
2012-01-01
The sequencing of pooled non-barcoded individuals is an inexpensive and efficient means of assessing genome-wide population allele frequencies, yet its accuracy has not been thoroughly tested. We assessed the accuracy of this approach on whole, complex eukaryotic genomes by resequencing pools of largely isogenic, individually sequenced Drosophila melanogaster strains. We called SNPs in the pooled data and estimated false positive and false negative rates using the SNPs called in individual strains as a reference. We also estimated allele frequency of the SNPs using “pooled” data and compared them with “true” frequencies taken from the estimates in the individual strains. We demonstrate that pooled sequencing provides a faithful estimate of population allele frequency with the error well approximated by binomial sampling, and is a reliable means of novel SNP discovery with low false positive rates. However, a sufficient number of strains should be used in the pooling because variation in the amount of DNA derived from individual strains is a substantial source of noise when the number of pooled strains is low. Our results and analysis confirm that pooled sequencing is a very powerful and cost-effective technique for assessing patterns of sequence variation in populations on genome-wide scales, and is applicable to any dataset where sequencing individuals or individual cells is impossible, difficult, time consuming, or expensive. PMID:22848651
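The two noise sources named above suggest a simple two-stage error model: each pooled strain contributes a variable amount of DNA, and reads are then drawn binomially from the pooled DNA. A hedged sketch with invented parameters, not the authors' analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def pooled_freq_estimate(true_freq, n_strains, depth, dna_cv=0.3):
    """One simulated pooled-sequencing allele frequency estimate.
    Stage 1: strains contribute gamma-distributed DNA amounts with mean 1
    and coefficient of variation `dna_cv` (an invented parameter).
    Stage 2: `depth` reads are drawn binomially from the pooled DNA."""
    carriers = np.zeros(n_strains, dtype=bool)
    carriers[:round(true_freq * n_strains)] = True       # fixed allele count
    weights = rng.gamma(1 / dna_cv**2, dna_cv**2, n_strains)
    pool_freq = weights[carriers].sum() / weights.sum()  # DNA-weighted freq
    return rng.binomial(depth, pool_freq) / depth

for n_strains in (100, 10):
    est = [pooled_freq_estimate(0.3, n_strains, 200) for _ in range(3000)]
    print(n_strains, np.std(est))
print((0.3 * 0.7 / 200) ** 0.5)   # binomial-only benchmark, ~0.032
```

With 100 strains the empirical error essentially matches the binomial benchmark; with 10 strains the unequal DNA contributions inflate it well beyond that, matching the caution above.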
The impact of non-concordant self-report of substance use in clinical trials research.
Clark, C Brendan; Zyambo, Cosmas M; Li, Ye; Cropsey, Karen L
2016-07-01
Studies comparing self-report substance use data to biochemical verification generally demonstrate high rates of concordance. We argue that these rates are due to the relatively high true negative rate in the general population, and the high degree of honesty in treatment-seeking individuals. We hypothesized that high-risk individuals not seeking treatment would demonstrate low concordance and a high false negative rate of self-reported substance use. A sample of 500 individuals from a smoking cessation clinical trial was assessed over 1 year. Assessments included semi-structured interviews, questionnaires (e.g. Addiction Severity Index, etc.), and urine drug screen assays (UDS). Generalized estimating equations (GEEs) were used to predict false negative reports for various substances across the study and determine the influence of substance use on the primary study outcome of smoking cessation. Participants demonstrated high false negative rates in reporting substance use, and the false negative rates increased as the study progressed. Established predictors of false negatives generalized to the current sample. High concordance and low false negative rates were found in self-report of nicotine use. A small but significant effect of biochemically verified substance use on smoking cessation was found. Biochemical verification of substance use is needed in high-risk populations involved in studies not directly related to the treatment of substance use, especially in populations with high threat of stigmatization. Testing should continue through the time period of the study for maximal identification of substance use.
There was not, they did not: May negation cause the negated ideas to be remembered as existing?
2017-01-01
In this article we demonstrate that negation of ideas can have paradoxical effects, possibly leading the listener to believe that the negated ideas actually existed. In Experiment 1, participants listened to a description of a house in which some objects were mentioned, some were negated, and some were not mentioned at all. When questioned about the existence of these objects a week later, the participants gave more false positives for items that were negated in the original material than for items that were not mentioned at all, an effect we call negation-related false memories (NRFM). The NRFM effect was replicated in Experiment 2 with a sample of five- and six-year-old children. Experiment 3 confirmed NRFM in the case of negated actions. The results are discussed in terms of the retention hypothesis, as well as the theory that negation can activate a representation of an entity and behaviour. It is also indicated that future research is needed to ensure that it is indeed negation that caused the false alarms, not merely mentioning an object. PMID:28448549
Tamkus, Arvydas A; Rice, Kent S; McCaffrey, Michael T
2018-02-01
Although some authors have published case reports describing false negatives in intraoperative neurophysiological monitoring (IONM), a systematic review of causes of false-negative IONM results is lacking. The objective of this study was to analyze false-negative IONM findings in spine surgery. This is a retrospective cohort analysis. A cohort of 109 patients with new postoperative neurologic deficits was analyzed for possible false-negative IONM reporting, and the causes of false-negative IONM reporting were determined. From a cohort of 62,038 monitored spine surgeries, 109 consecutive patients with new postoperative neurologic deficits were reviewed for IONM alarms. Intraoperative neurophysiological monitoring alarms occurred in 87 of 109 surgeries. Nineteen patients with new postoperative neurologic deficits did not have an IONM alarm and surgeons were not warned. In addition, three patients had no interpretable IONM baseline data and no alarms were possible for the duration of the surgery. Therefore, 22 patients were included in the study. The absence of IONM alarms during these 22 surgeries had different origins: "true" false negatives, where no waveform changes meeting the alarm criteria occurred despite appropriate IONM (7); postoperative development of a deficit (6); failure to monitor the pathway that became injured (5); absence of interpretable IONM baseline data, which precluded any alarm (3); and technical IONM application issues (1). Overall, the rate of the IONM method failing to predict the patient's outcome was very low (0.04%, 22/62,038). Minimizing false negatives requires the application of a proper IONM technique with the limitations of each modality considered in their selection and interpretation. Multimodality IONM provides the most inclusive information, and although it might be impractical to monitor every neural structure that can be at risk, a thorough preoperative consideration of available IONM modalities is important. Delayed development of postoperative deficits cannot be predicted by IONM. Absent baseline IONM data should be treated as an alarm when inconsistent with the patient's preoperative neurologic status. Alarm criteria for IONM may need to be refined for specific procedures and deserve continued study.
Benau, Erik M; Moelter, Stephen T
2016-09-01
The Error-Related Negativity (ERN) and Correct-Response Negativity (CRN) are brief event-related potential (ERP) components, elicited after the commission of a response, that are associated with motivation, emotion, and affect. The Error Positivity (Pe) typically appears after the ERN and corresponds to awareness of having committed an error. Although motivation has long been established as an important factor in the expression and morphology of the ERN, physiological state has rarely been explored as a variable in these investigations. In the present study, we investigated whether self-reported physiological state (SRPS; wakefulness, hunger, or thirst) corresponds with ERN amplitude and type of lexical stimuli. Participants completed a SRPS questionnaire and then a speeded Lexical Decision Task with words and pseudowords that were either food-related or neutral. Although similar to neutral words in frequency and length, food-related stimuli elicited increased accuracy and faster errors, and generated a larger ERN and smaller CRN. Self-reported thirst correlated with improved accuracy and smaller ERN and CRN amplitudes. The Pe and Pc (correct positivity) were not affected by physiological state or stimulus content. The results indicate that physiological state and manipulations of lexical content may serve as important avenues for future research; studies that apply more sensitive measures of physiological and motivational state (e.g., biomarkers for satiety) or direct manipulations of satiety may be especially useful for research into response monitoring.
Kulengowski, Brandon; Brignola, Matthew; Gallagher, Chanah; Rutter, W Cliff; Ribes, Julie A; Burgess, David S
2017-01-01
Background: Polymyxins are being revitalized to combat carbapenem-resistant Enterobacteriaceae (CRE). However, evaluating the activity of these agents by traditional broth dilution methods is not practical for busy clinical laboratories. We compared polymyxin B (PMB) activity measured by two quantitative susceptibility testing methods, Etest® and broth microdilution (BMD), against CRE isolates from patients at an academic medical center. Methods: PMB activity against 70 recent CRE clinical isolates was determined by BMD and Etest® according to CLSI guidelines. P. aeruginosa ATCC® 27853 was used as a quality control strain. The CLSI PMB susceptibility breakpoint for non-fermenting gram-negative bacteria (<2 mg/L) was used. Essential agreement between methods was defined as an MIC within 1 log2 dilution; categorical agreement was defined as classification of isolates in the same susceptibility category (susceptible or resistant). Major and very major error rates were calculated, and McNemar's test was used to test for a difference between methods. Results: CRE isolates were primarily Enterobacter spp. (43%), followed by K. pneumoniae (41%) and E. coli (9%). Essential agreement between testing methods was low (9%), but categorical agreement was 81% (P = 0.0002). Although false non-susceptibility was never observed by Etest® (with BMD as reference), the rate of very major errors by Etest® was high (19%); Etest® miscalled 87% of PMB-resistant CRE. Conclusion: Etest® reporting of false susceptibility may result in inappropriate antibiotic utilization and clinical treatment failure. We do not recommend using Etest® for PMB susceptibility testing in routine patient care.
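As a concrete illustration of the agreement metrics used above, the following sketch computes essential agreement, categorical agreement, and a very major error rate from paired MIC values. The example MICs, the resistant-when-MIC-exceeds-breakpoint convention, and the all-isolates denominator are assumptions for illustration, not the study's data.

```python
import numpy as np

def agreement_stats(bmd_mic, etest_mic, breakpoint=2.0):
    # Essential agreement: MICs within one doubling (log2) dilution.
    bmd = np.asarray(bmd_mic, dtype=float)
    etest = np.asarray(etest_mic, dtype=float)
    essential = np.mean(np.abs(np.log2(etest) - np.log2(bmd)) <= 1)

    # Categorical agreement: same susceptibility call by both methods
    # (resistant when MIC exceeds the breakpoint is an assumed convention).
    bmd_r = bmd > breakpoint
    etest_r = etest > breakpoint
    categorical = np.mean(bmd_r == etest_r)

    # Very major error: resistant by reference BMD but susceptible by Etest
    # (denominator conventions vary; all isolates are used here).
    very_major = np.mean(bmd_r & ~etest_r)
    return essential, categorical, very_major

print(agreement_stats(bmd_mic=[0.5, 4, 8, 1], etest_mic=[0.25, 1, 2, 1]))
```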
Fat segmentation on chest CT images via fuzzy models
NASA Astrophysics Data System (ADS)
Tong, Yubing; Udupa, Jayaram K.; Wu, Caiyun; Pednekar, Gargi; Subramanian, Janani Rajan; Lederer, David J.; Christie, Jason; Torigian, Drew A.
2016-03-01
Quantification of fat throughout the body is vital for the study of many diseases. In the thorax, it is important for lung transplant candidates, since obesity and being underweight are contraindications to lung transplantation given their associations with increased mortality. Common approaches for thoracic fat segmentation are all interactive in nature, requiring significant manual effort to draw the interfaces between fat and muscle, with low efficiency and questionable repeatability. The goal of this paper is to explore a practical way to segment the subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) components of chest fat based on a recently developed body-wide automatic anatomy recognition (AAR) methodology. The AAR approach involves 3 main steps: building a fuzzy anatomy model of the body region involving all its major representative objects, recognizing objects in any given test image, and delineating the objects. We made several modifications to these steps to develop an effective solution for delineating the SAT/VAT components of fat. Rather than directly using the SAT and VAT components as objects for constructing the models, two new objects, SatIn and VatIn, representing the interfaces of the SAT and VAT regions with other tissues, are defined. A hierarchical arrangement of these new and other reference objects is built to facilitate their recognition in hierarchical order. Subsequently, accurate delineations of the SAT/VAT components are derived from these objects. Unenhanced CT images from 40 lung transplant candidates were utilized to experimentally evaluate this new strategy. The mean object location error achieved was about 2 voxels, and the delineation errors in terms of false-positive and false-negative volume fractions were, respectively, 0.07 and 0.1 for SAT and 0.04 and 0.2 for VAT.
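The delineation metrics quoted above can be stated compactly in code. The sketch below computes false-positive and false-negative volume fractions from binary segmentation masks; normalizing both counts by the reference volume is one common convention, and exact definitions vary across papers, so treat this as illustrative rather than the paper's precise metric.

```python
import numpy as np

def volume_fraction_errors(seg: np.ndarray, ref: np.ndarray):
    """FP/FN volume fractions of a binary segmentation against a reference.

    Both counts are normalized by the reference (true) object volume here;
    this is one common convention, not necessarily the paper's exact one.
    """
    seg, ref = seg.astype(bool), ref.astype(bool)
    ref_volume = ref.sum()
    fpvf = (seg & ~ref).sum() / ref_volume  # voxels wrongly labeled as the object
    fnvf = (~seg & ref).sum() / ref_volume  # object voxels that were missed
    return fpvf, fnvf

# Toy 1D "volumes": 10 voxels, true object occupies positions 2-6.
ref = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0, 0])
seg = np.array([0, 0, 0, 1, 1, 1, 1, 1, 0, 0])
print(volume_fraction_errors(seg, ref))  # (0.2, 0.2)
```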
Li, Xiang; Arzhantsev, Sergey; Kauffman, John F; Spencer, John A
2011-04-05
Four nominally identical portable NIR instruments from the same manufacturer were programmed with a PLS model for the detection of diethylene glycol (DEG) contamination in propylene glycol (PG)-water mixtures. The model was developed on one spectrometer and used on the other units after a calibration transfer procedure based on piecewise direct standardization. Although quantitative results were produced, in practice the instrument interface was programmed to report in Pass/Fail mode. The Pass/Fail determinations were made within 10 s and were based on a threshold that passed a blank sample with 95% confidence. The detection limit was then established as the concentration at which a sample would fail with 95% confidence. For a 1% DEG threshold, one false-negative (Type II) and eight false-positive (Type I) errors were found among over 500 samples measured. A representative test set produced standard errors of less than 2%. Since the level of diethylene glycol in economically motivated adulteration (EMA) is expected to be above 1%, the sensitivity of field-calibrated portable NIR instruments is sufficient to rapidly screen out potentially problematic materials. Following method development, the instruments were shipped to different sites around the country for a collaborative study with a fixed protocol carried out by different analysts. NIR spectra of replicate sets of calibration transfer, system suitability, and test samples were all processed with the same chemometric model on multiple instruments to determine the overall analytical precision of the method. The combined results from all participants were statistically analyzed to determine the limit of detection (2.0% DEG) and limit of quantitation (6.5% DEG) that can be expected for a method distributed to multiple field laboratories.
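The Pass/Fail logic described above can be sketched numerically. In the snippet below, the noise level, replicate count, and normality assumption are all hypothetical, chosen only to show how a 95%-confidence pass threshold and the resulting detection limit relate.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical predicted %DEG values for replicate blank (0% DEG) samples.
blank = rng.normal(loc=0.0, scale=0.3, size=30)
sigma = blank.std(ddof=1)
z95 = norm.ppf(0.95)  # one-sided 95% point, ~1.645

# Threshold set so a blank passes with 95% confidence.
threshold = blank.mean() + z95 * sigma

# Detection limit: lowest true concentration that fails with 95% confidence,
# assuming the same prediction noise at low concentrations.
lod = threshold + z95 * sigma

print(f"pass/fail threshold ~ {threshold:.2f}% DEG, LOD ~ {lod:.2f}% DEG")
```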
Nock, Nl; Zhang, Lx
2011-11-29
Methods that can evaluate the aggregate effects of rare and common variants are limited. We therefore applied a two-stage approach to evaluate aggregate gene effects in the 1000 Genomes Project data, which contain 24,487 single-nucleotide polymorphisms (SNPs) in 697 unrelated individuals from 7 populations. In stage 1, we identified potentially interesting genes (PIGs) as those having at least one SNP meeting Bonferroni correction in univariate, multiple regression models. In stage 2, we evaluated the aggregate effects of PIGs on the trait Q1 by modeling each gene as a latent construct, defined by multiple common and rare variants, within the multivariate statistical framework of structural equation modeling (SEM). In stage 1, we found that PIGs varied markedly between a randomly selected replicate (replicate 137) and 100 other replicates, with the exception of FLT1, and that collapsing rare variants decreased false positives but increased false negatives. In stage 2, we developed a good-fitting SEM model that included all nine genes simulated to affect Q1 (FLT1, KDR, ARNT, ELAV4, FLT4, HIF1A, HIF3A, VEGFA, VEGFC) and found that FLT1 had the largest effect on Q1 (βstd = 0.33 ± 0.05). Using the replicate 137 estimates as population values, we found that the mean relative bias in the parameters (loadings, paths, residuals) and their standard errors across 100 replicates was, on average, less than 5%. Our latent-variable SEM approach provides a viable framework for modeling the aggregate effects of rare and common variants in multiple genes, but more elegant methods are needed in stage 1 to minimize type I and type II errors.
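A minimal sketch of the stage-1 screen described above, assuming a simple per-SNP linear regression of the trait on genotype; the data layout, variable names, and single-predictor model form are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def stage1_pigs(genotypes: pd.DataFrame, trait: np.ndarray, snp_to_gene: dict,
                alpha: float = 0.05) -> set:
    """Flag potentially interesting genes (PIGs): genes with at least one
    SNP whose univariate regression p-value survives Bonferroni correction."""
    n_tests = genotypes.shape[1]
    pigs = set()
    for snp in genotypes.columns:
        X = sm.add_constant(genotypes[snp].to_numpy(dtype=float))
        pval = sm.OLS(trait, X).fit().pvalues[1]
        if pval < alpha / n_tests:  # Bonferroni-corrected threshold
            pigs.add(snp_to_gene[snp])
    return pigs

# Toy usage with 3 SNPs in 2 genes and a simulated trait.
rng = np.random.default_rng(1)
geno = pd.DataFrame(rng.integers(0, 3, size=(697, 3)), columns=["s1", "s2", "s3"])
trait = 0.5 * geno["s1"].to_numpy() + rng.normal(size=697)
print(stage1_pigs(geno, trait, {"s1": "FLT1", "s2": "FLT1", "s3": "KDR"}))
```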
Faciszewski, T; Broste, S K; Fardon, D
1997-10-01
The purpose of the present study was to evaluate the accuracy of data regarding diagnoses of spinal disorders in administrative databases at eight different institutions. The records of 189 patients who had been managed for a disorder of the lumbar spine were independently reviewed by a physician who assigned the appropriate diagnostic codes according to the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM). The age range of the 189 patients was seventeen to eighty-four years. The six major diagnostic categories studied were herniation of a lumbar disc, a previous operation on the lumbar spine, spinal stenosis, cauda equina syndrome, acquired spondylolisthesis, and congenital spondylolisthesis. The diagnostic codes assigned by the physician were compared with the codes that had been assigned during the ordinary course of events by personnel in the medical records department of each of the eight hospitals. The accuracy of coding was also compared among the eight hospitals, and it was found to vary depending on the diagnosis. Although there were both false-negative and false-positive codes at each institution, most errors were related to the low sensitivity of coding for previous spinal operations: only seventeen (28 per cent) of sixty-one such diagnoses were coded correctly. Other errors in coding were less frequent, but their implications for conclusions drawn from the information in administrative databases depend on the frequency of a diagnosis and its importance in an analysis. This study demonstrated that the accuracy of a diagnosis of a spinal disorder recorded in an administrative database varies according to the specific condition being evaluated. It is necessary to document the relative accuracy of specific ICD-9-CM diagnostic codes in order to improve the ability to validate the conclusions derived from investigations based on administrative databases.
Al Hirschfeld's NINA as a prototype search task for studying perceptual error in radiology
NASA Astrophysics Data System (ADS)
Nodine, Calvin F.; Kundel, Harold L.
1997-04-01
Artist Al Hirschfeld has been hiding the word NINA (his daughter's name) in line drawings of theatrical scenes that have appeared in the New York Times for over 50 years. This paper shows how Hirschfeld's search task of finding the name NINA in his drawings illustrates basic perceptual principles of detection, discrimination, and decision-making commonly encountered in radiology search tasks. Hirschfeld typically hides NINA by camouflaging the letters of the name and blending them into scenic background details such as wisps of hair and folds of clothing. In a similar way, pulmonary nodules and breast lesions are camouflaged by anatomic features of the chest or breast image. Hirschfeld's hidden NINAs are sometimes missed because they are integrated into a Gestalt overview rather than differentiated from background features during focal scanning. This may be similar to overlooking an obvious nodule behind the heart in a chest x-ray image. Because it is a search game, Hirschfeld assigns a number to each drawing to indicate how many NINAs he has hidden, so as not to frustrate his viewers. In the radiologists' task, the number of targets detected in a medical image is determined by combining perceptual input with probabilities generated from clinical history and viewing experience. Thus, in the absence of truth, searching for abnormalities in x-ray images creates opportunities for recognition and decision errors (e.g., false positives and false negatives). We illustrate how camouflage decreases the conspicuity of both artistic and radiographic targets, compare the detection performance of radiologists with that of lay persons searching for NINAs, and show similarities and differences between the scanning strategies of the two groups based on eye-position data.
Tracht, Jessica M; Davis, Antoinette D; Fasciano, Danielle N; Eltoum, Isam-Eldin A
2017-10-01
The objective of this study was to compare cervical high-grade squamous intraepithelial lesions subcategorized as cervical intraepithelial neoplasia-3 (CIN-3)-positive after a negative cytology result but a positive high-risk human papillomavirus (HR-HPV) test to those with a negative HR-HPV test but positive cytology (atypical squamous cells of undetermined significance [ASCUS]-positive/HPV-negative), and to assess reasons for the discrepancies. The authors retrospectively analyzed women who underwent screening with cytology and HPV testing from 2010 through 2013. After a review of surgical specimens and cytology, discrepancies were classified as sampling or interpretation errors, and clinical and pathologic findings were compared. In total, 15,173 women (age range, 25-95 years; 7.1% aged < 30 years) underwent both HPV and cytologic testing, and 1,184 (8.4%) underwent biopsy. Cytology was positive in 19.4% of specimens, and HPV was positive in 14.5%. Eighty-four CIN-3-positive specimens were detected, including 55 that tested ASCUS-positive/HPV-positive, 11 that tested negative for intraepithelial lesion or malignancy (NILM)/HPV-positive, 10 that tested ASCUS-positive/HPV-negative, 3 that tested NILM/HPV-negative, and 5 with unsatisfactory tests. There was no significant difference between NILM/HPV-positive and ASCUS-positive/HPV-negative CIN-3 in terms of size, time to occurrence, the presence of a cytopathic effect, screening history, race, or age. Six of 11 NILM/HPV-positive cases were reclassified as ASCUS, indicating an interpretation error rate of 55% and a sampling error rate of 45%. No ASCUS-positive/HPV-negative cases were reclassified. Seven cases of CIN-3 with positive cytology were HPV-negative. There are no significant clinical or pathologic differences between NILM/HPV-positive and ASCUS-positive/HPV-negative CIN-3-positive specimens. Cytologic sampling or interpretation remains the main reason for discrepancies. However, HPV-negative CIN-3 with positive cytology exists and may be missed by primary HPV screening. Cancer Cytopathol 2017;125:795-805.
Response cost, reinforcement, and children's Porteus Maze qualitative performance.
Neenan, D M; Routh, D K
1986-09-01
Sixty fourth-grade children were given two different series of the Porteus Maze Test. The first series was given as a baseline, and the second series was administered under one of four different experimental conditions: control, response cost, positive reinforcement, or negative verbal feedback. Response cost and positive reinforcement, but not negative verbal feedback, led to significant decreases in the number of all types of qualitative errors in relation to the control group. The reduction of nontargeted as well as targeted errors provides evidence for the generalized effects of response cost and positive reinforcement.
Weinstein, Susan P.; McDonald, Elizabeth S.; Conant, Emily F.
2016-01-01
Digital breast tomosynthesis (DBT) represents a valuable addition to breast cancer screening by decreasing recall rates while increasing cancer detection rates. The increased accuracy achieved with DBT is due to the quasi–three-dimensional format of the reconstructed images and the ability to “scroll through” breast tissue in the reconstructed images, thereby reducing the effect of tissue superimposition found with conventional planar digital mammography. The margins of both benign and malignant lesions are more conspicuous at DBT, which allows improved lesion characterization, increased reader confidence, and improved screening outcomes. However, even with the improvements in accuracy achieved with DBT, there remain differences in breast cancer conspicuity by mammographic view. Early data suggest that breast cancers may be more conspicuous on craniocaudal (CC) views than on mediolateral oblique (MLO) views. While some very laterally located breast cancers may be visualized on only the MLO view, the increased conspicuity of cancers on the CC view compared with the MLO view suggests that DBT screening should be performed with two-view imaging. Even with the improved conspicuity of lesions at DBT, there may still be false-negative studies. Subtle lesions seen on only one view may be discounted, and dense and/or complex tissue patterns may make some cancers occult or extremely difficult to detect. Therefore, radiologists should be cognizant of both perceptual and cognitive errors to avoid potential pitfalls in lesion detection and characterization.
The Lancet Weight Determines Wheal Diameter in Response to Skin Prick Testing with Histamine.
Andersen, Hjalte H; Lundgaard, Anna Charlotte; Petersen, Anne S; Hauberg, Lise E; Sharma, Neha; Hansen, Sofie D; Elberling, Jesper; Arendt-Nielsen, Lars
2016-01-01
Skin prick test (SPT) is a common test for diagnosing immunoglobulin E-mediated allergies. In clinical routine, technicalities, human errors, or patient-related biases occasionally result in suboptimal diagnosis of sensitization. Although not previously assessed quantitatively, lancet weight is hypothesized to be important when performing SPT to minimize the frequency of false positives, false negatives, and unwanted discomfort. Accurate weight-controlled SPT was performed on the volar forearms and backs of 20 healthy subjects. Four predetermined lancet weights were applied (25 g, 85 g, 135 g, and 265 g) using two positive control histamine solutions (1 mg/mL and 10 mg/mL) and one negative control (saline). A total of 400 SPTs were conducted. The outcome parameters were wheal size, neurogenic inflammation (measured by superficial blood perfusion), frequency of bleeding, and the pain provoked by the lancet. The mean wheal diameter increased significantly as higher weights were applied to the SPT lancet, e.g., from 3.2 ± 0.28 mm at 25 g to 5.4 ± 1.7 mm at 265 g (p<0.01). The frequency of bleeding, the provoked pain, and the neurogenic inflammatory response likewise increased significantly. At 265 g, saline evoked two wheal responses (out of 160 pricks), both below 3 mm. The weight applied to the lancet during the SPT procedure is thus an important factor: higher lancet weights produce significantly larger wheal reactions, with potential diagnostic implications. This warrants additional research on the optimal lancet weight in relation to SPT guidelines to improve the specificity and sensitivity of the procedure.
Sethuraman, Usha; Kannikeswaran, Nirupama; Murray, Kyle P; Zidan, Marwan A; Chamberlain, James M
2015-06-01
Prescription errors occur frequently in pediatric emergency departments (PEDs). The effect of computerized physician order entry (CPOE) with an electronic medication alert system (EMAS) on these errors is unknown. The objective was to compare prescription error rates before and after the introduction of CPOE with EMAS in a PED, with the hypothesis that CPOE with EMAS would significantly reduce the rate and severity of prescription errors. A prospective comparison of a sample of outpatient medication prescriptions 5 months before and after CPOE with EMAS implementation (7,268 before and 7,292 after) was performed. Error types and rates, alert types and significance, and physician responses were noted. Medication errors were deemed significant if there was a potential to cause life-threatening injury, failure of therapy, or an adverse drug effect. There was a significant reduction in errors per 100 prescriptions (10.4 before vs. 7.3 after; absolute risk reduction = 3.1, 95% confidence interval [CI] = 2.2 to 4.0). Drug dosing error rates decreased from 8 to 5.4 per 100 (absolute risk reduction = 2.6, 95% CI = 1.8 to 3.4). Alerts were generated for 29.6% of prescriptions, with 45% involving drug dose range checking. The sensitivity of CPOE with EMAS in identifying errors in prescriptions was 45.1% (95% CI = 40.8% to 49.6%), and the specificity was 57% (95% CI = 55.6% to 58.5%). Prescribers modified 20% of the dosing alerts, preventing those errors from reaching the patient. Conversely, 11% of dosing alerts were overridden by prescribers: of the overridden dosing alerts, 88 (11.3%) were true alerts that resulted in medication errors, and 684 (88.6%) were false-positive alerts. CPOE with EMAS was associated with a decrease in overall prescription errors in our PED. Further system refinements are required to reduce the high false-positive alert rates.
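The sensitivity and specificity figures above come from a standard 2x2 confusion table of alerts against true prescription errors. The sketch below shows the arithmetic; the counts in the example call are made up, chosen only so the output lands near the reported values.

```python
def alert_performance(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity and specificity of an alert system from a 2x2 table.

    tp: true errors that triggered an alert     fn: true errors missed
    fp: alerts on error-free prescriptions      tn: error-free, no alert
    (The counts in the example call below are hypothetical.)
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

print(alert_performance(tp=240, fp=1800, fn=292, tn=2400))  # ~ (0.451, 0.571)
```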
Relationship between lenticular power and refractive error in children with hyperopia.
Tomomatsu, Takeshi; Kono, Shinjiro; Arimura, Shogo; Tomomatsu, Yoko; Matsumura, Takehiro; Takihara, Yuji; Inatani, Masaru; Takamura, Yoshihiro
2013-01-01
To evaluate the contribution of axial length and lenticular and corneal power to the spherical equivalent refractive error in children with hyperopia between 3 and 13 years of age, using noncontact optical biometry. Sixty-two children between 3 and 13 years of age with hyperopia (+2 diopters [D] or more) underwent automated refraction measurement with cycloplegia to measure spherical equivalent refractive error and corneal power. Axial length was measured using an optical biometer that does not require contact with the cornea. The refractive power of the lens was calculated using the Sanders-Retzlaff-Kraff formula. Simple regression analysis was used to evaluate the correlations among the optical parameters. There was a significant positive correlation between age and axial length (P = 0.0014); however, the degree of hyperopia did not decrease with aging (P = 0.59). There was a significant negative correlation between age and the refractive power of the lens (P = 0.0001) but not that of the cornea (P = 0.43). A significant negative correlation was also observed between the degree of hyperopia and lenticular power (P < 0.0001). Although this study is small-scale and cross-sectional, the analysis, using noncontact biometry, showed that lenticular power was negatively correlated with refractive error and age, indicating that lower lens power may contribute to the degree of hyperopia.
Contour-Based Corner Detection and Classification by Using Mean Projection Transform
Kahaki, Seyed Mostafa Mousavi; Nordin, Md Jan; Ashtari, Amir Hossein
2014-01-01
Image corner detection is a fundamental task in computer vision. Many applications require reliable detectors to accurately detect corner points, commonly achieved by using image contour information. The curvature definition is sensitive to local variation and edge aliasing, and available smoothing methods are not sufficient to address these problems properly. Hence, we propose Mean Projection Transform (MPT) as a corner classifier and parabolic fit approximation to form a robust detector. The first step is to extract corner candidates using MPT based on the integral properties of the local contours in both the horizontal and vertical directions. Then, an approximation of the parabolic fit is calculated to localize the candidate corner points. The proposed method presents fewer false-positive (FP) and false-negative (FN) points compared with recent standard corner detection techniques, especially in comparison with curvature scale space (CSS) methods. Moreover, a new evaluation metric, called accuracy of repeatability (AR), is introduced. AR combines repeatability and the localization error (Le) for finding the probability of correct detection in the target image. The output results exhibit better repeatability, localization, and AR for the detected points compared with the criteria in original and transformed images.
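As a rough illustration of how a combined repeatability-plus-localization score might be computed, consider the toy function below. It does not reproduce the paper's exact AR formula; the matching tolerance and the combination rule are assumptions made only to show the idea of discounting the hit rate by localization error.

```python
import numpy as np

def accuracy_of_repeatability(detected: np.ndarray, ground_truth: np.ndarray,
                              tol: float = 3.0) -> float:
    """Toy AR-style score: fraction of ground-truth corners matched within
    `tol` pixels (repeatability), discounted by the mean localization error
    (Le) of the matches. Illustrative only; not the paper's exact formula."""
    loc_errors = []
    for gt in ground_truth:
        d = np.linalg.norm(detected - gt, axis=1).min()
        if d <= tol:
            loc_errors.append(d)
    repeatability = len(loc_errors) / len(ground_truth)
    mean_le = np.mean(loc_errors) if loc_errors else tol
    return repeatability * (1.0 - mean_le / tol)

gt = np.array([[10.0, 10.0], [40.0, 25.0], [70.0, 60.0]])
det = np.array([[11.0, 10.0], [41.0, 26.0], [90.0, 90.0]])
print(accuracy_of_repeatability(det, gt))
```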
Mandelker, Diana; Schmidt, Ryan J; Ankala, Arunkanth; McDonald Gibson, Kristin; Bowser, Mark; Sharma, Himanshu; Duffy, Elizabeth; Hegde, Madhuri; Santani, Avni; Lebo, Matthew; Funke, Birgit
2016-12-01
Next-generation sequencing (NGS) is now routinely used to interrogate large sets of genes in a diagnostic setting. Regions of high sequence homology continue to be a major challenge for short-read technologies and can lead to false-positive and false-negative diagnostic errors. At the scale of whole-exome sequencing (WES), laboratories may be limited in their knowledge of genes and regions that pose technical hurdles due to high homology. We have created an exome-wide resource that catalogs highly homologous regions and is tailored toward diagnostic applications. This resource was developed using a mappability-based approach suited to current Sanger and NGS protocols. Gene-level and exon-level lists delineate regions that are difficult or impossible to analyze via standard NGS. These regions are ranked by degree of affectedness, annotated for medical relevance, and classified by the type of homology (within-gene, different functional gene, known pseudogene, uncharacterized noncoding region). Additionally, we provide a list of exons that cannot be analyzed by short-amplicon Sanger sequencing. This resource can help guide clinical test design, supplemental assay implementation, and results interpretation in the context of high homology. Genet Med 2016;18(12):1282-1289.
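A minimal sketch of the kind of mappability-based screen described above; the score semantics (1.0 = uniquely mappable) and the cutoffs are assumptions for illustration, not the resource's actual thresholds.

```python
def classify_exon(mappability, low_cutoff: float = 1.0,
                  max_low_fraction: float = 0.05) -> str:
    """Flag an exon as problematic for short-read NGS when too many of its
    positions are non-uniquely mappable (score < 1.0). Cutoffs are assumed."""
    low = sum(1 for s in mappability if s < low_cutoff)
    return "problematic" if low / len(mappability) > max_low_fraction else "ok"

# Toy per-base mappability track for a 10 bp "exon" overlapping a pseudogene.
print(classify_exon([1.0, 1.0, 0.5, 0.5, 0.5, 1.0, 1.0, 1.0, 1.0, 1.0]))  # problematic
```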
Mossman, Douglas
2013-01-01
The last two decades have witnessed major changes in the way that mental health professionals assess, describe, and think about persons' risk for future violence. Psychiatrists and psychologists have gone from believing that they could not predict violence to feeling certain they can assess violence risk with well-above-chance accuracy. Receiver operating characteristic (ROC) analysis has played a central role in changing this view. This article reviews the key concepts underlying ROC methods, the meaning of the area under the ROC curve (AUC), the relationship between AUC and effect size d, and what these two indices tell us about evaluations of violence risk. The area under the ROC curve and d provide succinct but incomplete descriptions of discrimination capacity. These indices do not provide details about sensitivity-specificity trade-offs; they do not tell us how to balance false-positive and false-negative errors; and they do not determine whether a diagnostic system is accurate enough to make practically useful distinctions between violent and non-violent subject groups. Justifying choices or clinical practices requires a contextual investigation of outcomes, a process that takes us beyond simply knowing global indices of accuracy.
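Under the standard equal-variance binormal model, AUC and the effect size d are linked by AUC = Φ(d/√2). The short sketch below shows this conversion, which is presumably the relationship the review discusses; treat it as an illustrative instance rather than the article's derivation.

```python
from scipy.stats import norm

def auc_from_d(d: float) -> float:
    """AUC implied by effect size d under the equal-variance binormal model:
    AUC = Phi(d / sqrt(2))."""
    return norm.cdf(d / 2 ** 0.5)

for d in (0.2, 0.5, 0.8):  # conventional small/medium/large effects
    print(f"d = {d}: AUC = {auc_from_d(d):.3f}")  # 0.556, 0.638, 0.714
```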
Evans, Karla K.; Birdwell, Robyn L.; Wolfe, Jeremy M.
2013-01-01
Mammography is an important tool in the early detection of breast cancer. However, the perceptual task is difficult, and a significant proportion of cancers are missed. Visual search experiments show that miss (false-negative) errors are elevated when targets are rare (low prevalence), but it is unknown whether low prevalence is a significant factor under real-world, clinical conditions. Here we show that expert mammographers in a real, low-prevalence clinical setting miss a much higher percentage of cancers than they miss when searching for the same cancers under high-prevalence conditions. We inserted 50 positive and 50 negative cases into the normal workflow of the breast cancer screening service of an urban hospital over the course of nine months. This rate was slow enough not to markedly raise disease prevalence in the radiologists' daily practice. Six radiologists subsequently reviewed all 100 cases in a session where the prevalence of disease was 50%. In the clinical setting, participants missed 30% of the cancers. In the high-prevalence setting, participants missed just 12% of the same cancers. Under most circumstances, this low-prevalence effect is probably adaptive. It is usually wise to be conservative about reporting events with very low base rates (Was that a flying saucer? Probably not.). However, while this response to low prevalence appears to be strongly engrained in human visual search mechanisms, it may not be as adaptive in socially important, low-prevalence tasks like medical screening. While the results of any one study must be interpreted cautiously, these data are consistent with the conclusion that this behavioral response to low prevalence could be a substantial contributor to miss errors in breast cancer screening.
ERIC Educational Resources Information Center
Pourtois, Gilles; Vocat, Roland; N'Diaye, Karim; Spinelli, Laurent; Seeck, Margitta; Vuilleumier, Patrik
2010-01-01
We studied error monitoring in a human patient with unique implantation of depth electrodes in both the left dorsal cingulate gyrus and medial temporal lobe prior to surgery. The patient performed a speeded go/nogo task and made a substantial number of commission errors (false alarms). As predicted, intracranial Local Field Potentials (iLFPs) in…
Many tests of significance: new methods for controlling type I errors.
Keselman, H J; Miller, Charles W; Holland, Burt
2011-12-01
There have been many discussions of how Type I errors should be controlled when many hypotheses are tested (e.g., all possible comparisons of means, correlations, proportions, the coefficients in hierarchical models, etc.). By and large, researchers have adopted familywise (FWER) control, though this practice certainly is not universal. Familywise control is intended to deal with the multiplicity issue of computing many tests of significance, yet such control is conservative--that is, less powerful--compared to per test/hypothesis control. The purpose of our article is to introduce the readership, particularly those readers familiar with issues related to controlling Type I errors when many tests of significance are computed, to newer methods that provide protection from the effects of multiple testing, yet are more powerful than familywise controlling methods. Specifically, we introduce a number of procedures that control the k-FWER. These methods--say, 2-FWER instead of 1-FWER (i.e., FWER)--are equivalent to specifying that the probability of 2 or more false rejections is controlled at .05, whereas FWER controls the probability of any (i.e., 1 or more) false rejections at .05. 2-FWER implicitly tolerates 1 false rejection and makes no explicit attempt to control the probability of its occurrence, unlike FWER, which tolerates no false rejections at all. More generally, k-FWER tolerates k - 1 false rejections, but controls the probability of k or more false rejections at α =.05. We demonstrate with two published data sets how more hypotheses can be rejected with k-FWER methods compared to FWER control.
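One of the simplest k-FWER procedures is the generalized Bonferroni rule of Lehmann and Romano, which rejects any hypothesis with p ≤ kα/m. The sketch below contrasts it with ordinary (1-FWER) Bonferroni; whether this specific rule is among the procedures the article introduces is not stated here, so treat it as an illustrative instance of k-FWER control.

```python
import numpy as np

def k_fwer_bonferroni(pvals, k: int = 2, alpha: float = 0.05) -> np.ndarray:
    """Generalized Bonferroni (Lehmann & Romano): rejecting all hypotheses
    with p <= k * alpha / m controls the probability of k or more false
    rejections at level alpha. k = 1 recovers ordinary Bonferroni."""
    m = len(pvals)
    return np.asarray(pvals) <= k * alpha / m

pvals = [0.001, 0.004, 0.012, 0.020, 0.300]
print(k_fwer_bonferroni(pvals, k=1))  # Bonferroni: rejects 2 hypotheses
print(k_fwer_bonferroni(pvals, k=2))  # 2-FWER:     rejects 4 hypotheses
```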
Five-year lidar observational results and effects of El Chichon particles on Umkehr ozone data
NASA Astrophysics Data System (ADS)
Uchino, Osamu; Tabata, Isao; Kai, Kenji; Akita, Iwao
1988-08-01
Based on the values of the integrated backscattering coefficient B obtained from the ruby lidar measurements at the Meteorological Research Institute (MRI, at Tsukuba, Japan), the effect of dust particles from the two 1982 volcanic eruptions of Mt. El Chichon on the Umkehr ozone data at the Tateno Aerological Observatory was determined. In addition, the effects of the aerosols on the Umkehr ozone data at Arosa, Switzerland were investigated using lidar data collected at Garmisch-Partenkirchen, Germany. It was found that both stratospheric and tropospheric aerosols induced a significant negative ozone error in the uppermost layers (33-47 km), caused a small and usually negative ozone error in layers between 16 and 33 km, and induced a significant positive ozone error in layers between 6 and 16 km.
Awareness of deficits and error processing after traumatic brain injury.
Larson, Michael J; Perlstein, William M
2009-10-28
Severe traumatic brain injury is frequently associated with alterations in performance monitoring, including reduced awareness of physical and cognitive deficits. We examined the relationship between awareness of deficits and electrophysiological indices of performance monitoring, including the error-related negativity and posterror positivity (Pe) components of the scalp-recorded event-related potential, in 16 traumatic brain injury survivors who completed a Stroop color-naming task while event-related potential measurements were recorded. Awareness of deficits was measured as the discrepancy between patient and significant-other ratings on the Frontal Systems Behavior Scale. The amplitude of the Pe, but not error-related negativity, was reliably associated with decreased awareness of deficits. Results indicate that Pe amplitude may serve as an electrophysiological indicator of awareness of abilities and deficits.
Sadder and less accurate? False memory for negative material in depression.
Joormann, Jutta; Teachman, Bethany A; Gotlib, Ian H
2009-05-01
Previous research has demonstrated that induced sad mood is associated with increased accuracy of recall in certain memory tasks; the effects of clinical depression, however, are likely to be quite different. The authors used the Deese-Roediger-McDermott paradigm to examine the impact of clinical depression on erroneous recall of neutral and/or emotional stimuli. Specifically, they presented Deese-Roediger-McDermott lists that were highly associated with negative, neutral, or positive lures and compared participants diagnosed with major depressive disorder and nondepressed control participants on the accuracy of their recall of presented material and their false recall of never-presented lures. Compared with control participants, major depressive disorder participants recalled fewer words that had been previously presented but were more likely to falsely recall negative lures; there were no differences between major depressive disorder and control participants in false recall of positive or neutral lures. These findings indicate that depression is associated with false memories of negative material.
Rahal, M; Kervaire, B; Villard, J; Tiercy, J-M
2008-03-01
Human leukocyte antigen (HLA) typing by polymerase chain reaction-sequence-specific oligonucleotide (PCR-SSO) hybridization on solid phase (microbead assay) or polymerase chain reaction-sequence-specific primers (PCR-SSP) requires interpretation software to detect all possible allele combinations. These programs propose allele calls by taking into account false-positive or false-negative signal(s). The laboratory has the option to validate typing results in the presence of strongly cross-reacting or apparently false-negative signals. Alternatively, these seemingly aberrant signals may disclose novel variants. We report here four new HLA-B (B*5620 and B*5716) and HLA-DRB1 (DRB1*110107 and DRB1*1474) alleles that were detected through apparent false-negative or false-positive hybridization or amplification patterns and ultimately resolved by sequencing. To avoid allele misassignments, a comprehensive evaluation of acquired data, as documented in a quality assurance system, is therefore required to confirm unambiguous typing interpretation.
Ruangsetakit, Varee
2015-11-01
To re-examine the relative accuracy of intraocular lens (IOL) power calculation by immersion ultrasound biometry (IUB) and partial coherence interferometry (PCI), based on a new approach that restricts attention to the cases in which the IUB and PCI IOL assignments disagree. Prospective observational study of 108 eyes that underwent cataract surgery at Taksin Hospital. Two randomly chosen halves of the sample eyes were implanted with the IUB- and PCI-assigned lenses, respectively. Postoperative refractive errors were measured in the fifth week. The more accurate calculation was defined by significantly smaller mean absolute errors (MAEs) and root mean squared errors (RMSEs) away from emmetropia. The distributions of the errors were examined to ensure that the higher accuracy was clinically as well as statistically significant. The MAE and RMSE were smaller for PCI (0.5106 diopters [D] and 0.6037 D) than for IUB (0.7000 D and 0.8062 D). The higher accuracy came principally from negative errors, i.e., myopia: for negative errors, the MAE and RMSE were 0.7955 D and 0.8562 D for IUB versus 0.5185 D and 0.5853 D for PCI, and these differences were significant. 72.34% of PCI errors fell within the clinically accepted range of ±0.50 D, whereas 50% of IUB errors did. PCI's higher accuracy was significant both statistically and clinically, meaning that lens implantation based on PCI assignments could improve postoperative outcomes over those based on IUB assignments.
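The accuracy criteria above reduce to two familiar formulas; the sketch below computes them from a list of postoperative spherical-equivalent errors (the sample values are hypothetical, not the study's data).

```python
import numpy as np

def mae_rmse(errors_diopters):
    """MAE and RMSE of postoperative refractive errors relative to
    emmetropia (0 D)."""
    e = np.asarray(errors_diopters, dtype=float)
    return np.abs(e).mean(), np.sqrt((e ** 2).mean())

# Hypothetical spherical-equivalent errors (D) for one biometry method.
mae, rmse = mae_rmse([-0.75, -0.25, 0.00, 0.50, -0.50])
print(f"MAE = {mae:.4f} D, RMSE = {rmse:.4f} D")
```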
Baldwin, DeWitt C; Daugherty, Steven R; Ryan, Patrick M; Yaghmour, Nicholas A; Philibert, Ingrid
2018-04-01
Medical errors and patient safety are major concerns for the medical and medical education communities. Improving clinical supervision for residents is important in avoiding errors, yet little is known about how residents perceive the adequacy of their supervision and how this relates to medical errors and other education outcomes, such as learning and satisfaction. We analyzed data from a 2009 survey of residents in 4 large specialties regarding the adequacy and quality of the supervision they receive, as well as associations with self-reported data on medical errors and residents' perceptions of their learning environment. Residents' reports of working without adequate supervision were lower than in a 1999 survey for all 4 specialties, and residents were least likely to rate "lack of supervision" as a problem. While few residents reported that they received inadequate supervision, problems with supervision were negatively correlated with sufficient time for clinical activities, overall ratings of the residency experience, and attending physicians as a source of learning. Problems with supervision were positively correlated with resident reports that they had made a significant medical error, had been belittled or humiliated, or had observed others falsifying medical records. Although working without supervision was not a pervasive problem in 2009, when it happened, it appeared to have negative consequences. The association between inadequate supervision and medical errors is of particular concern.
Time-dependent phase error correction using digital waveform synthesis
Doerry, Armin W.; Buskirk, Stephen
2017-10-10
The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, the amplifier power droop effect can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified, and a corresponding complementary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time-domain correction can be applied by a phase error correction look-up table incorporated into a waveform phase generator.
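The pre-distortion idea reads naturally as a complex-baseband operation: multiply the synthesized waveform by the complement of the anticipated phase error so that the downstream error cancels. The sketch below uses a hypothetical quadratic droop-induced phase error; the patented correction itself uses a look-up table in the waveform phase generator, which is not reproduced here.

```python
import numpy as np

fs, T = 1.0e6, 1.0e-3                        # sample rate (Hz), pulse length (s)
t = np.arange(0.0, T, 1.0 / fs)
chirp = np.exp(1j * np.pi * 2.0e8 * t ** 2)  # ideal LFM waveform (200 kHz sweep)

phi_err = -0.5 * (t / T) ** 2                # assumed droop-induced phase error (rad)
predistorted = chirp * np.exp(-1j * phi_err) # complementary distortion at synthesis
after_droop = predistorted * np.exp(1j * phi_err)  # downstream stage adds the error

assert np.allclose(after_droop, chirp)       # error negated; ideal waveform recovered
```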
27 CFR 46.245 - Errors in records.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 27 Alcohol, Tobacco Products and Firearms 2 2010-04-01 2010-04-01 false Errors in records. 46.245 Section 46.245 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY (CONTINUED) TOBACCO MISCELLANEOUS REGULATIONS RELATING TO TOBACCO PRODUCTS AND...
27 CFR 46.245 - Errors in records.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 27 Alcohol, Tobacco Products and Firearms 2 2011-04-01 2011-04-01 false Errors in records. 46.245 Section 46.245 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY (CONTINUED) TOBACCO MISCELLANEOUS REGULATIONS RELATING TO TOBACCO PRODUCTS AND...
Dynamic Neural Correlates of Motor Error Monitoring and Adaptation during Trial-to-Trial Learning
Tan, Huiling; Jenkinson, Ned
2014-01-01
A basic EEG feature upon voluntary movements in healthy human subjects is a β (13–30 Hz) band desynchronization followed by a postmovement event-related synchronization (ERS) over contralateral sensorimotor cortex. The functional implications of these changes remain unclear. We hypothesized that, because β ERS follows movement, it may reflect the degree of error in that movement, and the salience of that error to the task at hand. As such, the signal might underpin trial-to-trial modifications of the internal model that informs future movements. To test this hypothesis, EEG was recorded in healthy subjects while they moved a joystick-controlled cursor to visual targets on a computer screen, with different rotational perturbations applied between the joystick and cursor. We observed consistently lower β ERS in trials with large error, even when other possible motor confounds, such as reaction time, movement duration, and path length, were controlled, regardless of whether the perturbation was random or constant. There was a negative trial-to-trial correlation between the size of the absolute initial angular error and the amplitude of the β ERS, and this negative correlation was enhanced when other contextual information about the behavioral salience of the angular error, namely, the bias and variance of errors in previous trials, was additionally considered. These same features also had an impact on the behavioral performance. The findings suggest that the β ERS reflects neural processes that evaluate motor error and do so in the context of the prior history of errors.
Lindström, Björn R; Mattsson-Mårn, Isak Berglund; Golkar, Armita; Olsson, Andreas
2013-01-01
Cognitive control is needed when mistakes have consequences, especially when such consequences are potentially harmful. However, little is known about how the aversive consequences of deficient control affect behavior. To address this issue, participants performed a two-choice response time task where error commissions were expected to be punished by electric shocks during certain blocks. By manipulating (1) the perceived punishment risk (no, low, high) associated with error commissions, and (2) response conflict (low, high), we showed that motivation to avoid punishment enhanced performance during high response conflict. As a novel index of the processes enabling successful cognitive control under threat, we explored electromyographic activity in the corrugator supercilii (cEMG) muscle of the upper face. The corrugator supercilii is partially controlled by the anterior midcingulate cortex (aMCC) which is sensitive to negative affect, pain and cognitive control. As hypothesized, the cEMG exhibited several key similarities with the core temporal and functional characteristics of the Error-Related Negativity (ERN) ERP component, the hallmark index of cognitive control elicited by performance errors, and which has been linked to the aMCC. The cEMG was amplified within 100 ms of error commissions (the same time-window as the ERN), particularly during the high punishment risk condition where errors would be most aversive. Furthermore, similar to the ERN, the magnitude of error cEMG predicted post-error response time slowing. Our results suggest that cEMG activity can serve as an index of avoidance motivated control, which is instrumental to adaptive cognitive control when consequences are potentially harmful.
Kaur, Gurvinder; Koshy, Jacob; Thomas, Satish; Kapoor, Harpreet; Zachariah, Jiju George; Bedi, Sahiba
2016-04-01
Early detection and treatment of vision problems in children is imperative to meet the challenges of childhood blindness. Considering the inequitable distribution of trained manpower and the limited access of the majority of our population to quality eye care services, innovative community-based strategies such as teacher training in vision screening need to be developed for effective utilization of the available human resources. To evaluate the effectiveness of introducing teachers as first-level vision screeners, teacher training programs were conducted for school teachers to educate them about childhood ocular disorders and the importance of their early detection. Teachers from government and semi-government schools located in Ludhiana were given training in vision screening and then conducted vision screening of children in their schools. Subsequently, an ophthalmology team visited these schools to re-evaluate the children identified with low vision. Refraction was performed for all children identified with refractive errors, and spectacles were prescribed. Children requiring further evaluation were referred to the base hospital. The project was done in two phases, and true positives, false positives, true negatives, and false negatives were calculated for evaluation. In phase 1, teachers from 166 schools underwent training in vision screening. The teachers screened 30,205 children and reported eye problems in 4,523 (14.97%) children. Subsequently, the ophthalmology team examined 4,150 children and confirmed eye problems in 2,137 children. Thus, the teachers correctly identified eye problems (true positives) in 47.25% of the children they flagged, and only 13.69% of children had to be examined by the ophthalmology team, reducing its workload. Similarly, in phase 2, 46.22% of flagged children were correctly identified as having eye problems (true positives). By random sampling, 95.65% of children were correctly identified as normal (true negatives) by the teachers. Considering the high true-negative rate, the reasonably good true-positive rate, and the wide coverage provided by the program, vision screening in schools by teachers is an effective method of identifying children with low vision. This strategy is also valuable in reducing the workload of eye care staff.
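The phase-1 percentages follow directly from the counts reported above; a short worked check (note that the workload figure comes out near, though not exactly at, the quoted 13.69%, suggesting a slightly different denominator in the original).

```python
screened = 30205   # children screened by teachers
reported = 4523    # children the teachers flagged with eye problems
examined = 4150    # flagged children re-examined by the ophthalmology team
confirmed = 2137   # flagged children with confirmed eye problems

print(f"flagged: {reported / screened:.2%}")               # 14.97%, as reported
print(f"true positives: {confirmed / reported:.2%}")       # 47.25%, as reported
print(f"specialist workload: {examined / screened:.2%}")   # ~13.74% (abstract: 13.69%)
```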
Armbrecht, Anne-Simone; Wöhrmann, Anne; Gibbons, Henning; Stahl, Jutta
2010-09-01
The present electrophysiological study investigated the temporal development of response conflict and the effects of diverging conflict sources on error(-related) negativity (Ne). Eighteen participants performed a combined stop-signal flanker task, which comprised two different conflict sources: a left-right and a go-stop response conflict. It is assumed that the Ne reflects the activity of a conflict monitoring system and thus increases according to (i) the number of conflict sources and (ii) the temporal development of the conflict activity. No increase of the Ne amplitude after double errors (comprising two conflict sources) as compared to hand- and stop-errors (comprising one conflict source) was found, whereas a higher Ne amplitude was observed after a delayed stop-signal onset. The results suggest that the Ne is not sensitive to an increase in the number of conflict sources, but to the temporal dynamics of a go-stop response conflict.
Boland, Julie E; Queen, Robin
2016-01-01
The increasing prevalence of social media means that we often encounter written language characterized by both stylistic variation and outright errors. How does the personality of the reader modulate reactions to non-standard text? Experimental participants read 'email responses' to an ad for a housemate that either contained no errors or had been altered to include either typos (e.g., teh) or homophonous grammar errors (grammos, e.g., to/too, it's/its). Participants completed a 10-item evaluation scale for each message, which measured their impressions of the writer. In addition participants completed a Big Five personality assessment and answered demographic and language attitude questions. Both typos and grammos had a negative impact on the evaluation scale. This negative impact was not modulated by age, education, electronic communication frequency, or pleasure reading time. In contrast, personality traits did modulate assessments, and did so in distinct ways for grammos and typos.
Invited Commentary: Beware the Test-Negative Design.
Westreich, Daniel; Hudgens, Michael G
2016-09-01
In this issue of the Journal, Sullivan et al. (Am J Epidemiol. 2016;184(5):345-353) carefully examine the theoretical justification for use of the test-negative design, a common observational study design, in assessing the effectiveness of influenza vaccination. Using modern causal inference methods (in particular, directed acyclic graphs), they describe different threats to the validity of inferences drawn about the effect of vaccination from test-negative design studies. These threats include confounding, selection bias, and measurement error in either the exposure or the outcome. While confounding and measurement error are common in observational studies, the potential for selection bias inherent in the test-negative design brings into question the validity of inferences drawn from such studies.
Dehon, Hedwige; Larøi, Frank; Van der Linden, Martial
2010-10-01
This study examined the influence of emotional valence on the production of DRM false memories (Roediger & McDermott, 1995). Participants were presented with neutral, positive, or negative DRM lists for a later recognition (Experiment 1) or recall (Experiment 2) test. In both experiments, confidence and recollective experience (i.e., "Remember-Know" judgments; Tulving, 1985) were also assessed. Results consistently showed that, compared with neutral lists, affective lists induced more false recognition and recall of nonpresented critical lures. Moreover, although confidence ratings did not differ across false remembering of the different kinds of lists, "Remember" responses were more often associated with negative than with positive or neutral false remembering of the critical lures. In contrast, positive false remembering of the critical lures was more often associated with "Know" responses. These results are discussed in light of the Paradoxical Negative Emotion (PNE) hypothesis (Porter, Taylor, & ten Brinke, 2008).
The role of attention at retrieval on the false recognition of negative emotional DRM lists.
Shah, Datin; Knott, Lauren M
2018-02-01
This study examined the role of attention at retrieval on the false recognition of emotional items using the Deese-Roediger-McDermott (DRM) paradigm. Previous research has shown that divided attention at test increases false remember judgements for neutral critical lures. However, no research has yet directly assessed emotional false memories when attention is manipulated at retrieval. To examine this, participants studied negative (low in valence and high in arousal) and neutral DRM lists and completed recognition tests under conditions of full and divided attention. Results revealed that divided attention at retrieval increased false remember judgements for all critical lures compared to retrieval under full attention, but in both retrieval conditions, false memories were greater for negative compared to neutral stimuli. We believe that this is due to reliance on a more easily accessible (meaning of the word) but less diagnostic form of source monitoring, amplified under conditions of divided attention.
Performance Monitoring Applied to System Supervision
Somon, Bertille; Campagne, Aurélie; Delorme, Arnaud; Berberian, Bruno
2017-01-01
Nowadays, automation is present in every aspect of our daily life and has some benefits. Nonetheless, empirical data suggest that traditional automation has many negative performance and safety consequences as it changed task performers into task supervisors. In this context, we propose to use recent insights into the anatomical and neurophysiological substrates of action monitoring in humans, to help further characterize performance monitoring during system supervision. Error monitoring is critical for humans to learn from the consequences of their actions. A wide variety of studies have shown that the error monitoring system is involved not only in our own errors, but also in the errors of others. We hypothesize that the neurobiological correlates of the self-performance monitoring activity can be applied to system supervision. At a larger scale, a better understanding of system supervision may allow its negative effects to be anticipated or even countered. This review is divided into three main parts. First, we assess the neurophysiological correlates of self-performance monitoring and their characteristics during error execution. Then, we extend these results to include performance monitoring and error observation of others or of systems. Finally, we provide further directions in the study of system supervision and assess the limits preventing us from studying a well-known phenomenon: the Out-Of-the-Loop (OOL) performance problem. PMID:28744209
Wang, Yan; Yang, Lixia; Wang, Yan
2014-01-01
Past event-related potentials (ERPs) research shows that, after exerting effortful emotion inhibition, the neural correlates of performance monitoring (e.g. error-related negativity) were weakened. An undetermined issue is whether all forms of emotion regulation uniformly impair later performance monitoring. The present study compared the cognitive consequences of two emotion regulation strategies, namely suppression and reappraisal. Participants were instructed to suppress their emotions while watching a sad movie, or to adopt a neutral and objective attitude toward the movie, or to just watch the movie carefully. Then after a mood scale, all participants completed an ostensibly unrelated Stroop task, during which ERPs (i.e. error-related negativity (ERN), post-error positivity (Pe) and N450) were obtained. Reappraisal group successfully decreased their sad emotion, relative to the other two groups. Compared with participants in the control group and the reappraisal group, those who suppressed their emotions during the sad movie showed reduced ERN after error commission. Participants in the suppression group also made more errors in incongruent Stroop trials than the other two groups. There were no significant main effects or interactions of group for reaction time, Pe and N450. Results suggest that reappraisal is both more effective and less resource-depleting than suppression. PMID:24777113
Brain State Before Error Making in Young Patients With Mild Spastic Cerebral Palsy.
Hakkarainen, Elina; Pirilä, Silja; Kaartinen, Jukka; van der Meere, Jaap J
2015-10-01
In the present experiment, children with mild spastic cerebral palsy and a control group carried out a memory recognition task. The key question was whether errors of the patient group are foreshadowed by attention lapses, by weak motor preparation, or by both. Reaction times together with event-related potentials associated with motor preparation (frontal late contingent negative variation), attention (parietal P300), and response evaluation (parietal error-preceding positivity) were investigated in instances where 3 subsequent correct trials preceded an error. The findings indicated that error responses of the patient group are foreshadowed by weak motor preparation in correct trials directly preceding an error.
Computation and measurement of cell decision making errors using single cell data
Habibi, Iman; Cheong, Raymond; Lipniacki, Tomasz; Levchenko, Andre; Emamian, Effat S.; Abdi, Ali
2017-01-01
In this study a new computational method is developed to quantify decision making errors in cells, caused by noise and signaling failures. Analysis of tumor necrosis factor (TNF) signaling pathway which regulates the transcription factor Nuclear Factor κB (NF-κB) using this method identifies two types of incorrect cell decisions called false alarm and miss. These two events represent, respectively, declaring a signal which is not present and missing a signal that does exist. Using single cell experimental data and the developed method, we compute false alarm and miss error probabilities in wild-type cells and provide a formulation which shows how these metrics depend on the signal transduction noise level. We also show that in the presence of abnormalities in a cell, decision making processes can be significantly affected, compared to a wild-type cell, and the method is able to model and measure such effects. In the TNF—NF-κB pathway, the method computes and reveals changes in false alarm and miss probabilities in A20-deficient cells, caused by cell’s inability to inhibit TNF-induced NF-κB response. In biological terms, a higher false alarm metric in this abnormal TNF signaling system indicates perceiving more cytokine signals which in fact do not exist at the system input, whereas a higher miss metric indicates that it is highly likely to miss signals that actually exist. Overall, this study demonstrates the ability of the developed method for modeling cell decision making errors under normal and abnormal conditions, and in the presence of transduction noise uncertainty. Compared to the previously reported pathway capacity metric, our results suggest that the introduced decision error metrics characterize signaling failures more accurately. This is mainly because while capacity is a useful metric to study information transmission in signaling pathways, it does not capture the overlap between TNF-induced noisy response curves. PMID:28379950
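The false alarm and miss probabilities described above follow the standard binary-decision (signal detection) formulation. The sketch below is a minimal illustration of that formulation, assuming Gaussian response distributions and a fixed decision threshold; the distribution parameters and threshold are invented for illustration and are not taken from the study.

```python
# Minimal signal-detection sketch: false-alarm and miss probabilities for
# a binary cell decision. All parameters are hypothetical, not the study's.
from scipy.stats import norm

mu_off, sigma_off = 0.0, 1.0   # response distribution when no TNF signal is present
mu_on, sigma_on = 2.0, 1.2     # response distribution when the TNF signal is present
threshold = 1.0                # decision boundary on the measured response

# False alarm: respond "signal present" when it is absent.
p_false_alarm = 1 - norm.cdf(threshold, loc=mu_off, scale=sigma_off)
# Miss: respond "signal absent" when it is present.
p_miss = norm.cdf(threshold, loc=mu_on, scale=sigma_on)

print(f"P(false alarm) = {p_false_alarm:.3f}, P(miss) = {p_miss:.3f}")
```

Greater overlap between the two response curves (i.e., higher transduction noise) raises both error probabilities, which is the overlap the authors argue the capacity metric fails to capture.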
Elze, Tobias; Baniasadi, Neda; Jin, Qingying; Wang, Hui; Wang, Mengyu
2017-12-01
Retinal nerve fiber layer thickness (RNFLT) measured by optical coherence tomography (OCT) is widely used in clinical practice to support glaucoma diagnosis. Clinicians frequently interpret peripapillary RNFLT areas marked as abnormal by OCT machines. However, presently, clinical OCT machines do not take individual retinal anatomy variation into account, and corresponding diagnostic biases have been shown particularly for patients with ametropia. The angle between the two major temporal retinal arteries (interartery angle, IAA) is considered a fundamental retinal ametropia marker. Here, we analyze peripapillary spectral domain OCT RNFLT scans of 691 glaucoma patients and apply multivariate logistic regression to quantitatively compare the diagnostic bias of spherical equivalent (SE) of refractive error and IAA and to identify the precise retinal locations of false-positive/negative abnormality marks. Independent of glaucoma severity (visual field mean deviation), IAA/SE variations biased abnormality marks on OCT RNFLT printouts at 36.7%/22.9% of the peripapillary area, respectively. 17.2% of the biases due to SE are not explained by IAA variation, particularly in inferonasal areas. To conclude, the inclusion of SE and IAA in OCT RNFLT norms would help to increase diagnostic accuracy. Our detailed location maps may help clinicians to reduce diagnostic bias while interpreting retinal OCT scans.
Xia, Qiangwei; Wang, Tiansong; Park, Yoonsuk; Lamont, Richard J.; Hackett, Murray
2009-01-01
Differential analysis of whole cell proteomes by mass spectrometry has largely been applied using various forms of stable isotope labeling. While metabolic stable isotope labeling has been the method of choice, it is often not possible to apply such an approach. Four different label-free ways of calculating expression ratios in a classic “two-state” experiment are compared: signal intensity at the peptide level, signal intensity at the protein level, spectral counting at the peptide level, and spectral counting at the protein level. The quantitative data were mined from a dataset of 1245 qualitatively identified proteins, about 56% of the protein-encoding open reading frames from Porphyromonas gingivalis, a Gram-negative intracellular pathogen being studied under extracellular and intracellular conditions. Two different control populations were compared against P. gingivalis internalized within a model human target cell line. The q-value statistic, a measure of false discovery rate previously applied to transcription microarrays, was applied to proteomics data. For spectral counting, the most logically consistent estimate of random error came from applying the locally weighted scatter plot smoothing procedure (LOWESS) to the most extreme ratios generated from a control technical replicate, thus setting upper and lower bounds for the region of experimentally observed random error. PMID:19337574
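As a rough illustration of the LOWESS-bounding idea above, the sketch below smooths absolute log-ratios from a simulated control-versus-control replicate against protein abundance; the smoothed envelope approximates the region of random error. All data, the 0.3 smoothing fraction, and the decision rule are invented for illustration, not taken from the study.

```python
# Hedged sketch of LOWESS error bounds for spectral-count ratios.
# The simulated "control vs. control" data stand in for a technical replicate.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(1)
mean_counts = rng.uniform(1, 100, 500)               # per-protein spectral counts
log_ratio = rng.normal(0, 1 / np.sqrt(mean_counts))  # noise shrinks with abundance

# Smooth |log ratio| against abundance to trace the random-error envelope.
envelope = lowess(np.abs(log_ratio), mean_counts, frac=0.3, return_sorted=True)

# Proteins in a real two-state comparison whose |log ratio| exceeds the
# envelope at their abundance would be flagged as candidate changes.
print(envelope[:5])   # columns: abundance, smoothed |log ratio|
```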
Rivas, Ariel L.; Leitner, Gabriel; Jankowski, Mark D.; Hoogesteijn, Almira L.; Iandiorio, Michelle J.; Chatzipanagiotou, Stylianos; Ioannidis, Anastasios; Blum, Shlomo E.; Piccinini, Renata; Antoniades, Athos; Fazio, Jane C.; Apidianakis, Yiorgos; Fair, Jeanne M.; Van Regenmortel, Marc H. V.
2017-01-01
Evolution has conserved “economic” systems that perform many functions, faster or better, with less. For example, three to five leukocyte types protect from thousands of pathogens. To achieve so much with so little, biological systems combine their limited elements, creating complex structures. Yet, the prevalent research paradigm is reductionist. Focusing on infectious diseases, reductionist and non-reductionist views are here described. The literature indicates that reductionism is associated with information loss and errors, while non-reductionist operations can extract more information from the same data. When designed to capture one-to-many/many-to-one interactions—including the use of arrows that connect pairs of consecutive observations—non-reductionist (spatial–temporal) constructs eliminate data variability from all dimensions, except along one line, while arrows describe the directionality of temporal changes that occur along the line. To validate the patterns detected by non-reductionist operations, reductionist procedures are needed. Integrated (non-reductionist and reductionist) methods can (i) distinguish data subsets that differ immunologically and statistically; (ii) differentiate false-negative from -positive errors; (iii) discriminate disease stages; (iv) capture in vivo, multilevel interactions that consider the patient, the microbe, and antibiotic-mediated responses; and (v) assess dynamics. Integrated methods provide repeatable and biologically interpretable information. PMID:28620378
Byrne, M D; Jordan, T R; Welle, T
2013-01-01
The objective of this study was to investigate and improve the use of automated data collection procedures for nursing research and quality assurance. A descriptive, correlational study analyzed 44 orthopedic surgical patients who were part of an evidence-based practice (EBP) project examining post-operative oxygen therapy at a Midwestern hospital. The automation work attempted to replicate a manually-collected data set from the EBP project. Automation was successful in replicating data collection for study data elements that were available in the clinical data repository. The automation procedures identified 32 "false negative" patients who met the inclusion criteria described in the EBP project but were not selected during the manual data collection. Automating data collection for certain data elements, such as oxygen saturation, proved challenging because of workflow and practice variations and the reliance on disparate sources for data abstraction. Automation also revealed instances of human error, including computational and transcription errors as well as incomplete selection of eligible patients. Automated data collection for analysis of nursing-specific phenomena is potentially superior to manual data collection methods. Creation of automated reports and analysis may require an initial up-front investment with collaboration between clinicians, researchers and information technology specialists who can manage the ambiguities and challenges of research and quality assurance work in healthcare.
Extreme Algal Bloom Detection with MERIS
Amin, R.; Gilerson, A.; Gould, R.; Arnone, R.; Ahmed, S.
2009-05-01
Harmful Algal Blooms (HABs) are a major concern all over the world due to their negative impacts on the marine environment, human health, and the economy. Their detection from space still remains a challenge, particularly in turbid coastal waters. In this study we propose a simple reflectance band difference approach for use with Medium Resolution Imaging Spectrometer (MERIS) data to detect intense plankton blooms. For convenience we label this approach the Extreme Bloom Index (EBI), which is defined as EBI = Rrs(709) - Rrs(665). Our initial analysis shows that this band difference approach has some advantages over the band ratio approaches, particularly in reducing errors due to imperfect atmospheric corrections. We also compare the proposed EBI technique with Gower's Maximum Chlorophyll Index (MCI) technique. Our preliminary results show that both the EBI and MCI indices detect intense plankton blooms; however, MCI is more vulnerable in highly scattering waters, giving more false positive alarms than EBI.
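A band-difference index like the EBI above reduces to a per-pixel subtraction of two remote-sensing reflectance channels. The sketch below shows that computation with invented reflectance values and an invented bloom-flag threshold; real inputs would be atmospherically corrected MERIS Rrs imagery.

```python
# Minimal sketch of the band-difference index EBI = Rrs(709) - Rrs(665).
# Reflectance values (sr^-1) and the flag threshold are illustrative only.
import numpy as np

rrs_709 = np.array([0.0042, 0.0120, 0.0031])   # Rrs at 709 nm, one value per pixel
rrs_665 = np.array([0.0040, 0.0052, 0.0045])   # Rrs at 665 nm, one value per pixel

ebi = rrs_709 - rrs_665          # large positive values suggest intense blooms
bloom_flag = ebi > 0.005         # hypothetical threshold, not from the study

print(ebi)         # [ 0.0002  0.0068 -0.0014]
print(bloom_flag)  # [False  True False]
```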
Real-time Bayesian anomaly detection in streaming environmental data
Hill, David J.; Minsker, Barbara S.; Amir, Eyal
2009-04-01
With large volumes of data arriving in near real time from environmental sensors, there is a need for automated detection of anomalous data caused by sensor or transmission errors or by infrequent system behaviors. This study develops and evaluates three automated anomaly detection methods using dynamic Bayesian networks (DBNs), which perform fast, incremental evaluation of data as they become available, scale to large quantities of data, and require no a priori information regarding process variables or types of anomalies that may be encountered. This study investigates these methods' abilities to identify anomalies in eight meteorological data streams from Corpus Christi, Texas. The results indicate that DBN-based detectors, using either robust Kalman filtering or Rao-Blackwellized particle filtering, outperform a DBN-based detector using Kalman filtering, with the former having false positive/negative rates of less than 2%. These methods were successful at identifying data anomalies caused by two real events: a sensor failure and a large storm.
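As a toy stand-in for the Kalman-filter-based detectors described above, the sketch below runs a one-dimensional random-walk Kalman filter over a scalar stream and flags observations whose standardized innovation is improbably large. The process/measurement variances and the 3-sigma cutoff are assumptions for illustration, far simpler than the study's dynamic Bayesian networks.

```python
# Hedged sketch: innovation-based anomaly flagging with a 1-D Kalman filter.
import numpy as np

def kalman_anomalies(y, q=1e-3, r=0.1, z_crit=3.0):
    """Flag points whose standardized innovation exceeds z_crit."""
    x, p = y[0], 1.0                  # state estimate and its variance
    flags = [False]
    for obs in y[1:]:
        p += q                        # predict step (random-walk state model)
        s = p + r                     # innovation variance
        innov = obs - x
        flags.append(abs(innov) / np.sqrt(s) > z_crit)
        k = p / s                     # Kalman gain
        x += k * innov                # update step
        p *= 1 - k
    return np.array(flags)

rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(20, 0.3, 200), [35.0],   # injected spike
                         rng.normal(20, 0.3, 100)])
# The spike at index 200 is flagged (its immediate neighbor may flag too,
# since the update partially absorbs the spike).
print(np.where(kalman_anomalies(stream))[0])
```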
Kufa, Tendesayi; Lane, Tim; Manyuchi, Albert; Singh, Beverley; Isdahl, Zachary; Osmand, Thomas; Grasso, Mike; Struthers, Helen; McIntyre, James; Chipeta, Zawadi; Puren, Adrian
2017-01-01
We describe the accuracy of serial rapid HIV testing among men who have sex with men (MSM) in South Africa and discuss the implications for HIV testing and prevention. This was a cross-sectional survey conducted at five stand-alone facilities from five provinces. Demographic, behavioral, and clinical data were collected. Dried blood spots were obtained for HIV-related testing. Participants were offered rapid HIV testing using 2 rapid diagnostic tests (RDTs) in series. In the laboratory, reference HIV testing was conducted using a third-generation enzyme immunoassay (EIA) and a fourth-generation EIA as confirmatory. Accuracy, sensitivity, specificity, positive predictive value, negative predictive value, false-positive, and false-negative rates were determined. Between August 2015 and July 2016, 2503 participants were enrolled. Of these, 2343 were tested by RDT on site with a further 2137 (91.2%) having definitive results on both RDT and EIA. Sensitivity, specificity, positive predictive value, negative predictive value, false-positive rates, and false-negative rates were 92.6% [95% confidence interval (95% CI) 89.6–94.8], 99.4% (95% CI 98.9–99.7), 97.4% (95% CI 95.2–98.6), 98.3% (95% CI 97.6–98.8), 0.6% (95% CI 0.3–1.1), and 7.4% (95% CI 5.2–10.4), respectively. False negatives were similar to true positives with respect to virological profiles. Overall accuracy of the RDT algorithm was high, but sensitivity was lower than expected. Post-HIV test counseling should include discussions of possible false-negative results and the need for retesting among HIV negatives. PMID:28700474
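The accuracy metrics reported above all derive from a 2x2 table of rapid-test results against the laboratory reference. The sketch below computes them from such a table; the counts are invented for illustration and are not the study's data.

```python
# Diagnostic-accuracy metrics from a hypothetical 2x2 table
# (RDT result vs. EIA reference); counts are illustrative only.
tp, fp, fn, tn = 300, 8, 24, 1805

sensitivity = tp / (tp + fn)   # 1 - false-negative rate
specificity = tn / (tn + fp)   # 1 - false-positive rate
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(f"Se={sensitivity:.1%}  Sp={specificity:.1%}  PPV={ppv:.1%}  NPV={npv:.1%}")
print(f"FP rate={1 - specificity:.1%}  FN rate={1 - sensitivity:.1%}")
```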
Chancey, Eric T; Bliss, James P; Yamani, Yusuke; Handley, Holly A H
2017-05-01
This study provides a theoretical link between trust and the compliance-reliance paradigm. We propose that for trust mediation to occur, the operator must be presented with a salient choice, and there must be an element of risk for dependence. Research suggests that false alarms and misses affect dependence via two independent processes, hypothesized as trust in signals and trust in nonsignals. These two trust types manifest in categorically different behaviors: compliance and reliance. Eighty-eight participants completed a primary flight task and a secondary signaling system task. Participants evaluated their trust according to the informational bases of trust: performance, process, and purpose. Participants were in a high- or low-risk group. Signaling systems varied by reliability (90%, 60%) within subjects and error bias (false alarm prone, miss prone) between subjects. False-alarm rate affected compliance but not reliance. Miss rate affected reliance but not compliance. Mediation analyses indicated that trust mediated the relationship between false-alarm rate and compliance. Bayesian mediation analyses favored evidence indicating trust did not mediate miss rate and reliance. Conditional indirect effects indicated that factors of trust mediated the relationship between false-alarm rate and compliance (i.e., purpose) and reliance (i.e., process) but only in the high-risk group. The compliance-reliance paradigm is not the reflection of two types of trust. This research could be used to update training and design recommendations that are based upon the assumption that trust causes operator responses regardless of error bias.
False positive acetaminophen concentrations in patients with liver injury.
Polson, Julie; Wians, Frank H; Orsulak, Paul; Fuller, Dwain; Murray, Natalie G; Koff, Jonathan M; Khan, Adil I; Balko, Jody A; Hynan, Linda S; Lee, William M
2008-05-01
Acetaminophen toxicity is the most common cause of acute liver failure in the U.S. After acetaminophen overdoses, quantitation of plasma acetaminophen can aid in predicting severity of injury. However, recent case reports have suggested that acetaminophen concentrations may be falsely increased in the presence of hyperbilirubinemia. We tested sera obtained from 43 patients with acute liver failure, mostly unrelated to acetaminophen, utilizing 6 different acetaminophen quantitation systems to determine the significance of this effect. In 36 of the 43 samples, with bilirubin concentrations ranging from 1.0-61.5 mg/dl, no acetaminophen was detectable by gas chromatography-mass spectroscopy. These 36 samples were then utilized to test the performance characteristics of 2 immunoassay and 4 enzymatic-colorimetric methods. Three of the four colorimetric methods demonstrated 'detectable' values for acetaminophen in 4 to 27 of the 36 negative samples, low-concentration positive values being observed when serum bilirubin concentrations exceeded 10 mg/dl. By contrast, the 2 immunoassay methods (EMIT, FPIA) were virtually unaffected. The false positive values obtained were, in general, proportional to the quantity of bilirubin in the sample. However, prepared samples of normal human serum with added bilirubin showed a dose-response curve for only one of the 4 colorimetric assays. False positive acetaminophen tests may result when enzymatic-colorimetric assays are used, most commonly with bilirubin concentrations >10 mg/dl, leading to potential clinical errors in this setting. Bilirubin (or possibly other substances in acute liver failure sera) appears to affect the reliable measurement of acetaminophen, particularly with enzymatic-colorimetric assays.
Cross-Reactivity of Pantoprazole with Three Commercial Cannabinoids Immunoassays in Urine.
Gomila, Isabel; Barceló, Bernardino; Rosell, Antonio; Avella, Sonia; Sahuquillo, Laura; Dastis, Macarena
2017-11-01
Pantoprazole is a frequently prescribed proton pump inhibitor (PPI) commonly utilized in the management of gastrointestinal symptoms. Few substances have been shown to cause a false-positive cannabinoid urine screen. However, a case of a false-positive urine cannabinoid screen in a patient who received a pantoprazole dose has recently been published. The purpose of this study was to determine the potential cross-reactivity of pantoprazole in the cannabinoid immunoassays Alere Triage® TOX Drug Screen, KIMS® Cannabinoids II, and DRI® Cannabinoids Assay. Drug-free urine to which pantoprazole was added up to 12,000 μg/mL produced negative results in the DRI® Cannabinoids and KIMS® Cannabinoids II assays. The Alere Triage® TOX Drug Screen assay gave positive results at pantoprazole concentrations higher than 1,000 μg/mL. Urine samples from 8 pediatric patients were collected at the beginning of their pantoprazole treatment. The Alere Triage® TOX Drug Screen assay produced positive test results in all patient samples, and the KIMS® Cannabinoids II immunoassay produced a positive test result in one patient sample. No patient sample gave a false-positive result when analyzed by the DRI® Cannabinoids Assay. Our findings demonstrate that some cannabinoid immunoassays are susceptible to cross-reaction errors resulting from the presence of pantoprazole and its metabolites in urine. Clinicians should be aware of the possibility of false-positive results for cannabinoids after pantoprazole treatment.
Stereotype threat can reduce older adults' memory errors.
Barber, Sarah J; Mather, Mara
2013-01-01
Stereotype threat often incurs the cost of reducing the amount of information that older adults accurately recall. In the current research, we tested whether stereotype threat can also benefit memory. According to the regulatory focus account of stereotype threat, threat induces a prevention focus in which people become concerned with avoiding errors of commission and are sensitive to the presence or absence of losses within their environment. Because of this, we predicted that stereotype threat might reduce older adults' memory errors. Results were consistent with this prediction. Older adults under stereotype threat had lower intrusion rates during free-recall tests (Experiments 1 and 2). They also reduced their false alarms and adopted more conservative response criteria during a recognition test (Experiment 2). Thus, stereotype threat can decrease older adults' false memories, albeit at the cost of fewer veridical memories, as well.
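The "more conservative response criteria" reported above are signal-detection quantities that can be computed directly from hit and false-alarm rates. The sketch below shows the standard calculation; the rates are invented for illustration and are not the study's data.

```python
# Standard signal-detection computation of sensitivity (d') and
# response criterion (c); hit/false-alarm rates are hypothetical.
from scipy.stats import norm

hit_rate, fa_rate = 0.75, 0.15       # illustrative recognition-test rates
z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)

d_prime = z_hit - z_fa               # discriminability of old vs. new items
criterion = -0.5 * (z_hit + z_fa)    # c > 0 indicates a conservative criterion

print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```

Under this framing, the stereotype-threat result (fewer false alarms together with fewer hits) is consistent with a shift in the criterion c rather than a change in discriminability d'.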
Code of Federal Regulations, 2013 CFR
2013-07-01
32 National Defense, 2013-07-01: Appendix B to Part 150, Format for Assignment of Errors and Brief on Behalf of Accused (§ 150.15). Department of Defense, Office of the Secretary of Defense, Rules of Practice and Procedure.
Warren, J. W.
Many ideas taught in elementary physics today are either false in fact or absurd in logic, and having been carried along by traditional practice, these errors and misconceptions continue to be promulgated. Many misconceptions and errors commonly found in current textbooks are examined. Areas dealt with are (1) forces, (2) gravitation, (3) energy,…
Multiplicity Control in Structural Equation Modeling
Cribbie, Robert A.
2007-01-01
Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…
27 CFR 478.48 - Correction of error on license.
Code of Federal Regulations, 2013 CFR
2013-04-01
27 Alcohol, Tobacco Products and Firearms, 2013-04-01: Correction of error on license (§ 478.48). Bureau of Alcohol, Tobacco, Firearms, and Explosives, Department of Justice, Firearms and Ammunition, Commerce in Firearms and Ammunition…
32 CFR 150.15 - Assignments of error and briefs.
Code of Federal Regulations, 2012 CFR
2012-07-01
32 National Defense, 2012-07-01: Assignments of error and briefs (§ 150.15). Department of Defense, Office of the Secretary of Defense regulations: …, double-spaced on white paper, and securely fastened at the top. All references to matters contained in…
5 CFR 894.105 - Who may correct an error in my enrollment?
Code of Federal Regulations, 2013 CFR
2013-01-01
5 Administrative Personnel, 2013-01-01: Who may correct an error in my enrollment? (§ 894.105). Office of Personnel Management, Civil Service Regulations, Federal Employees Dental and Vision Insurance Program, Administration and…
45 CFR 60.6 - Reporting errors, omissions, and revisions.
Code of Federal Regulations, 2010 CFR
2010-10-01
45 Public Welfare, 2010-10-01: Reporting errors, omissions, and revisions (§ 60.6). Department of Health and Human Services, General Administration, National Practitioner Data Bank for Adverse Information on Physicians and Other Health Care Practitioners, Reporting of…
Why do beliefs about intelligence influence learning success? A social cognitive neuroscience model
Mangels, Jennifer A.; Butterfield, Brady; Lamb, Justin; Good, Catherine; Dweck, Carol S.
2006-01-01
Students’ beliefs and goals can powerfully influence their learning success. Those who believe intelligence is a fixed entity (entity theorists) tend to emphasize ‘performance goals,’ leaving them vulnerable to negative feedback and likely to disengage from challenging learning opportunities. In contrast, students who believe intelligence is malleable (incremental theorists) tend to emphasize ‘learning goals’ and rebound better from occasional failures. Guided by cognitive neuroscience models of top–down, goal-directed behavior, we use event-related potentials (ERPs) to understand how these beliefs influence attention to information associated with successful error correction. Focusing on waveforms associated with conflict detection and error correction in a test of general knowledge, we found evidence indicating that entity theorists oriented differently toward negative performance feedback, as indicated by an enhanced anterior frontal P3 that was also positively correlated with concerns about proving ability relative to others. Yet, following negative feedback, entity theorists demonstrated less sustained memory-related activity (left temporal negativity) to corrective information, suggesting reduced effortful conceptual encoding of this material–a strategic approach that may have contributed to their reduced error correction on a subsequent surprise retest. These results suggest that beliefs can influence learning success through top–down biasing of attention and conceptual processing toward goal-congruent information. PMID:17392928
Neural correlates of performance monitoring in chronic cannabis users and cannabis-naïve controls
Fridberg, Daniel J; Skosnik, Patrick D; Hetrick, William P; O’Donnell, Brian F
2014-01-01
Chronic cannabis use is associated with residual negative effects on measures of executive functioning. However, little previous work has focused specifically on executive processes involved in performance monitoring in frequent cannabis users. The present study investigated event-related potential (ERP) correlates of performance monitoring in chronic cannabis users. The error-related negativity (ERN) and error positivity (Pe), ERPs sensitive to performance monitoring, were recorded from 30 frequent cannabis users (mean usage=5.52 days/week) and 32 cannabis-naïve control participants during a speeded stimulus discrimination task. The “oddball” P3 ERP was recorded as well. Users and controls did not differ on the amplitude or latency of the ERN; however, Pe amplitude was larger among users. Users also showed increased amplitude and reduced latency of the P3 in response to infrequent stimuli presented during the task. Among users, urinary cannabinoid metabolite levels at testing were unrelated to ERP outcomes. However, total years of cannabis use correlated negatively with P3 latency and positively with P3 amplitude, and age of first cannabis use correlated negatively with P3 amplitude. The results of this study suggest that chronic cannabis use is associated with alterations in neural activity related to the processing of motivationally-relevant stimuli (P3) and errors (Pe). PMID:23427191
Emotions and false memories: valence or arousal?
Corson, Yves; Verrier, Nadège
2007-03-01
The effects of mood on false memories have not been studied systematically until recently. Some results seem to indicate that negative mood may reduce false recall and thus suggest an influence of emotional valence on false memory. The present research tested the effects of both valence and arousal on recall and recognition and indicates that the effect is actually due to arousal. In fact, whether participants' mood is positive, negative, or neutral, false memories are significantly more frequent under conditions of high arousal than under conditions of low arousal.
Griffey, Richard T; Trent, Caleb J; Bavolek, Rebecca A; Keeperman, Jacob B; Sampson, Christopher; Poirier, Robert F
2013-01-01
Failure to detect pregnancy in the emergency department (ED) can have important consequences. Urine human chorionic gonadotropin (uhCG) point-of-care (POC) assays are valued for rapidly detecting early pregnancy with high sensitivity. However, under certain conditions, POC uhCG tests can fail to detect pregnancy. In investigating a series of late first-trimester false-negative pregnancy tests in our ED, a novel and distinct causative phenomenon was recently elucidated in our institution. We discuss uhCG POC tests, review our false-negative rate, and describe mechanisms for false negatives and potential remedies. The false-negative POC uhCG rate is very low, but in the setting of a large volume of tests, the numbers are worth consideration. In positive uhCG POC tests, free and fixed antibodies bind hCG to form a "sandwich"; hCG is present in several variant forms that change in their concentrations at different stages of pregnancy. When in excess, intact hCG can saturate the antibodies, preventing sandwich formation (hook effect phenomenon). Some assays may include an antibody that does not recognize certain variants present in later stages of pregnancy. When this variant is in excess, it can bind one antibody avidly and the other not at all, resulting in a false-negative test (hook-like phenomenon). In both situations, dilution is key to an accurate test. Manufacturers should consider that uhCG tests are routinely used at many stages of pregnancy. Characterizing uhCG variants recognized by their tests and eliminating lot-to-lot variability may help improve uhCG test performance. Clinicians need to be aware of and familiarize themselves with the limitations of the specific type of uhCG POC tests used in their practice, recognizing that under certain circumstances, false-negative tests can occur.
Han, Hyemin; Glenn, Andrea L
2018-06-01
In fMRI research, the goal of correcting for multiple comparisons is to identify areas of activity that reflect true effects, and thus would be expected to replicate in future studies. Finding an appropriate balance between trying to minimize false positives (Type I error) while not being too stringent and omitting true effects (Type II error) can be challenging. Furthermore, the advantages and disadvantages of these types of errors may differ for different areas of study. In many areas of social neuroscience that involve complex processes and considerable individual differences, such as the study of moral judgment, effects are typically smaller and statistical power weaker, leading to the suggestion that less stringent corrections that allow for more sensitivity may be beneficial and also result in more false positives. Using moral judgment fMRI data, we evaluated four commonly used methods for multiple comparison correction implemented in Statistical Parametric Mapping 12 by examining which method produced the most precise overlap with results from a meta-analysis of relevant studies and with results from nonparametric permutation analyses. We found that voxelwise thresholding with familywise error correction based on Random Field Theory provides a more precise overlap (i.e., without omitting too few regions or encompassing too many additional regions) than either clusterwise thresholding, Bonferroni correction, or false discovery rate correction methods.
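As a concrete contrast between two of the corrections named above, the sketch below applies Bonferroni familywise control and Benjamini-Hochberg FDR control to the same toy vector of voxel-level p-values. The values are invented; SPM's Random Field Theory familywise correction has no simple one-line analogue here.

```python
# Toy comparison of Bonferroni (FWE) vs. Benjamini-Hochberg (FDR) control;
# p-values are invented for illustration.
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.0001, 0.0008, 0.004, 0.012, 0.03, 0.21, 0.44])

rej_bonf = multipletests(pvals, alpha=0.05, method="bonferroni")[0]
rej_fdr = multipletests(pvals, alpha=0.05, method="fdr_bh")[0]

print("Bonferroni rejects:", rej_bonf.sum(), "of", len(pvals))  # stricter: fewer Type I errors
print("BH FDR rejects:    ", rej_fdr.sum(), "of", len(pvals))   # more sensitive: fewer Type II errors
```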
Effects of depressive disorder on false memory for emotional information.
Yeh, Zai-Ting; Hua, Mau-Sun
2009-01-01
This study used a false memory paradigm to explore (1) whether depressed patients produce more false memories and (2) whether negative false recognition exceeds positive false recognition in subjects with depressive disorders. Thirty-two patients suffering from a major depressive episode (DSM-IV criteria) and 30 age- and education-matched normal control subjects participated in this study. After the presentation of a list of positive, negative, and neutral association items in the learning phase, subjects were asked to give a yes/no response in the recognition phase. They were also asked to rate 81 recognition items with emotional valence scores. The results revealed more negative false memories in the clinical depression group than in the normal control group; however, we did not find more negative than positive false memories in patients. When compared with the normal group, a more conservative response criterion for positive items was evident in the patient group. It was also found that, when compared with the normal group, subjects in the depression group perceived the positive items as less positive. On the basis of the present results, it is suggested that depressed subjects judge emotional information with criteria different from those of normal individuals, and that patients' emotional memory intensity is attenuated by their mood.
Tetteh, Ato Kwamena; Agyarko, Edward
2017-01-01
Screening results of 488 pregnant women aged 15-44 years whose blood samples had been tested on-site, using First Response® HIV 1/2, and confirmed with INNO-LIA™ HIV I/II Score were used. Of this total, 178 were reactive (HIV I, 154; HIV II, 2; and HIV I and HIV II, 22). Of the 154 HIV I-reactive samples, 104 were confirmed to be HIV I-positive and 2 were confirmed to be HIV II-positive, while 48 were confirmed to be negative [false positive rate = 17.44% (13.56-21.32)]. The two HIV II samples submitted were confirmed to be negative with the confirmatory test. Of the 22 HIV I and HIV II samples, 7 were confirmed to be HIV I-positive and 1 was confirmed to be HIV I- and HIV II-positive, while 14 were confirmed to be negative. Of the 310 nonreactive samples, 6 were confirmed to be HIV I-positive and 1 was confirmed to be HIV II-positive [false negative rate = 5.79% (1.63-8.38)], while 303 were negative. False negative outcomes will remain unconfirmed, with no management options for the client. The false negative rate of 5.79% requires attention, as its implications for the control of HIV/AIDS could be dire.
Complementary roles for amygdala and periaqueductal gray in temporal-difference fear learning.
Cole, Sindy; McNally, Gavan P
2009-01-01
Pavlovian fear conditioning is not a unitary process. At the neurobiological level multiple brain regions and neurotransmitters contribute to fear learning. At the behavioral level many variables contribute to fear learning including the physical salience of the events being learned about, the direction and magnitude of predictive error, and the rate at which these are learned about. These experiments used a serial compound conditioning design to determine the roles of basolateral amygdala (BLA) NMDA receptors and ventrolateral midbrain periaqueductal gray (vlPAG) mu-opioid receptors (MOR) in predictive fear learning. Rats received a three-stage design, which arranged for both positive and negative prediction errors producing bidirectional changes in fear learning within the same subjects during the test stage. Intra-BLA infusion of the NR2B receptor antagonist Ifenprodil prevented all learning. In contrast, intra-vlPAG infusion of the MOR antagonist CTAP enhanced learning in response to positive predictive error but impaired learning in response to negative predictive error--a pattern similar to Hebbian learning and an indication that fear learning had been divorced from predictive error. These findings identify complementary but dissociable roles for amygdala NMDA receptors and vlPAG MOR in temporal-difference predictive fear learning.
Software platform for managing the classification of error- related potentials of observers
Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.
2015-09-01
Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers) contain a negative waveform in the Evoked Potentials (EPs) of the actors that commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, or k-nearest neighbour. The trained classifier can then be used to classify any EP curve that has been entered into the database.
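The platform itself is built in C# with Emgu CV, but the classification step it describes is generic. Purely as an illustration (the feature values, labels, and train/test split are invented), the sketch below trains a k-nearest-neighbour classifier on six-feature EP vectors, written in Python for consistency with the other sketches in this listing.

```python
# Illustrative k-NN classification of EP feature vectors (six features per
# recording, as in the platform above); all data here are synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 6))        # one six-feature vector per EP recording
y = rng.integers(0, 2, size=120)     # 0 = correct action observed, 1 = error observed

clf = KNeighborsClassifier(n_neighbors=5).fit(X[:90], y[:90])   # train on 90 recordings
print("held-out accuracy:", clf.score(X[90:], y[90:]))          # ~0.5 on random labels
```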
A variational regularization of Abel transform for GPS radio occultation
Wee, Tae-Kwon
2018-04-01
In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of the measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity at lower altitudes. In particular, it builds up a negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not incur the integration of error-possessing measurement and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors compared to AI. A noteworthy finding is that in the heights and areas where the measurement bias is supposedly small, VR follows AI very closely in the mean refractivity, departing from the first guess. In the lowest few kilometers, where AI produces a large negative refractivity bias, VR reduces the refractivity bias substantially with the aid of the background, which in this study is the operational forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). It is concluded, based on the results presented in this study, that VR offers a definite advantage over AI in the quality of refractivity.
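For reference, the Abel-transform pair at the heart of the discussion above, in the standard form used in the GPS RO literature (notation here may differ slightly from the paper's): alpha is the bending angle, a the impact parameter, n the refractive index, and x = nr the refractional radius.

```latex
% Forward Abel transform (bending angle from refractivity) and its
% analytic inverse (the AI step); standard forms from the RO literature.
\begin{align}
  \alpha(a) &= -2a \int_{a}^{\infty}
    \frac{\mathrm{d}\ln n/\mathrm{d}x}{\sqrt{x^{2}-a^{2}}}\,\mathrm{d}x,
  \\
  \ln n(x) &= \frac{1}{\pi} \int_{x}^{\infty}
    \frac{\alpha(a)}{\sqrt{a^{2}-x^{2}}}\,\mathrm{d}a .
\end{align}
```

VR, as described above, avoids evaluating the second integral directly (which integrates error-possessing measurements downward) and instead fits a refractivity profile whose forward transform matches the measured bending angle.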
Metering error quantification under voltage and current waveform distortion
Wang, Tao; Wang, Jia; Xie, Zhi; Zhang, Ran
2017-09-01
With the integration of more and more renewable energy sources and distorting loads into the power grid, voltage and current waveform distortion produces metering errors in smart meters. Because of its negative effects on metering accuracy and fairness, the combined energy-metering error is an important subject of study. In this paper, after comparing theoretical metering values with actually recorded values under different meter modes for linear and nonlinear loads, a method for quantifying metering mode error under waveform distortion is proposed. Based on metering and time-division-multiplier principles, a method for quantifying metering accuracy error is also proposed. By analyzing the mode error and the accuracy error together, a comprehensive error analysis method is presented that is suitable for renewable energy sources and nonlinear loads.
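The time-division-multiplier principle mentioned above amounts to averaging the instantaneous product v(t)i(t). The sketch below illustrates one source of mode error under distortion: a meter that registers only fundamental-frequency power under-registers when harmonics carry active power. The waveform amplitudes and harmonic levels are invented for illustration.

```python
# Toy metering comparison: full-waveform active power vs. a hypothetical
# fundamental-only metering mode; all waveform parameters are illustrative.
import numpy as np

fs, f0 = 10_000, 50                      # sample rate (Hz), mains frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)            # exactly 50 cycles, so the mean power is exact
w0 = 2 * np.pi * f0

v = 230 * np.sqrt(2) * (np.sin(w0 * t) + 0.05 * np.sin(3 * w0 * t))  # 5% 3rd-harmonic voltage
i = 10 * np.sqrt(2) * (np.sin(w0 * t) + 0.20 * np.sin(3 * w0 * t))   # 20% 3rd-harmonic current

p_true = np.mean(v * i)                  # time-division multiplication: <v(t) i(t)>
p_fund = 230 * 10                        # fundamental-only power (components in phase)

print(f"true P = {p_true:.1f} W, fundamental-only P = {p_fund:.1f} W")
print(f"metering mode error: {(p_fund - p_true) / p_true:.2%}")      # about -1%
```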
Mazur, Elizabeth; Wolchik, Sharlene
Building on prior literature on adults' and children's appraisals of stressors, this study investigated relations among negative and positive appraisal biases, negative divorce events, and children's post-divorce adjustment. Subjects were 79 custodial nonremarried mothers and their children ages 9 to 13 who had experienced parental divorce within…
Kim, Won Hwa; Kim, Hye Jung; Jung, Jin Hyang; Park, Ho Yong; Lee, Jeeyeon; Kim, Wan Wook; Park, Ji Young; Cheon, Hyejin; Lee, So Mi; Cho, Seung Hyun; Shin, Kyung Min; Kim, Gab Chul
2017-11-01
Ultrasonography-guided fine-needle aspiration (US-guided FNA) of axillary lymph nodes (ALNs) is currently used, with various techniques, for the initial staging of breast cancer and for tagging ALNs. By tattooing biopsied ALNs, we determined the false-negative rate of US-guided FNA for non-palpable, suspicious ALNs and the concordance of tattooed nodes with sentinel lymph nodes in node-to-node analyses. A total of 61 patients with breast cancer had negative results for metastasis on US-guided FNA of their non-palpable, suspicious ALNs. The biopsied ALNs were tattooed with an injection of 1-3 mL of Charcotrace (Phebra, Lane Cove West, Australia) ink and removed during sentinel lymph node biopsy or axillary dissection. We determined the false-negative rate and the concordance with sentinel lymph nodes by a retrospective review of surgical and pathologic findings. The association of false-negative results with clinical and imaging factors was evaluated using logistic regression. Of the 61 ALNs with negative US-guided FNA results, 13 (21%) had metastases on final pathology. In 56 of the 61 ALNs (92%), the tattooed ALN corresponded to a sentinel lymph node. Among the 5 patients (8%) without correspondence, 1 patient (2%) had 2 metastatic ALNs: 1 tattooed node and 1 sentinel lymph node. In multivariate analysis, the presence of atypical cells on FNA (odds ratio = 20.7, p = 0.040) was independently associated with false-negative FNA results. In summary, false-negative results after US-guided FNA of ALNs occurred at a rate of 21%, and most tattooed ALNs were concordant with sentinel lymph nodes.
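For readers unfamiliar with the statistical step, the following sketch shows the generic form of such a multivariate logistic regression and how an adjusted odds ratio is read off the fitted coefficients. The data are synthetic and the covariate names are hypothetical; they are not the study's variables or effect sizes.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 61                                     # same cohort size as the study

# Hypothetical covariates (NOT the study's actual variables).
atypical_cells = rng.integers(0, 2, n).astype(float)   # binary FNA finding
cortical_thickness = rng.normal(3.0, 1.0, n)           # imaging measure (mm)

# Simulate false-negative outcomes with a strong atypical-cell effect.
logit = -2.5 + 3.0 * atypical_cells + 0.2 * cortical_thickness
false_negative = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([atypical_cells, cortical_thickness]))
fit = sm.Logit(false_negative, X).fit(disp=0)

# exp(coefficient) is the adjusted odds ratio for each factor.
print("odds ratios:", np.exp(fit.params))
print("p-values:   ", fit.pvalues)
```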
An empirical probability model of detecting species at low densities.
Delaney, David G; Leung, Brian
2010-06-01
False negatives, failures to detect things that are actually present, are an important but understudied problem. False negatives result from our inability to detect species perfectly, especially those at low density, such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and to make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and obtain an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS) given different target characteristics (mobile vs. sessile). Using logistic regression, we built detection curves for each sampling approach that relate sampling intensity and target density to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York using our probability model for false negatives. The type of experimental approach described in this paper can help reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in areas of ecology such as conservation biology and invasion ecology.
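A hedged sketch of the detection-curve idea follows: detection (0/1) is regressed on sampling effort and target density with logistic regression, and the fitted model is then used to predict detection probability, i.e., one minus the false-negative probability, at chosen effort and density values. The simulated effect sizes and units are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500                                    # simulated survey trials

effort = rng.uniform(1.0, 30.0, n)         # e.g., minutes searched
density = rng.uniform(0.1, 5.0, n)         # targets per unit area
p_true = 1.0 - np.exp(-0.05 * effort * density)   # simulated detectability
detected = (rng.random(n) < p_true).astype(float)

X = sm.add_constant(np.column_stack([effort, density]))
fit = sm.Logit(detected, X).fit(disp=0)

# Predicted detection probability for a fixed 10-minute search at two densities
# (query rows: constant, effort, density).
query = np.column_stack([np.ones(2), [10.0, 10.0], [0.5, 4.0]])
print("P(detect) at density 0.5 vs 4.0:", fit.predict(query))
```

Read the other way around, the same fitted curve tells a manager how much effort is needed before a non-detection can be trusted as evidence of absence.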