Sherrer, Shanen M.; Taggart, David J.; Pack, Lindsey R.; Malik, Chanchal K.; Basu, Ashis K.; Suo, Zucai
2012-01-01
N-(deoxyguanosin-8-yl)-1-aminopyrene (dGAP) is the predominant nitro polyaromatic hydrocarbon product generated from the air pollutant 1-nitropyrene reacting with DNA. Previous studies have shown that dGAP induces genetic mutations in bacterial and mammalian cells. One potential source of these mutations is the error-prone bypass of dGAP lesions catalyzed by the low-fidelity Y-family DNA polymerases. To provide a comparative analysis of the mutagenic potential of the translesion DNA synthesis (TLS) of dGAP, we employed short oligonucleotide sequencing assays (SOSAs) with the model Y-family DNA polymerase from Sulfolobus solfataricus, DNA Polymerase IV (Dpo4), and the human Y-family DNA polymerases eta (hPolη), kappa (hPolκ), and iota (hPolι). Relative to undamaged DNA, all four enzymes generated far more mutations (base deletions, insertions, and substitutions) with a DNA template containing a site-specifically placed dGAP. Opposite dGAP and at an immediate downstream template position, the most frequent mutations made by the three human enzymes were base deletions and the most frequent base substitutions were dAs for all enzymes. Based on the SOSA data, Dpo4 was the least error-prone Y-family DNA polymerase among the four enzymes during the TLS of dGAP. Among the three human Y-family enzymes, hPolκ made the fewest mutations at all template positions except opposite the lesion site. hPolκ was significantly less error-prone than hPolι and hPolη during the extension of dGAP bypass products. Interestingly, the most frequent mutations created by hPolι at all template positions were base deletions. Although hRev1, the fourth human Y-family enzyme, could not extend dGAP bypass products in our standing start assays, it preferentially incorporated dCTP opposite the bulky lesion. Collectively, these mutagenic profiles suggest that hPolκ and hRev1 are the most suitable human Y-family DNA polymerases to perform TLS of dGAP in humans. PMID:22917544
Improving travel information products via robust estimation techniques : final report, March 2009.
DOT National Transportation Integrated Search
2009-03-01
Traffic-monitoring systems, such as those using loop detectors, are prone to coverage gaps arising from sensor noise, processing errors, and transmission problems. Such gaps adversely affect the accuracy of Advanced Traveler Information Systems. Th...
Safeguarding the process of drug administration with an emphasis on electronic support tools
Seidling, Hanna M; Lampert, Anette; Lohmann, Kristina; Schiele, Julia T; Send, Alexander J F; Witticke, Diana; Haefeli, Walter E
2013-01-01
Aims The aim of this work is to understand the process of drug administration and identify points in the workflow that would benefit from interventions by clinical information systems in order to improve patient safety. Methods To identify a generic way to structure the drug administration process we performed peer-group discussions and supplemented these discussions with a literature search for studies reporting errors in drug administration and strategies for their prevention. Results We concluded that the drug administration process might consist of up to 11 sub-steps, which can be grouped into the four sub-processes of preparation, personalization, application and follow-up. Errors in drug handling and administration are diverse and frequent and in many cases not caused by the patient him/herself, but by family members or nurses. Accordingly, different prevention strategies have been set in place with relatively few approaches involving e-health technology. Conclusions A generic structuring of the administration process and particular error-prone sub-steps may facilitate the allocation of prevention strategies and help to identify research gaps. PMID:24007450
The application of Aronson's taxonomy to medication errors in nursing.
Johnson, Maree; Young, Helen
2011-01-01
Medication administration is a frequent nursing activity that is prone to error. In this study of 318 self-reported medication incidents (including near misses), very few resulted in patient harm; only 7% required intervention or prolonged hospitalization, or caused temporary harm. Aronson's classification system provided an excellent framework for analysis of the incidents, with a close connection between the type of error and the change strategy to minimize medication incidents. Taking a behavioral approach to medication error classification has provided helpful strategies for nurses, such as nurse-call cards on patient lockers when patients are absent and checking of medication sign-off by outgoing and incoming staff at handover.
Bellman’s GAP—a language and compiler for dynamic programming in sequence analysis
Sauthoff, Georg; Möhl, Mathias; Janssen, Stefan; Giegerich, Robert
2013-01-01
Motivation: Dynamic programming is ubiquitous in bioinformatics. Developing and implementing non-trivial dynamic programming algorithms is often error prone and tedious. Bellman’s GAP is a new programming system, designed to ease the development of bioinformatics tools based on the dynamic programming technique. Results: In Bellman’s GAP, dynamic programming algorithms are described in a declarative style by tree grammars, evaluation algebras and products formed thereof. This bypasses the design of explicit dynamic programming recurrences and yields programs that are free of subscript errors, modular and easy to modify. The declarative modules are compiled into C++ code that is competitive with carefully hand-crafted implementations. This article introduces the Bellman’s GAP system and its language, GAP-L. It then demonstrates the ease of development and the degree of re-use by creating variants of two common bioinformatics algorithms. Finally, it evaluates Bellman’s GAP as an implementation platform of ‘real-world’ bioinformatics tools. Availability: Bellman’s GAP is available under GPL license from http://bibiserv.cebitec.uni-bielefeld.de/bellmansgap. This Web site includes a repository of re-usable modules for RNA folding based on thermodynamics. Contact: robert@techfak.uni-bielefeld.de Supplementary information: Supplementary data are available at Bioinformatics online PMID:23355290
NASA Astrophysics Data System (ADS)
Wagenbrenner, N. S.; Forthofer, J.; Gibson, C.; Lamb, B. K.
2017-12-01
Frequent strong gap winds were measured in a deep, steep, wildfire-prone river canyon of central Idaho, USA during July-September 2013. Analysis of archived surface pressure data indicates that the gap wind events were driven by regional-scale surface pressure gradients. The events always occurred between 0400 and 1200 LT and typically lasted 3-4 hours. The timing makes these events particularly hazardous for wildland firefighting applications since the morning is typically a period of reduced fire activity and unsuspecting firefighters could be easily endangered by the onset of strong downcanyon winds. The gap wind events were not explicitly forecast by operational numerical weather prediction (NWP) models due to the small spatial scale of the canyon (1-2 km wide) compared to the horizontal resolution of operational NWP models (3 km or greater). Custom WRF simulations initialized with NARR data were run at 1 km horizontal resolution to assess whether higher resolution NWP could accurately simulate the observed gap winds. Here, we show that the 1 km WRF simulations captured many of the observed gap wind events, although the strength of the events was underpredicted. We also present evidence from these WRF simulations which suggests that the Salmon River Canyon is near the threshold of WRF-resolvable terrain features when the standard WRF coordinate system and discretization schemes are used. Finally, we show that the strength of the gap wind events can be predicted reasonably well as a function of the surface pressure gradient across the gap, which could be useful in the absence of high-resolution NWP. These are important findings for wildland firefighting applications in narrow gaps where routine forecasts may not provide warning for wind effects induced by high-resolution terrain features.
ERIC Educational Resources Information Center
Tajeddin, Zia; Alemi, Minoo; Pashmforoosh, Roya
2017-01-01
Unlike linguistic fossilization, pragmatic fossilization has received scant attention in fossilization research. To bridge this gap, the present study adopted a typical-error method of fossilization research to identify the most frequent errors in pragmatic routines committed by Persian-speaking learners of L2 English and explore the sources of…
Dual processing and diagnostic errors.
Norman, Geoff
2009-09-01
In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.
Niemann, Dorothee; Bertsche, Astrid; Meyrath, David; Koepf, Ellen D; Traiser, Carolin; Seebald, Katja; Schmitt, Claus P; Hoffmann, Georg F; Haefeli, Walter E; Bertsche, Thilo
2015-01-01
To prevent medication errors in drug handling in a paediatric ward. One in five preventable adverse drug events in hospitalised children is caused by medication errors. Errors in drug prescription have been studied frequently, but data regarding drug handling, including drug preparation and administration, are scarce. A three-step intervention study including a monitoring procedure was used to detect and prevent medication errors in drug handling. After approval by the ethics committee, pharmacists monitored drug handling by nurses on an 18-bed paediatric ward in a university hospital prior to and following each intervention step. They also conducted a questionnaire survey aimed at identifying knowledge deficits. Each intervention step targeted different causes of errors. The handout mainly addressed knowledge deficits, the training course addressed errors caused by rule violations and slips, and the reference book addressed knowledge-, memory- and rule-based errors. The number of patients who were subjected to at least one medication error in drug handling decreased from 38/43 (88%) to 25/51 (49%) following the third intervention, and the overall frequency of errors decreased from 527 errors in 581 processes (91%) to 116/441 (26%). Issuing the handout reduced medication errors caused by knowledge deficits regarding, for instance, the correct 'volume of solvent for IV drugs' from 49% to 25%. Paediatric drug handling is prone to errors. A three-step intervention effectively decreased the high frequency of medication errors by addressing the diversity of their causes. Worldwide, nurses are in charge of drug handling, which constitutes an error-prone but often-neglected step in drug therapy. Detection and prevention of errors in daily routine is necessary for a safe and effective drug therapy. Our three-step intervention reduced errors and is suitable to be tested in other wards and settings. © 2014 John Wiley & Sons Ltd.
Regulation of error-prone translesion synthesis by Spartan/C1orf124
Kim, Myoung Shin; Machida, Yuka; Vashisht, Ajay A.; Wohlschlegel, James A.; Pang, Yuan-Ping; Machida, Yuichi J.
2013-01-01
Translesion synthesis (TLS) employs low fidelity polymerases to replicate past damaged DNA in a potentially error-prone process. Regulatory mechanisms that prevent TLS-associated mutagenesis are unknown; however, our recent studies suggest that the PCNA-binding protein Spartan plays a role in suppression of damage-induced mutagenesis. Here, we show that Spartan negatively regulates error-prone TLS that is dependent on POLD3, the accessory subunit of the replicative DNA polymerase Pol δ. We demonstrate that the putative zinc metalloprotease domain SprT in Spartan directly interacts with POLD3 and contributes to suppression of damage-induced mutagenesis. Depletion of Spartan induces complex formation of POLD3 with Rev1 and the error-prone TLS polymerase Pol ζ, and elevates mutagenesis that relies on POLD3, Rev1 and Pol ζ. These results suggest that Spartan negatively regulates POLD3 function in Rev1/Pol ζ-dependent TLS, revealing a previously unrecognized regulatory step in error-prone TLS. PMID:23254330
Cooperstein, Robert; Young, Morgan
2014-01-01
Upright examination procedures like radiology, thermography, manual muscle testing, and spinal motion palpation may lead to spinal interventions with the patient prone. The reliability and accuracy of mapping upright examination findings to the prone position is unknown. This study had 2 primary goals: (1) investigate how erroneous spine-scapular landmark associations may lead to errors in treating and charting spine levels; and (2) study the interexaminer reliability of a novel method for mapping upright spinal sites to the prone position. Experiment 1 was a thought experiment exploring the consequences of depending on the erroneous landmark association of the inferior scapular tip with the T7 spinous process upright and T6 spinous process prone (relatively recent studies suggest these levels are T8 and T9, respectively). This allowed deduction of targeting and charting errors. In experiment 2, 10 examiners (2 experienced, 8 novice) used an index finger to maintain contact with a mid-thoracic spinous process as each of 2 participants slowly moved from the upright to the prone position. Interexaminer reliability was assessed by computing the Intraclass Correlation Coefficient, standard error of the mean, root mean squared error, and the absolute value of the mean difference of each examiner from the 10-examiner mean for each of the 2 participants. The thought experiment suggested that using the (inaccurate) scapular tip landmark rule would result in a 3-level targeting and charting error when radiological findings are mapped to the prone position. Physical upright exam procedures like motion palpation would result in a 2-level targeting error for intervention, and a 3-level error for charting. The reliability experiment showed examiners accurately maintained contact with the same thoracic spinous process as the participant went from upright to prone, ICC (2,1) = 0.83. As manual therapists, the authors have emphasized how targeting errors may impact upon manual care of the spine. Practitioners in other fields that need to accurately locate spinal levels, such as acupuncture and anesthesiology, would also be expected to draw important conclusions from these findings.
Designing an algorithm to preserve privacy for medical record linkage with error-prone data.
Pal, Doyel; Chen, Tingting; Zhong, Sheng; Khethavath, Praveen
2014-01-20
Linking medical records across different medical service providers is important to the enhancement of health care quality and public health surveillance. In records linkage, protecting the patients' privacy is a primary requirement. In real-world health care databases, records may well contain errors due to various reasons such as typos. Linking the error-prone data and preserving data privacy at the same time are very difficult. Existing privacy preserving solutions for this problem are only restricted to textual data. To enable different medical service providers to link their error-prone data in a private way, our aim was to provide a holistic solution by designing and developing a medical record linkage system for medical service providers. To initiate a record linkage, one provider selects one of its collaborators in the Connection Management Module, chooses some attributes of the database to be matched, and establishes the connection with the collaborator after the negotiation. In the Data Matching Module, for error-free data, our solution offered two different choices for cryptographic schemes. For error-prone numerical data, we proposed a newly designed privacy preserving linking algorithm named the Error-Tolerant Linking Algorithm, which allows the error-prone data to be correctly matched if the distance between the two records is below a threshold. We designed and developed a comprehensive and user-friendly software system that provides privacy preserving record linkage functions for medical service providers, which meets the regulations of the Health Insurance Portability and Accountability Act. It does not require a third party and it is secure in that neither entity can learn the records in the other's database. Moreover, our novel Error-Tolerant Linking Algorithm implemented in this software can work well with error-prone numerical data. We theoretically proved the correctness and security of our Error-Tolerant Linking Algorithm. We have also fully implemented the software. The experimental results showed that it is reliable and efficient. The design of our software is open so that the existing textual matching methods can be easily integrated into the system. Designing algorithms to enable medical records linkage for error-prone numerical data and protect data privacy at the same time is difficult. Our proposed solution does not need a trusted third party and is secure in that in the linking process, neither entity can learn the records in the other's database.
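The core tolerance idea in the Error-Tolerant Linking Algorithm, declaring two numerical records a match when their distance falls below a threshold, can be illustrated with a minimal sketch. The sketch below is hypothetical and deliberately omits the paper's privacy-preserving cryptography; the field layout, the Euclidean distance measure, and the threshold value are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's cryptographic protocol): two numerical
# records are declared a match when their distance falls below a threshold,
# which is the tolerance idea behind the Error-Tolerant Linking Algorithm.
import math

def euclidean_distance(rec_a, rec_b):
    """Distance between two records encoded as equal-length numeric vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(rec_a, rec_b)))

def error_tolerant_match(rec_a, rec_b, threshold=2.0):
    """Match despite typos/noise if the records are closer than `threshold`."""
    return euclidean_distance(rec_a, rec_b) < threshold

# Example: same patient recorded at two providers with a minor transcription
# error (height 171 vs 172 still links; a gross typo like 117 would not).
alice_at_provider_1 = [1985, 171.0, 65.2]   # birth year, height cm, weight kg
alice_at_provider_2 = [1985, 172.0, 65.0]   # minor transcription error
print(error_tolerant_match(alice_at_provider_1, alice_at_provider_2))  # True
```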
NASA Astrophysics Data System (ADS)
Basith, Abdul; Prakoso, Yudhono; Kongko, Widjo
2017-07-01
A tsunami model using high-resolution geometric data is indispensable in tsunami mitigation efforts, especially in tsunami-prone areas, and is one of the factors that affect the accuracy of numerical tsunami modeling. Sadeng Port is a new infrastructure on the southern coast of Java that could potentially be hit by a massive tsunami originating from the seismic gap. This paper discusses validation and error estimation of a tsunami model created using high-resolution geometric data at Sadeng Port. The model is validated against the wave height of the 2006 Pangandaran tsunami recorded by the Sadeng tide gauge, and it is used for numerical tsunami modeling with earthquake-tsunami parameters derived from the seismic gap. The validation results using a Student's t-test show that the modeled and observed tsunami heights at the Sadeng tide gauge are statistically equal at the 95% confidence level; the RMSE and NRMSE values are 0.428 m and 22.12%, while the difference in tsunami wave travel time is 12 minutes.
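A minimal sketch of the validation arithmetic described above (RMSE, NRMSE, and a Student's t-test on modeled versus observed wave heights) might look as follows; the sample values, the range-based NRMSE normalization, and the use of an independent-samples test are assumptions for illustration, not taken from the paper.

```python
# Hedged sketch of the validation arithmetic: RMSE, NRMSE (here RMSE normalized
# by the observed range), and a t-test comparing modeled vs observed heights.
# Numbers below are invented; the paper reports RMSE = 0.428 m, NRMSE = 22.12%.
import numpy as np
from scipy import stats

observed = np.array([0.9, 1.4, 1.1, 0.7, 1.8])   # tide-gauge heights (m)
modeled  = np.array([1.1, 1.2, 1.3, 0.6, 1.5])   # model output (m)

rmse = np.sqrt(np.mean((modeled - observed) ** 2))
nrmse = rmse / (observed.max() - observed.min())      # one common normalization
t_stat, p_value = stats.ttest_ind(modeled, observed)  # equal-means test

print(f"RMSE = {rmse:.3f} m, NRMSE = {100 * nrmse:.1f}%, p = {p_value:.2f}")
# p > 0.05 means modeled and observed heights are statistically indistinguishable
# at the 95% confidence level, the criterion used in the study.
```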
Waks, Zeev; Goldbraich, Esther; Farkash, Ariel; Torresani, Michele; Bertulli, Rossella; Restifo, Nicola; Locatelli, Paolo; Casali, Paolo; Carmeli, Boaz
2013-01-01
Clinical decision support systems (CDSSs) are gaining popularity as tools that assist physicians in optimizing medical care. These systems typically comply with evidence-based medicine and are designed with input from domain experts. Nonetheless, deviations from CDSS recommendations are abundant across a broad spectrum of disorders, raising the question as to why this phenomenon exists. Here, we analyze this gap in adherence to a clinical guidelines-based CDSS by examining the physician treatment decisions for 1329 adult soft tissue sarcoma patients in northern Italy using patient-specific parameters. Dubbing this analysis "CareGap", we find that deviations correlate strongly with certain disease features such as local versus metastatic clinical presentation. We also notice that deviations from the guideline-based CDSS suggestions occur more frequently for patients with shorter survival time. Such observations can direct physicians' attention to distinct patient cohorts that are prone to higher deviation levels from clinical practice guidelines. This illustrates the value of CareGap analysis in assessing quality of care for subsets of patients within a larger pathology.
Errors Affect Hypothetical Intertemporal Food Choice in Women
Sellitto, Manuela; di Pellegrino, Giuseppe
2014-01-01
Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534
A continuous quality improvement project to reduce medication error in the emergency department.
Lee, Sara Bc; Lee, Larry Ly; Yeung, Richard Sd; Chan, Jimmy Ts
2013-01-01
Medication errors are a common source of adverse healthcare incidents, particularly in the emergency department (ED), which has a number of factors that make it prone to medication errors. This project aimed to reduce medication errors and improve the health and economic outcomes of clinical care in a Hong Kong ED. In 2009, a task group was formed to identify problems that potentially endanger medication safety and to develop strategies to eliminate them. Responsible officers were assigned to look after seven error-prone areas. Strategies were proposed, discussed, endorsed and promulgated to eliminate the problems identified. Medication incidents (MIs) fell from 16 before the improvement work to 6 afterwards. This project successfully established a concrete organizational structure to safeguard error-prone areas of medication safety in a sustainable manner.
Koch, Sven H; Weir, Charlene; Haar, Maral; Staggers, Nancy; Agutter, Jim; Görges, Matthias; Westenskow, Dwayne
2012-01-01
Fatal errors can occur in intensive care units (ICUs). Researchers claim that information integration at the bedside may improve nurses' situation awareness (SA) of patients and decrease errors. However, it is unclear which information should be integrated and in what form. Our research uses the theory of SA to analyze the types of tasks and their associated information gaps. We aimed to provide recommendations for integrated, consolidated information displays to improve nurses' SA. Systematic observation methods were used to follow 19 ICU nurses for 38 hours in 3 clinical practice settings. Storyboard methods and concept mapping helped to categorize the observed tasks, the associated information needs, and the information gaps of the most frequent tasks by SA level. Consensus and discussion of the research team was used to propose recommendations to improve information displays at the bedside based on information deficits. Nurses performed 46 different tasks at a rate of 23.4 tasks per hour. The information needed to perform the most common tasks was often inaccessible, difficult to see at a distance or located on multiple monitoring devices. Current devices at the ICU bedside do not adequately support a nurse's information-gathering activities. Medication management was the most frequent category of tasks. Information gaps were present at all levels of SA and across most of the tasks. Using a theoretical model to understand information gaps can aid in designing functional requirements. Integrated information that enhances nurses' Situation Awareness may decrease errors and improve patient safety in the future.
Effects of Shame and Guilt on Error Reporting Among Obstetric Clinicians.
Zabari, Mara Lynne; Southern, Nancy L
2018-04-17
To understand how the experiences of shame and guilt, coupled with organizational factors, affect error reporting by obstetric clinicians. Descriptive cross-sectional. A sample of 84 obstetric clinicians from three maternity units in Washington State. In this quantitative inquiry, a variant of the Test of Self-Conscious Affect was used to measure proneness to guilt and shame. In addition, we developed questions to assess attitudes regarding concerns about damaging one's reputation if an error was reported and the choice to keep an error to oneself. Both assessments were analyzed separately and then correlated to identify relationships between constructs. Interviews were used to identify organizational factors that affect error reporting. As a group, mean scores indicated that obstetric clinicians would not choose to keep errors to themselves. However, bivariate correlations showed that proneness to shame was positively correlated to concerns about one's reputation if an error was reported, and proneness to guilt was negatively correlated with keeping errors to oneself. Interview data analysis showed that Past Experience with Responses to Errors, Management and Leadership Styles, Professional Hierarchy, and Relationships With Colleagues were influential factors in error reporting. Although obstetric clinicians want to report errors, their decisions to report are influenced by their proneness to guilt and shame and perceptions of the degree to which organizational factors facilitate or create barriers to restore their self-images. Findings underscore the influence of the organizational context on clinicians' decisions to report errors. Copyright © 2018 AWHONN, the Association of Women’s Health, Obstetric and Neonatal Nurses. Published by Elsevier Inc. All rights reserved.
Garcia, Tanya P; Ma, Yanyuan
2017-10-01
We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.
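The abstract does not spell out the estimator, but the problem it corrects, bias from error-prone covariates, is easy to demonstrate. The sketch below simulates classical measurement error and shows the naive OLS slope attenuating toward zero; it illustrates the problem only, not the authors' semiparametric efficient estimator.

```python
# Simulation of the problem such estimators address: classical measurement
# error in a covariate attenuates the naive OLS slope toward zero.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0
x = rng.normal(size=n)                  # true covariate
w = x + rng.normal(scale=1.0, size=n)   # observed, error-prone covariate
y = beta * x + rng.normal(size=n)       # outcome generated from the truth

naive_slope = np.cov(w, y)[0, 1] / np.var(w)
# Classical-error theory predicts slope ~ beta * var(x) / (var(x) + var(u)),
# i.e. 2.0 * 0.5 = 1.0 here, half the true effect.
print(f"true beta = {beta}, naive OLS slope = {naive_slope:.2f}")
```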
Clinical errors that can occur in the treatment decision-making process in psychotherapy.
Park, Jake; Goode, Jonathan; Tompkins, Kelley A; Swift, Joshua K
2016-09-01
Clinical errors occur in the psychotherapy decision-making process whenever a less-than-optimal treatment or approach is chosen when working with clients. A less-than-optimal approach may be one that a client is unwilling to try or fully invest in based on his/her expectations and preferences, or one that may have little chance of success based on contraindications and/or limited research support. The doctor knows best and the independent choice models are two decision-making models that are frequently used within psychology, but both are associated with an increased likelihood of errors in the treatment decision-making process. In particular, these models fail to integrate all three components of the definition of evidence-based practice in psychology (American Psychological Association, 2006). In this article we describe both models and provide examples of clinical errors that can occur in each. We then introduce the shared decision-making model as an alternative that is less prone to clinical errors. PsycINFO Database Record (c) 2016 APA, all rights reserved
The Significance of the Record Length in Flood Frequency Analysis
NASA Astrophysics Data System (ADS)
Senarath, S. U.
2013-12-01
Of all of the potential natural hazards, flood is the most costly in many regions of the world. For example, floods cause over a third of Europe's average annual catastrophe losses and affect about two thirds of the people impacted by natural catastrophes. Increased attention is being paid to determining flow estimates associated with pre-specified return periods so that flood-prone areas can be adequately protected against floods of particular magnitudes or return periods. Flood frequency analysis, which is conducted by using an appropriate probability density function that fits the observed annual maximum flow data, is frequently used for obtaining these flow estimates. Consequently, flood frequency analysis plays an integral role in determining the flood risk in flood prone watersheds. A long annual maximum flow record is vital for obtaining accurate estimates of discharges associated with high return period flows. However, in many areas of the world, flood frequency analysis is conducted with limited flow data or short annual maximum flow records. These inevitably lead to flow estimates that are subject to error. This is especially the case with high return period flow estimates. In this study, several statistical techniques are used to identify errors caused by short annual maximum flow records. The flow estimates used in the error analysis are obtained by fitting a log-Pearson III distribution to the flood time-series. These errors can then be used to better evaluate the return period flows in data limited streams. The study findings, therefore, have important implications for hydrologists, water resources engineers and floodplain managers.
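A minimal sketch of the fitting step described above: fit a log-Pearson III distribution to an annual-maximum flow series and read off a return-period flow. The flow values are invented, and using scipy's pearson3 on log-transformed flows is one plausible implementation, not the study's code.

```python
# Sketch of flood frequency analysis: fit log-Pearson III to annual maxima
# and estimate the 100-year flood. Data are illustrative only.
import numpy as np
from scipy import stats

annual_max_flow = np.array([320., 410., 250., 610., 380., 290., 540., 460.,
                            350., 720., 300., 480.])   # m^3/s, invented record
log_q = np.log10(annual_max_flow)

skew, loc, scale = stats.pearson3.fit(log_q)           # Pearson III on log flows
T = 100                                                # return period, years
p_nonexceed = 1.0 - 1.0 / T
q100 = 10 ** stats.pearson3.ppf(p_nonexceed, skew, loc=loc, scale=scale)
print(f"estimated {T}-year flood: {q100:.0f} m^3/s")
# With only ~12 years of record, this quantile carries large sampling error,
# which is exactly the record-length effect the study quantifies.
```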
Seo, Hogyu David; Lee, Daeyoup
2018-05-15
Random mutagenesis of a target gene is commonly used to identify mutations that yield the desired phenotype. Of the methods that may be used to achieve random mutagenesis, error-prone PCR is a convenient and efficient strategy for generating a diverse pool of mutants (i.e., a mutant library). Error-prone PCR is the method of choice when a researcher seeks to mutate a pre-defined region, such as the coding region of a gene, while leaving other genomic regions unaffected. After the mutant library is amplified by error-prone PCR, it must be cloned into a suitable plasmid. The size of the library generated by error-prone PCR is constrained by the efficiency of the cloning step. However, in the fission yeast, Schizosaccharomyces pombe, the cloning step can be replaced by the use of a highly efficient one-step fusion PCR to generate constructs for transformation. Mutants of desired phenotypes may then be selected using appropriate reporters. Here, we describe this strategy in detail, taking as an example a reporter inserted at centromeric heterochromatin.
Evidence for rare capsular switching in Streptococcus agalactiae.
Martins, Elisabete Raquel; Melo-Cristino, José; Ramirez, Mário
2010-03-01
The polysaccharide capsule is a major antigenic factor in Streptococcus agalactiae (Lancefield group B streptococcus [GBS]). Previous observations suggest that exchange of capsular loci is likely to occur rather frequently in GBS, even though GBS is not known to be naturally transformable. We sought to identify and characterize putative capsular switching events, by means of a combination of phenotypic and genotypic methods, including pulsed-field gel electrophoretic profiling, multilocus sequence typing, and surface protein and pilus gene profiling. We show that capsular switching by horizontal gene transfer is not as frequent as previously suggested. Serotyping errors may be the main reason behind the overestimation of capsule switching, since phenotypic techniques are prone to errors of interpretation. The identified putative capsular transformants involved the acquisition of the entire capsular locus and were not restricted to the serotype-specific central genes, the previously suggested main mechanism underlying capsular switching. Our data, while questioning the frequency of capsular switching, provide clear evidence for in vivo capsular transformation in S. agalactiae, which may be of critical importance in planning future vaccination strategies against this pathogen.
Mind the gap: The impact of missing data on the calculation of phytoplankton phenology metrics
NASA Astrophysics Data System (ADS)
Cole, Harriet; Henson, Stephanie; Martin, Adrian; Yool, Andrew
2012-08-01
Annual phytoplankton blooms are key events in marine ecosystems and interannual variability in bloom timing has important implications for carbon export and the marine food web. The degree of match or mismatch between the timing of phytoplankton and zooplankton annual cycles may impact larval survival with knock-on effects at higher trophic levels. Interannual variability in phytoplankton bloom timing may also be used to monitor changes in the pelagic ecosystem that are either naturally or anthropogenically forced. Seasonality metrics that use satellite ocean color data have been developed to quantify the timing of phenological events which allow for objective comparisons between different regions and over long periods of time. However, satellite data sets are subject to frequent gaps due to clouds and atmospheric aerosols, or persistent data gaps in winter due to low sun angle. Here we quantify the impact of these gaps on determining the start and peak timing of phytoplankton blooms. We use the NASA Ocean Biogeochemical Model that assimilates SeaWiFS data as a gap-free time series and derive an empirical relationship between the percentage of missing data and error in the phenology metric. Applied globally, we find that the majority of subpolar regions have typical errors of 30 days for the bloom initiation date and 15 days for the peak date. The errors introduced by intermittent data must be taken into account in phenological studies.
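One common way to define bloom initiation is the first date chlorophyll exceeds the annual median by 5%; the sketch below applies such a threshold metric to an idealized annual cycle with random gaps to show how missing data shift the estimated date. The criterion and the gap model are assumptions for illustration and may differ from the paper's exact metrics.

```python
# Sketch of a threshold-based phenology metric and how data gaps bias it.
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(365)
chl = 0.2 + 0.8 * np.exp(-0.5 * ((days - 120) / 25.0) ** 2)  # idealized bloom
missing = rng.random(365) < 0.4                              # ~40% cloud gaps
chl_gappy = np.where(missing, np.nan, chl)

def bloom_init_day(series):
    """First day the signal exceeds 1.05 x the annual median (NaNs ignored)."""
    threshold = 1.05 * np.nanmedian(series)
    above = np.flatnonzero(series > threshold)
    return int(above[0]) if above.size else None

print(bloom_init_day(chl), bloom_init_day(chl_gappy))  # gaps shift the estimate
```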
NASA Astrophysics Data System (ADS)
Gomes, Dora Prata; Sequeira, Inês J.; Figueiredo, Carlos; Rueff, José; Brás, Aldina
2016-12-01
Human chromosomal fragile sites (CFSs) are heritable loci or regions of the human chromosomes prone to exhibit gaps, breaks and rearrangements. Determining the frequency of deletions and duplications in CFSs may help explain the occurrence of human disease due to those rearrangements. In this study we analyzed the frequency of deletions and duplications in each human CFS. Statistical methods, namely data display, descriptive statistics, and linear regression analysis, were applied to this dataset. We found that FRA15C, FRA16A and FRAXB are the CFSs most frequently involved in deletions and duplications occurring in the human genome.
Epinephrine Auto-Injector Versus Drawn Up Epinephrine for Anaphylaxis Management: A Scoping Review.
Chime, Nnenna O; Riese, Victoria G; Scherzer, Daniel J; Perretta, Julianne S; McNamara, LeAnn; Rosen, Michael A; Hunt, Elizabeth A
2017-08-01
Anaphylaxis is a life-threatening event. Most clinical symptoms of anaphylaxis can be reversed by prompt intramuscular administration of epinephrine using an auto-injector or epinephrine drawn up in a syringe; delays and errors may be fatal. The aim of this scoping review is to identify and compare errors associated with use of epinephrine drawn up in a syringe versus epinephrine auto-injectors in order to assist hospitals as they choose which approach minimizes risk of adverse events for their patients. PubMed, Embase, CINAHL, Web of Science, and the Cochrane Library were searched using terms agreed to a priori. We reviewed human and simulation studies reporting errors associated with the use of epinephrine in anaphylaxis. There were multiple screening stages with evolving feedback. Each study was independently assessed by two reviewers for eligibility. Data were extracted using an instrument modeled on that of Zaza et al and grouped into themes. Three main themes were noted: 1) ergonomics, 2) dosing errors, and 3) errors due to route of administration. Significant knowledge gaps in the operation of epinephrine auto-injectors among healthcare providers, patients, and caregivers were identified. For epinephrine in a syringe, there were more frequent reports of incorrect dosing and erroneous IV administration with associated adverse cardiac events. For the epinephrine auto-injector, unintentional administration to the digit was an error reported on multiple occasions. This scoping review highlights knowledge gaps and a diverse set of errors regardless of the approach to epinephrine preparation during management of anaphylaxis. There are more potentially life-threatening errors reported for epinephrine drawn up in a syringe than with the auto-injectors. The impact of these knowledge gaps and potentially fatal errors on patient outcomes, cost, and quality of care is worthy of further investigation.
Seidman, M M; Bredberg, A; Seetharam, S; Kraemer, K H
1987-07-01
Mutagenesis was studied at the DNA-sequence level in human fibroblast and lymphoid cells by use of a shuttle vector plasmid, pZ189, containing a suppressor tRNA marker gene. In a series of experiments, 62 plasmids were recovered that had two to six base substitutions in the 160-base-pair marker gene. Approximately 20-30% of the mutant plasmids that were recovered after passing ultraviolet-treated pZ189 through a repair-proficient human fibroblast line contained these multiple mutations. In contrast, passage of ultraviolet-treated pZ189 through an excision-repair-deficient (xeroderma pigmentosum) line yielded only 2% multiple base substitution mutants. Introducing a single-strand nick in otherwise unmodified pZ189 adjacent to the marker, followed by passage through the xeroderma pigmentosum cells, resulted in about 66% multiple base substitution mutants. The multiple mutations were found in a 160-base-pair region containing the marker gene but were rarely found in an adjacent 170-base-pair region. Passing ultraviolet-treated or nicked pZ189 through a repair-proficient human B-cell line also yielded multiple base substitution mutations in 20-33% of the mutant plasmids. An explanation for these multiple mutations is that they were generated by an error-prone polymerase while filling gaps. These mutations share many of the properties displayed by mutations in the immunoglobulin hypervariable regions.
Validation, Edits, and Application Processing Phase II and Error-Prone Model Report.
ERIC Educational Resources Information Center
Gray, Susan; And Others
The impact of quality assurance procedures on the correct award of Basic Educational Opportunity Grants (BEOGs) for 1979-1980 was assessed, and a model for detecting error-prone applications early in processing was developed. The Bureau of Student Financial Aid introduced new comments into the edit system in 1979 and expanded the pre-established…
Meiotic Divisions: No Place for Gender Equality.
El Yakoubi, Warif; Wassmann, Katja
2017-01-01
In multicellular organisms the fusion of two gametes with a haploid set of chromosomes leads to the formation of the zygote, the first cell of the embryo. Accurate execution of the meiotic cell division to generate a female and a male gamete is required for the generation of healthy offspring harboring the correct number of chromosomes. Unfortunately, meiosis is error prone. This has severe consequences for fertility and under certain circumstances, health of the offspring. In humans, female meiosis is extremely error prone. In this chapter we will compare male and female meiosis in humans to illustrate why and at which frequency errors occur, and describe how this affects pregnancy outcome and health of the individual. We will first introduce key notions of cell division in meiosis and how they differ from mitosis, followed by a detailed description of the events that are prone to errors during the meiotic divisions.
Two-Step Fair Scheduling of Continuous Media Streams over Error-Prone Wireless Channels
NASA Astrophysics Data System (ADS)
Oh, Soohyun; Lee, Jin Wook; Park, Taejoon; Jo, Tae-Chang
In wireless cellular networks, streaming of continuous media (with strict QoS requirements) over wireless links is challenging due to their inherent unreliability characterized by location-dependent, bursty errors. To address this challenge, we present a two-step scheduling algorithm for a base station to provide streaming of continuous media to wireless clients over the error-prone wireless links. The proposed algorithm is capable of minimizing the packet loss rate of individual clients in the presence of error bursts, by transmitting packets in the round-robin manner and also adopting a mechanism for channel prediction and swapping.
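A minimal sketch of how such a two-step scheduler could work is given below: step one assigns slots round-robin, step two swaps slots away from clients whose channels are predicted to be in an error burst. The swap rule and the channel-prediction interface are assumptions for illustration, not the algorithm as published.

```python
# Hedged sketch of a two-step schedule: fair round-robin assignment, then
# swapping to route around predicted error bursts on individual channels.
def schedule(clients, predict_ok, n_slots):
    """clients: list of ids; predict_ok(client, slot) -> True if channel usable."""
    slots = [clients[t % len(clients)] for t in range(n_slots)]  # step 1: fair RR
    for t in range(n_slots):                                     # step 2: swapping
        if not predict_ok(slots[t], t):
            for u in range(t + 1, n_slots):
                # Swap only if both clients get a usable slot out of the trade.
                if predict_ok(slots[u], t) and predict_ok(slots[t], u):
                    slots[t], slots[u] = slots[u], slots[t]
                    break
    return slots

# Toy channel model: client "B" is in an error burst during slots 0-2.
bad = {("B", 0), ("B", 1), ("B", 2)}
ok = lambda c, t: (c, t) not in bad
print(schedule(["A", "B", "C"], ok, 6))  # B's early slot is swapped to later
```

Note that the swap preserves each client's slot count, so per-client fairness is maintained while packet loss during bursts is reduced.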
Suzuki, Hirokazu; Kobayashi, Jyumpei; Wada, Keisuke; Furukawa, Megumi; Doi, Katsumi
2015-01-01
Thermostability is an important property of enzymes utilized for practical applications because it allows long-term storage and use as catalysts. In this study, we constructed an error-prone strain of the thermophile Geobacillus kaustophilus HTA426 and investigated thermoadaptation-directed enzyme evolution using the strain. A mutation frequency assay using the antibiotics rifampin and streptomycin revealed that G. kaustophilus had substantially higher mutability than Escherichia coli and Bacillus subtilis. The predominant mutations in G. kaustophilus were A · T→G · C and C · G→T · A transitions, implying that the high mutability of G. kaustophilus was attributable in part to high-temperature-associated DNA damage during growth. Among the genes that may be involved in DNA repair in G. kaustophilus, deletions of the mutSL, mutY, ung, and mfd genes markedly enhanced mutability. These genes were subsequently deleted to construct an error-prone thermophile that showed much higher (700- to 9,000-fold) mutability than the parent strain. The error-prone strain was auxotrophic for uracil owing to the fact that the strain was deficient in the intrinsic pyrF gene. Although the strain harboring Bacillus subtilis pyrF was also essentially auxotrophic, cells became prototrophic after 2 days of culture under uracil starvation, generating B. subtilis PyrF variants with an enhanced half-denaturation temperature of >10°C. These data suggest that this error-prone strain is a promising host for thermoadaptation-directed evolution to generate thermostable variants from thermolabile enzymes. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Update: Validation, Edits, and Application Processing. Phase II and Error-Prone Model Report.
ERIC Educational Resources Information Center
Gray, Susan; And Others
An update to the Validation, Edits, and Application Processing and Error-Prone Model Report (Section 1, July 3, 1980) is presented. The objective is to present the most current data obtained from the June 1980 Basic Educational Opportunity Grant applicant and recipient files and to determine whether the findings reported in Section 1 of the July…
Gole, Markus; Köchel, Angelika; Schäfer, Axel; Schienle, Anne
2012-03-01
The goal of the present study was to investigate a threat engagement, disengagement, and sensitivity bias in individuals suffering from pathological worry. Twenty participants high in worry proneness and 16 control participants low in worry proneness completed an emotional go/no-go task with worry-related threat words and neutral words. Shorter reaction times (i.e., threat engagement bias), smaller omission error rates (i.e., threat sensitivity bias), and larger commission error rates (i.e., threat disengagement bias) emerged only in the high worry group when worry-related words constituted the go-stimuli and neutral words the no-go stimuli. Also, smaller omission error rates as well as larger commission error rates were observed in the high worry group relative to the low worry group when worry-related go stimuli and neutral no-go stimuli were used. The obtained results await replication in a generalized anxiety disorder sample, and future samples should also include men. Our data suggest that worry-prone individuals are threat-sensitive, engage more rapidly with aversion, and find it harder to disengage. Copyright © 2011 Elsevier Ltd. All rights reserved.
Smailes, David; Meins, Elizabeth; Fernyhough, Charles
2015-01-01
People who experience intrusive thoughts are at increased risk of developing hallucinatory experiences, as are people who have weak reality discrimination skills. No study has yet examined whether these two factors interact to make a person especially prone to hallucinatory experiences. The present study examined this question in a non-clinical sample. Participants were 160 students, who completed a reality discrimination task, as well as self-report measures of cannabis use, negative affect, intrusive thoughts and auditory hallucination-proneness. The possibility of an interaction between reality discrimination performance and level of intrusive thoughts was assessed using multiple regression. The number of reality discrimination errors and level of intrusive thoughts were independent predictors of hallucination-proneness. The reality discrimination errors × intrusive thoughts interaction term was significant, with participants who made many reality discrimination errors and reported high levels of intrusive thoughts being especially prone to hallucinatory experiences. Hallucinatory experiences are more likely to occur in people who report high levels of intrusive thoughts and have weak reality discrimination skills. If applicable to clinical samples, these findings suggest that improving patients' reality discrimination skills and reducing the number of intrusive thoughts they experience may reduce the frequency of hallucinatory experiences.
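The moderation analysis described, a regression of hallucination-proneness on reality discrimination errors, intrusive thoughts, and their product term, could be sketched as below on synthetic data; the variable names, effect sizes, and data are illustrative, not the study's.

```python
# Hedged sketch of a moderated multiple regression with an interaction term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 160  # the study's sample size; everything else here is simulated
df = pd.DataFrame({
    "rd_errors": rng.poisson(3, n),       # reality discrimination errors
    "intrusive": rng.normal(0, 1, n),     # intrusive-thoughts score (z)
})
df["halluc"] = (0.3 * df.rd_errors + 0.4 * df.intrusive
                + 0.25 * df.rd_errors * df.intrusive + rng.normal(0, 1, n))

# '*' expands to both main effects plus the rd_errors:intrusive interaction.
model = smf.ols("halluc ~ rd_errors * intrusive", data=df).fit()
print(model.summary().tables[1])
```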
Creel, Scott; Spong, Goran; Sands, Jennifer L; Rotella, Jay; Zeigle, Janet; Joe, Lawrence; Murphy, Kerry M; Smith, Douglas
2003-07-01
Determining population sizes can be difficult, but is essential for conservation. By counting distinct microsatellite genotypes, DNA from noninvasive samples (hair, faeces) allows estimation of population size. Problems arise because genotypes from noninvasive samples are error-prone, but genotyping errors can be reduced by multiple polymerase chain reaction (PCR). For faecal genotypes from wolves in Yellowstone National Park, error rates varied substantially among samples, often above the 'worst-case threshold' suggested by simulation. Consequently, a substantial proportion of multilocus genotypes held one or more errors, despite multiple PCR. These genotyping errors created several genotypes per individual and caused overestimation (up to 5.5-fold) of population size. We propose a 'matching approach' to eliminate this overestimation bias.
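The overestimation mechanism described here is easy to reproduce in simulation. In the minimal sketch below, the population size, locus count, allele range, and per-locus error rate are assumed values (not the study's); it shows how genotyping errors multiply the count of distinct multilocus genotypes.

```python
import random

random.seed(1)

N_WOLVES, N_LOCI, ERR = 40, 8, 0.05  # assumed values, not from the study

# True genotypes: each locus coded as an ordered pair of allele ids.
truth = [tuple((random.randint(1, 6), random.randint(1, 6)) for _ in range(N_LOCI))
         for _ in range(N_WOLVES)]

def observe(genotype):
    """Per-locus error: with probability ERR, one allele is mis-called."""
    out = []
    for a, b in genotype:
        if random.random() < ERR:
            a = random.randint(1, 6)
        out.append((a, b))
    return tuple(out)

# Three faecal samples per wolf, each genotyped independently.
observed = {observe(gt) for gt in truth for _ in range(3)}
print(f"true population: {N_WOLVES}, distinct observed genotypes: {len(observed)}")
# Distinct genotypes exceed the true count: error-driven overestimation.
```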
Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.
Yamamoto, Loren; Kanemori, Joan
2010-06-01
Compared to fixed-dose single-vial drug administration in adults, pediatric drug dosing and administration requires a series of calculations, all of which are potentially error prone. The purpose of this study was to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (sequence assigned to start with a conventional or a computer-assisted approach). Completion times, errors, and the reason for each error were recorded. Thirty-eight nurses completed the study. Summing across all 4 parts, the mean conventional total time was 1243 seconds vs the mean computer program total time of 879 seconds (P < .001). The conventional manual method had a mean of 1.8 errors vs the computer program with a mean of 0.7 errors (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged, noting that reading and interpreting certain drug labels was more error prone. Optimizing the layout of drug labels could reduce the error rate for error-prone labels. Copyright (c) 2010 Elsevier Inc. All rights reserved.
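To illustrate the kind of calculation such a program automates, here is a minimal weight-based dosing sketch. The order and vial concentration are hypothetical, and the use of decimal arithmetic is one way to guard against the misplaced-decimal errors the study observed; this is not the study's actual software.

```python
from decimal import Decimal, ROUND_HALF_UP

def dose_volume_ml(weight_kg: str, dose_mg_per_kg: str, conc_mg_per_ml: str) -> Decimal:
    """Weight-based dose converted to a volume, rounded to 0.01 mL.

    Decimal arithmetic avoids binary floating-point surprises in exactly
    the decimal places that misplaced-decimal errors hinge on.
    """
    volume = (Decimal(weight_kg) * Decimal(dose_mg_per_kg)) / Decimal(conc_mg_per_ml)
    return volume.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Hypothetical order: 0.25 mg/kg for a 12.5 kg child, vial labeled 10 mg/mL.
print(dose_volume_ml("12.5", "0.25", "10"), "mL")  # 0.31 mL
```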
ERIC Educational Resources Information Center
Saavedra, Pedro; And Others
Parameters and procedures for developing an error-prone model (EPM) to predict financial aid applicants who are likely to misreport on Basic Educational Opportunity Grant (BEOG) applications are introduced. Specifications to adapt these general parameters to secondary data analysis of the Validation, Edits, and Applications Processing Systems…
Development of a Dependency Theory Toolbox for Database Design.
1987-12-01
The theory needed to design and study relational databases exists in the form of published algorithms and theorems; however, hand simulating these algorithms can be a tedious and error-prone chore. Therefore, a toolbox of algorithms and theorems for dependency-theory-based database design was developed.
Absence of Mutagenic Activity of Hycanthone in Serratia marcescens,
1986-05-29
…repair system, but is enhanced by the plasmid pKM101, which mediates the inducible error-prone repair system. Hycanthone, like proflavin, intercalates between the stacked bases… Roth (1974) have suggested that proflavin, which has a planar triple-ring structure similar to hycanthone, interacts with DNA, which upon replication…
Moriya, Jun; Tanno, Yoshihiko; Sugiura, Yoshinori
2013-11-01
This study investigated whether sensitivity to and evaluation of facial expressions varied with repeated exposure to non-prototypical facial expressions for a short presentation time. A morphed facial expression was presented for 500 ms repeatedly, and participants were required to indicate whether each facial expression was happy or angry. We manipulated the distribution of presentations of the morphed facial expressions for each facial stimulus. Some of the individuals depicted in the facial stimuli expressed anger frequently (i.e., anger-prone individuals), while the others expressed happiness frequently (i.e., happiness-prone individuals). After being exposed to the faces of anger-prone individuals, the participants became less sensitive to those individuals' angry faces. Further, after being exposed to the faces of happiness-prone individuals, the participants became less sensitive to those individuals' happy faces. We also found a relative increase in the social desirability of happiness-prone individuals after exposure to the facial stimuli.
ERIC Educational Resources Information Center
Saavedra, Pedro; Kuchak, JoAnn
An error-prone model (EPM) to predict financial aid applicants who are likely to misreport on Basic Educational Opportunity Grant (BEOG) applications was developed, based on interviews conducted with a quality control sample of 1,791 students during 1978-1979. The model was designed to identify corrective methods appropriate for different types of…
Schipler, Agnes; Iliakis, George
2013-09-01
Although the DNA double-strand break (DSB) is defined as a rupture in the double-stranded DNA molecule that can occur without chemical modification in any of the constituent building blocks, it is recognized that this form is restricted to enzyme-induced DSBs. DSBs generated by physical or chemical agents can include at the break site a spectrum of base alterations (lesions). The nature and number of such chemical alterations define the complexity of the DSB and are considered putative determinants for repair pathway choice and the probability that errors will occur during this processing. As the pathways engaged in DSB processing show distinct and frequently inherent propensities for errors, pathway choice also defines the error levels cells opt to accept. Here, we present a classification of DSBs on the basis of increasing complexity and discuss how complexity may affect processing, as well as how it may cause lethal or carcinogenic processing errors. By critically analyzing the characteristics of DSB repair pathways, we suggest that all repair pathways can in principle remove lesions clustering at the DSB but are likely to fail when they encounter clusters of DSBs that cause a local form of chromothripsis. In the same framework, we also analyze the rationale of DSB repair pathway choice. PMID:23804754
Identification and correction of systematic error in high-throughput sequence data
2011-01-01
Background A feature common to all DNA sequencing technologies is the presence of base-call errors in the sequenced reads. The implications of such errors are application specific, ranging from minor informatics nuisances to major problems affecting biological inferences. Recently developed "next-gen" sequencing technologies have greatly reduced the cost of sequencing, but have been shown to be more error prone than previous technologies. Both position specific (depending on the location in the read) and sequence specific (depending on the sequence in the read) errors have been identified in Illumina and Life Technology sequencing platforms. We describe a new type of systematic error that manifests as statistically unlikely accumulations of errors at specific genome (or transcriptome) locations. Results We characterize and describe systematic errors using overlapping paired reads from high-coverage data. We show that such errors occur in approximately 1 in 1000 base pairs, and that they are highly replicable across experiments. We identify motifs that are frequent at systematic error sites, and describe a classifier that distinguishes heterozygous sites from systematic error. Our classifier is designed to accommodate data from experiments in which the allele frequencies at heterozygous sites are not necessarily 0.5 (such as in the case of RNA-Seq), and can be used with single-end datasets. Conclusions Systematic errors can easily be mistaken for heterozygous sites in individuals, or for SNPs in population analyses. Systematic errors are particularly problematic in low coverage experiments, or in estimates of allele-specific expression from RNA-Seq data. Our characterization of systematic error has allowed us to develop a program, called SysCall, for identifying and correcting such errors. We conclude that correction of systematic errors is important to consider in the design and interpretation of high-throughput sequencing experiments. PMID:22099972
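The statistical intuition, flagging sites where mismatches pile up beyond what a base error rate could plausibly produce, can be sketched with a binomial tail test. This is a simplification for illustration, not the SysCall classifier itself; the error rate and significance threshold below are assumptions.

```python
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def flag_systematic(coverage: int, mismatches: int,
                    base_error_rate: float = 0.001,
                    alpha: float = 1e-6) -> bool:
    """Flag a site whose mismatch pile-up is implausible under random error."""
    return binom_sf(mismatches, coverage, base_error_rate) < alpha

# A site with 9 mismatches in 100 reads is far beyond a 0.1% error rate...
print(flag_systematic(coverage=100, mismatches=9))   # True
# ...while 1 mismatch in 100 reads is unremarkable.
print(flag_systematic(coverage=100, mismatches=1))   # False
```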
Somatic immunoglobulin hypermutation
Diaz, Marilyn; Casali, Paolo
2015-01-01
Immunoglobulin hypermutation provides the structural correlate for the affinity maturation of the antibody response. Characteristic modalities of this mechanism include a preponderance of point mutations, with a prevalence of transitions over transversions, and the mutational hotspot RGYW sequence. Recent evidence suggests a mechanism whereby DNA breaks induce error-prone DNA synthesis in immunoglobulin V(D)J regions by error-prone DNA polymerases. The nature of the targeting mechanism and the trans-factors effecting such breaks and their repair remain to be determined. PMID:11869898
Nickerson, Naomi H; Li, Ying; Benjamin, Simon C
2013-01-01
A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however, prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.
Polμ tumor variants decrease the efficiency and accuracy of NHEJ
Sastre-Moreno, Guillermo; Pryor, John M.; Díaz-Talavera, Alberto; Ruiz, José F.; Ramsden, Dale A.
2017-01-01
The nonhomologous end-joining (NHEJ) pathway of double-strand break (DSB) repair often requires DNA synthesis to fill the gaps generated upon alignment of the broken ends, a complex task performed in human cells by two specialized DNA polymerases, Polλ and Polμ. It is now well established that Polμ is the one adapted to repair DSBs with non-complementary ends, the most challenging scenario, although the structural basis and physiological implications of this adaptation are not fully understood. Here, we demonstrate that two human Polμ point mutations, G174S and R175H, previously identified in two different tumor samples and affecting two adjacent residues, limit the efficiency of accurate NHEJ by Polμ in vitro and in vivo. Moreover, we show that this limitation is the consequence of a decreased template dependency during NHEJ, which renders the error rate of the mutants higher due to the ability of Polμ to randomly incorporate nucleotides at DSBs. These results highlight the relevance of the 8 kDa domain of Polμ for accurate and efficient NHEJ, but also its contribution to the error-prone behavior of Polμ at 2-nt gaps. This work provides the first demonstration that mutations affecting Polμ identified in tumors can alter the efficiency and fidelity of NHEJ. PMID:28973441
Brébion, Gildas; Larøi, Frank; Van der Linden, Martial
2010-10-01
Hallucinations in patients with schizophrenia have been associated with a liberal response bias in signal detection and recognition tasks and with various types of source-memory error. We investigated the associations of hallucination proneness with free-recall intrusions and false recognitions of words in a nonclinical sample. A total of 81 healthy individuals were administered a verbal memory task involving free recall and recognition of one nonorganizable and one semantically organizable list of words. Hallucination proneness was assessed by means of a self-rating scale. Global hallucination proneness was associated with free-recall intrusions in the nonorganizable list and with a response bias reflecting tendency to make false recognitions of nontarget words in both types of list. The verbal hallucination score was associated with more intrusions and with a reduced tendency to make false recognitions of words. The associations between global hallucination proneness and two types of verbal memory error in a nonclinical sample corroborate those observed in patients with schizophrenia and suggest that common cognitive mechanisms underlie hallucinations in psychiatric and nonclinical individuals.
How Alterations in the Cdt1 Expression Lead to Gene Amplification in Breast Cancer
2011-07-01
…absence of extrinsic DNA damage. We measured TLS activity by measuring the mutation frequency in a supF gene (in a shuttle vector) subjected to UV-induced DNA damage before its introduction into the cells. Error-prone TLS activity mutates the supF gene, which is scored by a blue-white colony assay (Figure 4A). Sequencing of the mutant supF genes revealed a mutation spectrum consistent with error-prone TLS (Supplemental Table 1). Significantly…
Proximal antecedents and correlates of adopted error approach: a self-regulatory perspective.
Van Dyck, Cathy; Van Hooft, Edwin; De Gilder, Dick; Liesveld, Lillian
2010-01-01
The current study aims to further investigate earlier established advantages of an error mastery approach over an error aversion approach. The two main purposes of the study relate to (1) self-regulatory traits (i.e., goal orientation and action-state orientation) that may predict which error approach (mastery or aversion) is adopted, and (2) proximal, psychological processes (i.e., self-focused attention and failure attribution) that relate to the adopted error approach. In the current study participants' goal orientation and action-state orientation were assessed, after which they worked on an error-prone task. Results show that learning goal orientation related to error mastery, while state orientation related to error aversion. Under a mastery approach, error occurrence did not result in cognitive resources "wasted" on self-consciousness. Rather, attention went to internal-unstable, thus controllable, improvement-oriented causes of error. Participants who had adopted an aversion approach, in contrast, experienced heightened self-consciousness and attributed failure to internal-stable or external causes. These results imply that when working on an error-prone task, people should be stimulated to adopt a mastery rather than an aversion approach towards errors.
Efficiency and Fidelity of Human DNA Polymerases λ and β during Gap-Filling DNA Synthesis
Brown, Jessica A.; Pack, Lindsey R.; Sanman, Laura E.; Suo, Zucai
2010-01-01
The base excision repair (BER) pathway coordinates the replacement of 1 to 10 nucleotides at sites of single-base lesions. This process generates DNA substrates with various gap sizes, which can alter the catalytic efficiency and fidelity of a DNA polymerase during gap-filling DNA synthesis. Here, we quantitatively determined the substrate specificity and base substitution fidelity of human DNA polymerase λ (Pol λ), an enzyme proposed to support the known BER DNA polymerase β (Pol β), as it filled 1- to 10-nucleotide gaps at 1-nucleotide intervals. Pol λ incorporated a correct nucleotide with relatively high efficiency until the gap size exceeded 9 nucleotides. Unlike Pol λ, Pol β did not have an absolute threshold on gap size, as the catalytic efficiency for a correct dNTP gradually decreased as the gap size increased from 2 to 10 nucleotides and then recovered for non-gapped DNA. Surprisingly, an increase in gap size resulted in lower polymerase fidelity for Pol λ, and this downregulation of fidelity was controlled by its non-enzymatic N-terminal domains. Overall, Pol λ was up to 160-fold more error-prone than Pol β, thereby suggesting Pol λ would be more mutagenic during long gap-filling DNA synthesis. In addition, dCTP was the preferred misincorporation for Pol λ and its N-terminal domain truncation mutants. This nucleotide preference was shown to be dependent upon the identity of the adjacent 5′-template base. Our results suggested that both Pol λ and Pol β would catalyze nucleotide incorporation with the highest combination of efficiency and accuracy when the DNA substrate contains a single-nucleotide gap. Thus, Pol λ, like Pol β, is better suited to catalyze gap-filling DNA synthesis during short-patch BER in vivo, although Pol λ may play a role in long-patch BER. PMID:20961817
Improved acid tolerance of Lactobacillus pentosus by error-prone whole genome amplification.
Ye, Lidan; Zhao, Hua; Li, Zhi; Wu, Jin Chuan
2013-05-01
Acid tolerance of Lactobacillus pentosus ATCC 8041 was improved by error-prone amplification of its genomic DNA using random primers and Taq DNA polymerase. The resulting amplification products were transferred into wild-type L. pentosus by electroporation and the transformants were screened for growth on low-pH agar plates. After only one round of mutation, one mutant (MT3) was identified that was able to completely consume 20 g/L of glucose to produce lactic acid at a yield of 95% in 1 L MRS medium at pH 3.8 within 36 h, whereas no growth or lactic acid production was observed for the wild-type strain under the same conditions. The acid tolerance of mutant MT3 remained genetically stable for at least 25 subcultures. Therefore, the error-prone whole genome amplification technique is a very powerful tool for improving phenotypes of this lactic acid bacterium and may also be applicable to other microorganisms. Copyright © 2012 Elsevier Ltd. All rights reserved.
Lakhin, A V; Efremova, A S; Makarova, I V; Grishina, E E; Shram, S I; Tarantul, V Z; Gening, L V
2013-01-01
DNA polymerase iota (Pol iota), which has several peculiar features and is characterized by extremely error-prone DNA synthesis, belongs to the group of enzymes preferentially activated by Mn2+ instead of Mg2+. In this work, the effect of Mn2+ on DNA synthesis was tested in cell extracts from a) normal human and murine tissues, b) a human tumor (uveal melanoma), and c) the cultured human tumor cell lines SKOV-3 and HL-60. Each group displayed characteristic features of Mn-dependent DNA synthesis. The changes in Mn-dependent DNA synthesis caused by malignant transformation of normal tissues are described. It was also shown that the error-prone DNA synthesis catalyzed by Pol iota in extracts of all cell types was efficiently suppressed by an RNA aptamer (IKL5) against Pol iota that we obtained previously. These results suggest that IKL5 might be used to suppress the enhanced activity of Pol iota in tumor cells.
Error Recovery in the Time-Triggered Paradigm with FTT-CAN.
Marques, Luis; Vasconcelos, Verónica; Pedreiras, Paulo; Almeida, Luís
2018-01-11
Data networks are naturally prone to interferences that can corrupt messages, leading to performance degradation or even to critical failure of the corresponding distributed system. To improve resilience of critical systems, time-triggered networks are frequently used, based on communication schedules defined at design-time. These networks offer prompt error detection, but slow error recovery that can only be compensated with bandwidth overprovisioning. On the contrary, the Flexible Time-Triggered (FTT) paradigm uses online traffic scheduling, which enables a compromise between error detection and recovery that can achieve timely recovery with a fraction of the needed bandwidth. This article presents a new method to recover transmission errors in a time-triggered Controller Area Network (CAN) network, based on the Flexible Time-Triggered paradigm, namely FTT-CAN. The method is based on using a server (traffic shaper) to regulate the retransmission of corrupted or omitted messages. We show how to design the server to simultaneously: (1) meet a predefined reliability goal, when considering worst case error recovery scenarios bounded probabilistically by a Poisson process that models the fault arrival rate; and, (2) limit the direct and indirect interference in the message set, preserving overall system schedulability. Extensive simulations with multiple scenarios, based on practical and randomly generated systems, show a reduction of two orders of magnitude in the average bandwidth taken by the proposed error recovery mechanism, when compared with traditional approaches available in the literature based on adding extra pre-defined transmission slots.
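A back-of-envelope version of the server-sizing argument can be sketched as follows: given a Poisson fault-arrival model, choose the smallest number of retransmission slots per recovery window whose overflow probability stays below the reliability target. All numeric parameters below are assumptions for illustration, not values from the article.

```python
from math import exp

def poisson_sf(k: int, lam: float) -> float:
    """P(N > k) for N ~ Poisson(lam)."""
    term, cdf = exp(-lam), exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        cdf += term
    return 1.0 - cdf

def slots_needed(fault_rate_per_s: float, window_s: float, target: float) -> int:
    """Smallest number of retransmission slots per window meeting the target."""
    lam = fault_rate_per_s * window_s
    k = 0
    while poisson_sf(k, lam) > target:
        k += 1
    return k

# Assumed numbers: 30 corrupted frames/hour, a 100 ms recovery window, and an
# unreliability target of 1e-9 per window.
print(slots_needed(30 / 3600, 0.1, 1e-9))  # 2 slots suffice under these assumptions
```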
Random mutagenesis by error-prone pol plasmid replication in Escherichia coli.
Alexander, David L; Lilly, Joshua; Hernandez, Jaime; Romsdahl, Jillian; Troll, Christopher J; Camps, Manel
2014-01-01
Directed evolution is an approach that mimics natural evolution in the laboratory with the goal of modifying existing enzymatic activities or of generating new ones. The identification of mutants with desired properties involves the generation of genetic diversity coupled with a functional selection or screen. Genetic diversity can be generated using PCR or using in vivo methods such as chemical mutagenesis or error-prone replication of the desired sequence in a mutator strain. In vivo mutagenesis methods facilitate iterative selection because they do not require cloning, but generally produce a low mutation density with mutations not restricted to specific genes or areas within a gene. For this reason, this approach is typically used to generate new biochemical properties when large numbers of mutants can be screened or selected. Here we describe protocols for an advanced in vivo mutagenesis method that is based on error-prone replication of a ColE1 plasmid bearing the gene of interest. Compared to other in vivo mutagenesis methods, this plasmid-targeted approach allows increased mutation loads and facilitates iterative selection approaches. We also describe the mutation spectrum for this mutagenesis methodology in detail, and, using cycle 3 GFP as a target for mutagenesis, we illustrate the phenotypic diversity that can be generated using our method. In sum, error-prone Pol I replication is a mutagenesis method that is ideally suited for the evolution of new biochemical activities when a functional selection is available.
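Because mutations introduced by such a plasmid-targeted mutator are approximately Poisson-distributed along the target gene, basic library statistics follow directly. A minimal sketch with an assumed mutation density (the method's actual load depends on strain and conditions):

```python
from math import exp

def library_stats(mut_per_kb: float, gene_len_bp: int) -> tuple[float, float]:
    """Mean mutations per clone and fraction of clones left unmutated,
    assuming mutation counts are Poisson-distributed along the gene."""
    lam = mut_per_kb * gene_len_bp / 1000.0
    return lam, exp(-lam)

# Assumed mutation density of 2 per kb on a 900 bp gene of interest.
mean_muts, frac_wt = library_stats(2.0, 900)
print(f"mean mutations/clone: {mean_muts:.1f}; unmutated clones: {frac_wt:.1%}")
```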
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lief, E
2015-06-15
Purpose: To reduce the skin dose arising from carbon fiber couch scatter in radiation treatment of breast cancer in the prone position. If this issue is not addressed, the prone breast touching the solid carbon fiber couch can receive a significant skin dose, causing a skin reaction. Methods: 1. Use of a “tennis racket” insert instead of the solid couch. To check this hypothesis, we measured the dose at a depth of 5 mm in a solid water phantom placed on the couch, using a Farmer chamber. A plan for a patient with 6 MV beams and gantry angles of 113 and 286 degrees (Varian scale) was used. It was found that treatment with the “tennis racket” instead of the solid carbon fiber couch reduces the surface dose by 5–7%, depending on the beam direction. 2. Use of an air gap between the couch and the body was analyzed using radiochromic film on the surface of a solid water phantom 10 cm thick. Initially the phantom was placed on the couch with the film sandwiched in between. Two fields at angles of 135 and 315 degrees were used. The measurements were repeated for air gaps of 2 and 5 cm and for 6 and 15 MV beams. Results: It was found that a 2-cm gap decreased the surface dose by 3% for a 6 MV beam and by 5.5% for a 15 MV beam. A 5-cm gap reduced the dose by 9% for 6 MV and 13.5% for 15 MV. Conclusion: Use of both methods (combined if possible) can significantly reduce the surface dose in radiation therapy of the prone breast and the possible skin reaction. We plan to explore the dependence of the dose reduction on the angle of incidence.
WISC-R Examiner Errors: Cause for Concern.
ERIC Educational Resources Information Center
Slate, John R.; Chick, David
1989-01-01
Clinical psychology graduate students (N=14) administered Wechsler Intelligence Scale for Children-Revised. Found numerous scoring and mechanical errors that influenced full-scale intelligence quotient scores on two-thirds of protocols. Particularly prone to error were Verbal subtests of Vocabulary, Comprehension, and Similarities. Noted specific…
Cognitive fallacies and criminal investigations.
Ditrich, Hans
2015-03-01
The human mind is susceptible to inherent fallacies that often hamper fully rational action. Many such misconceptions have an evolutionary background and are thus difficult to avert. Deficits in the reliability of eyewitnesses are well known to legal professionals; however, less attention has been paid to such effects in crime investigators. In order to obtain an "inside view" of the role of cognitive misconceptions in criminalistic work, a list of fallacies from the literature was adapted to criminalistic settings. The statements on this list were rated by highly experienced crime scene investigators according to the assumed likelihood of these errors to appear and their severity of effect. Among others, selective perception, expectation and confirmation bias, anchoring/"pars per toto" errors and "onus probandi"--shifting the burden of proof from the investigator to the suspect--were frequently considered to negatively affect criminal investigations. As a consequence, the following measures are proposed: alerting investigating officers in their training to cognitive fallacies and promoting the exchange of experiences in peer circles of investigators on a regular basis. Furthermore, the improvement of the organizational error culture and the establishment of a failure analysis system to identify and alleviate error-prone processes are suggested. Copyright © 2014 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.
Alachiotis, Nikolaos; Vogiatzi, Emmanouella; Pavlidis, Pavlos; Stamatakis, Alexandros
2013-01-01
Automated DNA sequencers generate chromatograms that contain raw sequencing data. They also generate data that translates the chromatograms into molecular sequences of A, C, G, T, or N (undetermined) characters. Since chromatogram translation programs frequently introduce errors, a manual inspection of the generated sequence data is required. As sequence numbers and lengths increase, visual inspection and manual correction of chromatograms and corresponding sequences on a per-peak and per-nucleotide basis becomes an error-prone, time-consuming, and tedious process. Here, we introduce ChromatoGate (CG), an open-source software that accelerates and partially automates the inspection of chromatograms and the detection of sequencing errors for bidirectional sequencing runs. To provide users full control over the error correction process, a fully automated error correction algorithm has not been implemented. Initially, the program scans a given multiple sequence alignment (MSA) for potential sequencing errors, assuming that each polymorphic site in the alignment may be attributed to a sequencing error with a certain probability. The guided MSA assembly procedure in ChromatoGate detects chromatogram peaks of all characters in an alignment that lead to polymorphic sites, given a user-defined threshold. The threshold value represents the sensitivity of the sequencing error detection mechanism. After this pre-filtering, the user only needs to inspect a small number of peaks in every chromatogram to correct sequencing errors. Finally, we show that correcting sequencing errors is important, because population genetic and phylogenetic inferences can be misled by MSAs with uncorrected mis-calls. Our experiments indicate that estimates of population mutation rates can be affected two- to three-fold by uncorrected errors. PMID:24688709
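The pre-filtering idea, treating low-frequency disagreements in alignment columns as candidate mis-calls, can be sketched as below. The threshold semantics are an assumption for illustration and do not reproduce ChromatoGate's exact rule.

```python
# Minimal sketch: scan an alignment for columns where a minority character
# could plausibly be a mis-call worth inspecting in the chromatogram.

def candidate_error_sites(msa: list[str], threshold: float = 0.3) -> list[int]:
    """Return column indices whose minority-character frequency is below
    `threshold`, i.e., rare disagreements most likely to be mis-calls."""
    sites = []
    for col in range(len(msa[0])):
        chars = [seq[col] for seq in msa if seq[col] != '-']
        counts = {c: chars.count(c) for c in set(chars)}
        if len(counts) > 1 and min(counts.values()) / len(chars) < threshold:
            sites.append(col)
    return sites

msa = ["ACGTACGT",
       "ACGTACGT",
       "ACGAACGT",   # single disagreement at column 3
       "ACGTACGT"]
print(candidate_error_sites(msa))  # [3]
```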
Fabbretti, G
2010-06-01
Because of its complex nature, surgical pathology practice is prone to error. In this report, we describe our methods for reducing error as much as possible during the pre-analytical and analytical phases. This was achieved by revising procedures and by using computer technology and automation. Most mistakes are the result of human error in the identification and matching of patients and samples. To avoid faulty data interpretation, we employed a new comprehensive computer system that acquires all patient ID information directly from the hospital's database with remote order entry; it also provides label and request forms via the Web, where clinical information is required before sending the sample. Both patient and sample are identified directly and immediately at the site where the surgical procedures are performed. Barcode technology is used to input information at every step, and automation is used for sample blocks and slides to avoid errors that occur when information is recorded or transferred by hand. Quality control checks occur at every step of the process to ensure that none of the steps are left to chance and that no phase is dependent on a single operator. The system also provides statistical analysis of errors so that new strategies can be implemented to avoid repetition. In addition, the staff receives frequent training on avoiding errors and on new developments. The system has shown promising results, with a very low error rate (0.27%); none of the errors compromised patient health, and all were detected before release of the diagnostic report.
Adaptive Constructive Processes and the Future of Memory
ERIC Educational Resources Information Center
Schacter, Daniel L.
2012-01-01
Memory serves critical functions in everyday life but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, and illusions. The article describes several types of memory errors that are produced by adaptive constructive processes…
Chakravarti, D; Mailander, P C; Li, K M; Higginbotham, S; Zhang, H L; Gross, M L; Meza, J L; Cavalieri, E L; Rogan, E G
2001-11-29
Treatment of SENCAR mouse skin with dibenzo[a,l]pyrene results in abundant formation of abasic sites that undergo error-prone excision repair, forming oncogenic H-ras mutations in the early preneoplastic period. To examine whether the abundance of abasic sites causes repair infidelity, we treated SENCAR mouse skin with estradiol-3,4-quinone (E(2)-3,4-Q) and determined adduct levels 1 h after treatment, as well as mutation spectra in the H-ras gene between 6 h and 3 days after treatment. E(2)-3,4-Q formed predominantly (≥99%) the rapidly depurinating 4-hydroxy estradiol (4-OHE(2))-1-N3Ade adduct and the slower-depurinating 4-OHE(2)-1-N7Gua adduct. Between 6 h and 3 days, E(2)-3,4-Q induced abundant A to G mutations in H-ras DNA, frequently in the context of a 3'-G residue. Using a T·G-DNA glycosylase (TDG)-PCR assay, we determined that the early A to G mutations (6 and 12 h) were in the form of G·T heteroduplexes, suggesting misrepair at A-specific depurination sites. Since G-specific mutations were infrequent in the spectra, it appears that the slow rate of depurination of the N7Gua adducts during active repair may not generate a threshold level of G-specific abasic sites to affect repair fidelity. These results also suggest that E(2)-3,4-Q, a suspected endogenous carcinogen, is a genotoxic compound and could cause mutations.
Rajeev, K R; Menon, Smrithy S; Beena, K; Holla, Raghavendra; Kumar, R Rajaneesh; Dinesh, M
2014-01-01
A prospective study was undertaken to evaluate the influence of patient positioning on setup variations, to determine the planning target volume (PTV) margins, and to assess the clinically relevant volume of small bowel (SB) within the irradiated volume. Between December 2011 and April 2012, a computed tomography (CT) scan was done either in the supine position or in the prone position using a belly board (BB) for 20 consecutive patients. All patients had histologically proven rectal cancer and received either post- or pre-operative pelvic irradiation. Using a three-dimensional planning system, the dose-volume histogram for SB was defined on each axial CT slice. The total dose was 46-50 Gy (2 Gy/fraction), delivered using the 4-field box technique. The setup variation of the study group was assessed from data recorded by the electronic portal imaging device on the linear accelerator; shifts along the X, Y, and Z directions were noted. Both systematic and random errors were calculated, and from these values the PTV margin was calculated. The systematic errors for patients treated in the supine position were 0.87 mm (X), 0.66 mm (Y), and 1.6 mm (Z); in the prone position they were 1.3 mm (X), 0.59 mm (Y), and 1.17 mm (Z). The random errors in the supine position were 1.81 mm (X), 1.73 mm (Y), and 1.83 mm (Z); in the prone position they were 2.02 mm (X), 1.21 mm (Y), and 3.05 mm (Z). The calculated PTV margins were 3.45 mm (X), 2.87 mm (Y), and 5.31 mm (Z) in the supine position and 4.91 mm (X), 2.32 mm (Y), and 5.08 mm (Z) in the prone position. The mean volume of the peritoneal cavity was 648.65 cm³ in the prone position and 1197.37 cm³ in the supine position. The prone position using the BB device was more effective in reducing irradiated SB volume in rectal cancer patients. There were no significant variations in the daily setup for patients treated in either the supine or the prone position.
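The abstract does not name the margin recipe used, but the reported margins closely (though not exactly) match the widely used van Herk formula, margin = 2.5Σ + 0.7σ, where Σ and σ are the systematic and random errors. A sketch under that assumption, reproducing the supine-position margins from the tabulated errors:

```python
def van_herk_margin(sigma_systematic: float, sigma_random: float) -> float:
    """PTV margin (mm) from the van Herk recipe: 2.5*Sigma + 0.7*sigma."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random

# Supine-position errors from the abstract (mm), per axis.
systematic = {"X": 0.87, "Y": 0.66, "Z": 1.60}
random_err = {"X": 1.81, "Y": 1.73, "Z": 1.83}

for axis in "XYZ":
    m = van_herk_margin(systematic[axis], random_err[axis])
    print(f"{axis}: {m:.2f} mm")  # ~3.44, 2.86, 5.28 vs reported 3.45, 2.87, 5.31
```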
Smoothing and gap-filling of high resolution multi-spectral time series: Example of Landsat data
NASA Astrophysics Data System (ADS)
Vuolo, Francesco; Ng, Wai-Tim; Atzberger, Clement
2017-05-01
This paper introduces a novel methodology for generating 15-day, smoothed and gap-filled time series of high spatial resolution data. The approach is based on templates from high quality observations to fill data gaps that are subsequently filtered. We tested our method for one large contiguous area (Bavaria, Germany) and for nine smaller test sites in different ecoregions of Europe using Landsat data. Overall, our results match the validation dataset to a high degree of accuracy with a mean absolute error (MAE) of 0.01 for visible bands, 0.03 for near-infrared and 0.02 for short-wave-infrared. Occasionally, the reconstructed time series are affected by artefacts due to undetected clouds. Less frequently, larger uncertainties occur as a result of extended periods of missing data. Reliable cloud masks are highly warranted for making full use of time series.
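The pipeline's two stages, gap filling from good observations followed by filtering, can be sketched on a synthetic series. Linear interpolation and a Savitzky-Golay filter below stand in for the paper's template-based filling and its smoother; they are illustrative substitutes, not the published method.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)

# Synthetic 15-day NDVI-like series with gaps (np.nan = cloud/no data).
t = np.arange(24)
truth = 0.45 + 0.25 * np.sin(2 * np.pi * t / 24)
series = truth + rng.normal(0, 0.02, t.size)
series[[3, 4, 11, 18]] = np.nan

# Gap filling: interpolate between high-quality neighbours (a stand-in for
# the template-based filling described in the paper).
good = ~np.isnan(series)
filled = series.copy()
filled[~good] = np.interp(t[~good], t[good], series[good])

# Subsequent smoothing of the gap-filled series.
smoothed = savgol_filter(filled, window_length=7, polyorder=2)

print(f"MAE vs truth: {np.mean(np.abs(smoothed - truth)):.3f}")
```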
Thomas A. Spies; David B. Lindenmayer; A. Malcolm Gill; Scott L. Stephens; James K. Agee
2012-01-01
Conserving biodiversity in fire-prone forest ecosystems is challenging for several reasons including differing and incomplete conceptual models of fire-related ecological processes, major gaps in ecological and management knowledge, high variability in fire behavior and ecological responses to fires, altered fire regimes as a result of land-use history and climate...
A semi-automatic annotation tool for cooking video
NASA Astrophysics Data System (ADS)
Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe
2013-03-01
In order to create a cooking assistant application to guide users in the preparation of dishes relevant to their profile diets and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods handled by the cook. These videos present particular annotation challenges, such as frequent occlusions and food appearance changes. Manually annotating the videos is a time-consuming, tedious, and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.
Driving improvement in patient care: lessons from Toyota.
Thompson, Debra N; Wolf, Gail A; Spear, Steven J
2003-11-01
Nurses today are attempting to do more with less while grappling with faulty error-prone systems that do not focus on patients at the point of care. This struggle occurs against a backdrop of rising national concern over the incidence of medical errors in healthcare. In an effort to create greater value with scarce resources and fix broken systems that compromise quality care, UPMC Health System is beginning to master and implement the Toyota Production System (TPS)--a method of managing people engaged in work that emphasizes frequent rapid problem solving and work redesign that has become the global archetype for productivity and performance. The authors discuss the rationale for applying TPS to healthcare and implementation of the system through the development of "learning unit" model lines and initial outcomes, such as dramatic reductions in the number of missing medications and thousands of hours and dollars saved as a result of TPS-driven changes. Tracking data further suggest that TPS, with sufficient staff preparation and involvement, has the potential for continuous, lasting, and accelerated improvement in patient care.
One-step random mutagenesis by error-prone rolling circle amplification
Fujii, Ryota; Kitaoka, Motomitsu; Hayashi, Kiyoshi
2004-01-01
In vitro random mutagenesis is a powerful tool for altering properties of enzymes. We describe here a novel random mutagenesis method using rolling circle amplification, named error-prone RCA. This method consists of only one DNA amplification step followed by transformation of the host strain, without treatment with any restriction enzymes or DNA ligases, and results in a randomly mutated plasmid library with 3–4 mutations per kilobase. Specific primers or special equipment, such as a thermal-cycler, are not required. This method permits rapid preparation of randomly mutated plasmid libraries, enabling random mutagenesis to become a more commonly used technique. PMID:15507684
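The reported mutation density (3-4 mutations per kb) translates directly into a simulated library. A minimal sketch with a toy plasmid sequence and a uniform substitution spectrum (the real spectrum of error-prone RCA is biased, so this is an idealization):

```python
import random

random.seed(42)

def mutate(seq: str, mut_per_kb: float = 3.5) -> str:
    """Introduce random substitutions at the density reported for
    error-prone RCA (3-4 per kb); a uniform spectrum is assumed."""
    rate = mut_per_kb / 1000.0
    bases = "ACGT"
    return "".join(random.choice(bases.replace(b, "")) if random.random() < rate else b
                   for b in seq)

plasmid = "".join(random.choice("ACGT") for _ in range(3000))  # 3 kb toy plasmid
for i, clone in enumerate(mutate(plasmid) for _ in range(5)):
    n = sum(a != b for a, b in zip(plasmid, clone))
    print(f"clone {i}: {n} substitutions")  # ~10 per 3 kb clone on average
```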
Medication administration errors in nursing homes using an automated medication dispensing system.
van den Bemt, Patricia M L A; Idzinga, Jetske C; Robertz, Hans; Kormelink, Dennis Groot; Pels, Neske
2009-01-01
OBJECTIVE To identify the frequency of medication administration errors as well as their potential risk factors in nursing homes using a distribution robot. DESIGN The study was a prospective, observational study conducted within three nursing homes in the Netherlands caring for 180 individuals. MEASUREMENTS Medication errors were measured using the disguised observation technique. Types of medication errors were described. The correlation between several potential risk factors and the occurrence of medication errors was studied to identify potential causes for the errors. RESULTS In total 2,025 medication administrations to 127 clients were observed. In these administrations 428 errors were observed (21.2%). The most frequently occurring types of errors were use of wrong administration techniques (especially incorrect crushing of medication and not supervising the intake of medication) and wrong time errors (administering the medication at least 1 h early or late).The potential risk factors female gender (odds ratio (OR) 1.39; 95% confidence interval (CI) 1.05-1.83), ATC medication class antibiotics (OR 11.11; 95% CI 2.66-46.50), medication crushed (OR 7.83; 95% CI 5.40-11.36), number of dosages/day/client (OR 1.03; 95% CI 1.01-1.05), nursing home 2 (OR 3.97; 95% CI 2.86-5.50), medication not supplied by distribution robot (OR 2.92; 95% CI 2.04-4.18), time classes "7-10 am" (OR 2.28; 95% CI 1.50-3.47) and "10 am-2 pm" (OR 1.96; 1.18-3.27) and day of the week "Wednesday" (OR 1.46; 95% CI 1.03-2.07) are associated with a higher risk of administration errors. CONCLUSIONS Medication administration in nursing homes is prone to many errors. This study indicates that the handling of the medication after removing it from the robot packaging may contribute to this high error frequency, which may be reduced by training of nurse attendants, by automated clinical decision support and by measures to reduce workload.
Producing good font attribute determination using error-prone information
NASA Astrophysics Data System (ADS)
Cooperman, Robert
1997-04-01
A method is presented to provide estimates of font attributes in an OCR system, using detectors of individual attributes that are error-prone. For an OCR system to preserve the appearance of a scanned document, it needs accurate detection of font attributes. However, OCR environments have noise and other sources of errors, tending to make font attribute detection unreliable. Certain assumptions about font use can greatly enhance accuracy. Attributes such as boldness and italics are more likely to change between neighboring words, while attributes such as serifness are less likely to change within the same paragraph. Furthermore, the document as a whole tends to have a limited number of sets of font attributes. These assumptions allow a better use of context than the raw data, or than what would be achieved by simpler methods that would oversmooth the data.
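One simple way to exploit the assumption that attributes rarely flip for a single word in isolation is majority-vote smoothing of the per-word detector output. The sketch below is a generic illustration of such context smoothing, not the paper's algorithm; the window size is an assumption.

```python
def smooth_attribute(flags: list[bool], window: int = 3) -> list[bool]:
    """Replace each word's noisy attribute flag (e.g., bold yes/no) with the
    majority vote of its neighbourhood, suppressing isolated detector errors."""
    half = window // 2
    out = []
    for i in range(len(flags)):
        lo, hi = max(0, i - half), min(len(flags), i + half + 1)
        votes = flags[lo:hi]
        out.append(sum(votes) * 2 > len(votes))
    return out

# One word mis-detected as bold inside a plain run, and one plain word
# mis-detected inside a bold run; both are corrected by their context.
raw = [False, False, True, False, False, True, True, False, True, True]
print(smooth_attribute(raw))
```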
A defect in homologous recombination leads to increased translesion synthesis in E. coli
Naiman, Karel; Pagès, Vincent; Fuchs, Robert P.
2016-01-01
DNA damage tolerance pathways allow cells to duplicate their genomes despite the presence of replication-blocking lesions. Cells possess two major tolerance strategies, namely translesion synthesis (TLS) and homology-directed gap repair (HDGR). TLS pathways involve specialized DNA polymerases that are able to synthesize past DNA lesions, with an intrinsic risk of causing point mutations. In contrast, HDGR pathways are essentially error-free, as they rely on the recovery of missing information from the sister chromatid by RecA-mediated homologous recombination. We have investigated the genetic control of pathway choice between TLS and HDGR in vivo in Escherichia coli. In a strain with wild-type RecA activity, the extent of TLS across replication-blocking lesions is generally low, while HDGR is used extensively. Interestingly, recA alleles that are partially impaired in D-loop formation confer a decrease in HDGR and a concomitant increase in TLS. Thus, a partial defect in RecA's capacity to invade the homologous sister chromatid increases the lifetime of the ssDNA·RecA filament, i.e. the 'SOS signal'. This increase favors TLS by increasing both the TLS polymerase concentration and the lifetime of the TLS substrate, before it becomes sequestered by homologous recombination. In conclusion, the pathway choice between error-prone TLS and error-free HDGR is controlled by the efficiency of homologous recombination. PMID:27257075
Statistical approaches to account for false-positive errors in environmental DNA samples.
Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid
2016-05-01
Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies. © 2015 John Wiley & Sons Ltd.
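The bias from even low false-positive rates is easy to demonstrate by simulation. The sketch below generates detection histories under assumed occupancy and error rates and shows how the naive "any detection means occupied" estimator is distorted; fitting the full false-positive occupancy model is beyond this illustration.

```python
import random

random.seed(7)

PSI, P11, P10 = 0.3, 0.6, 0.03   # occupancy, true detection, false positive
SITES, VISITS = 1000, 4          # assumed survey design

occupied = [random.random() < PSI for _ in range(SITES)]
detected = [any(random.random() < (P11 if occ else P10) for _ in range(VISITS))
            for occ in occupied]

# Naive estimator: any detection => site treated as occupied.
naive_psi = sum(detected) / SITES
print(f"true occupancy {PSI:.2f}, naive estimate {naive_psi:.2f}")
# Even a 3% per-visit false-positive rate inflates the estimate noticeably.
```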
Teaching Statistics Online Using "Excel"
ERIC Educational Resources Information Center
Jerome, Lawrence
2011-01-01
As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…
Xu, Qingfu; Wischmeyer, Jareth; Gonzalez, Eduardo; Pichichero, Michael E
2017-07-01
We sought to understand how polymicrobial colonization varies during health, viral upper respiratory infection (URI), and acute upper respiratory bacterial infection in order to understand differences between infection-prone and non-prone patients. Nasopharyngeal (NP) samples were collected from 74 acute otitis media (AOM) infection-prone and 754 non-prone children during 2094 healthy visits, 673 viral URI visits, and 631 AOM visits. Three otopathogens, Streptococcus pneumoniae (Spn), nontypeable Haemophilus influenzae (NTHi), and Moraxella catarrhalis (Mcat), were identified by culture. NP colonization rates of multiple otopathogens during health were significantly lower than during viral URI, and during URI they were lower than at onset of upper respiratory bacterial infection, in both AOM infection-prone and non-prone children. AOM infection-prone children had higher polymicrobial colonization rates than non-prone children during health, viral URI, and AOM. Polymicrobial colonization rates of AOM infection-prone children during health were equivalent to those of non-prone children during viral URI, and during viral URI were equivalent to those of non-prone children during AOM infection. Spn colonization was positively associated with NTHi and Mcat colonization during health, but negatively at onset of AOM infection. Infection-prone patients harbor multiple potential bacterial pathogens in the NP more frequently than non-prone patients do. Polymicrobial interaction in the NP differs during health and at the onset of infection. Copyright © 2017 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
Anticipating cognitive effort: roles of perceived error-likelihood and time demands.
Dunn, Timothy L; Inzlicht, Michael; Risko, Evan F
2017-11-13
Why are some actions evaluated as effortful? In the present set of experiments we address this question by examining individuals' perception of effort when faced with a trade-off between two putative cognitive costs: how much time a task takes vs. how error-prone it is. Specifically, we were interested in whether individuals anticipate a small amount of hard work (i.e., a low time requirement but high error-likelihood) or a large amount of easy work (i.e., a high time requirement but low error-likelihood) as being more effortful. In between-subject designs, Experiments 1 through 3 demonstrated that individuals anticipate options that are high in perceived error-likelihood (yet less time consuming) as more effortful than options that are perceived to be more time consuming (yet low in error-likelihood). Further, when asked to evaluate which of the two tasks was (a) more effortful, (b) more error-prone, and (c) more time consuming, effort-based and error-based choices closely tracked one another, but this was not the case for time-based choices. Utilizing a within-subject design, Experiment 4 demonstrated an overall similar pattern of judgments to Experiments 1 through 3; however, judgments of error-likelihood and of time demand similarly predicted effort judgments. Results are discussed within the context of extant accounts of cognitive control, with considerations of how error-likelihood and time demands may independently and conjunctively factor into judgments of cognitive effort.
Situating Student Errors: Linguistic-to-Algebra Translation Errors
ERIC Educational Resources Information Center
Adu-Gyamfi, Kwaku; Bossé, Michael J.; Chandler, Kayla
2015-01-01
While it is well recognized that students are prone to difficulties when performing linguistic-to-algebra translations, the nature of students' difficulties remains an issue of contention. Moreover, the literature indicates that these difficulties are not easily remediated by domain-specific instruction. Some have opined that this is the case…
Errors of Inference in Structural Equation Modeling
ERIC Educational Resources Information Center
McCoach, D. Betsy; Black, Anne C.; O'Connell, Ann A.
2007-01-01
Although structural equation modeling (SEM) is one of the most comprehensive and flexible approaches to data analysis currently available, it is nonetheless prone to researcher misuse and misconceptions. This article offers a brief overview of the unique capabilities of SEM and discusses common sources of user error in drawing conclusions from…
Exploring the relationship between boredom and sustained attention.
Malkovsky, Ela; Merrifield, Colleen; Goldberg, Yael; Danckert, James
2012-08-01
Boredom is a common experience, prevalent in neurological and psychiatric populations, yet its cognitive characteristics remain poorly understood. We explored the relationship between boredom proneness, sustained attention and adult symptoms of attention deficit hyperactivity disorder (ADHD). The results showed that high boredom-prone individuals (HBP) performed poorly on measures of sustained attention and showed increased symptoms of ADHD and depression. The results also showed that HBP individuals can be characterised as either apathetic, in which the individual is unconcerned with his/her environment, or agitated, in which the individual is motivated to engage in meaningful activities, although attempts to do so fail to satisfy. Apathetic boredom proneness was associated with attention lapses, whereas agitated boredom proneness was associated with decreased sensitivity to errors of sustained attention, and increased symptoms of adult ADHD. Our results suggest there is a complex relationship between attention and boredom proneness.
The Concept of Accident Proneness: A Review
Froggatt, Peter; Smiley, James A.
1964-01-01
The term accident proneness was coined by psychological research workers in 1926. Since then its concept—that certain individuals are always more likely than others to sustain accidents, even though exposed to equal risk—has been questioned but seldom seriously challenged. This article describes much of the work and theory on which this concept is based, details the difficulties encountered in obtaining valid information and the interpretative errors that can arise from the examination of imperfect data, and explains why accident proneness became so readily accepted as an explanation of the facts. A recent hypothesis of accident causation, namely that a person's accident liability may vary from time to time, is outlined, and the respective abilities of this and of accident proneness to accord with data from the more reliable literature are examined. The authors conclude that the hypothesis of individual variation in liability is more realistic and in better agreement with the data than is accident proneness. PMID:14106130
Wang, Shijun; Yao, Jianhua; Liu, Jiamin; Petrick, Nicholas; Van Uitert, Robert L.; Periaswamy, Senthil; Summers, Ronald M.
2009-01-01
Purpose: In computed tomographic colonography (CTC), a patient will be scanned twice, once supine and once prone, to improve the sensitivity of polyp detection. To assist radiologists in CTC reading, in this paper we propose an automated method for colon registration from supine and prone CTC scans. Methods: We propose a new colon centerline registration method for prone and supine CTC scans using correlation optimized warping (COW) and canonical correlation analysis (CCA) based on the anatomical structure of the colon. Four anatomically salient points on the colon are first automatically distinguished. Then correlation optimized warping is applied to the segments defined by the anatomical landmarks to improve the global registration based on local correlation of segments. The COW method was modified by embedding canonical correlation analysis to allow multiple features along the colon centerline to be used in our implementation. Results: We tested the COW algorithm on a CTC data set of 39 patients with 39 polyps (19 training and 20 test cases) to verify the effectiveness of the proposed COW registration method. Experimental results on the test set show that the COW method significantly reduces the average estimation error in polyp location between supine and prone scans by 67.6%, from 46.27±52.97 mm to 14.98±11.41 mm, compared to the normalized distance along the colon centerline algorithm (p<0.01). Conclusions: The proposed COW algorithm is more accurate for colon centerline registration compared to the normalized distance along the colon centerline method and the dynamic time warping method. Comparison results showed that the feature combination of z-coordinate and curvature achieved the lowest registration error compared to the other feature combinations used by COW. The proposed method is tolerant of centerline errors because anatomical landmarks help prevent the propagation of errors across the entire colon centerline. PMID:20095272
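The landmark-anchored warping idea in this abstract can be illustrated with a short sketch. The following is a minimal, hypothetical Python example of piecewise-linear warping between arc-length positions on two centerlines using matched anatomical landmarks; it is not the authors' COW/CCA implementation, and all lengths and landmark positions are invented for illustration.

```python
import numpy as np

def landmark_warp(src_len, dst_len, src_marks, dst_marks):
    """Map arc-length positions on a source centerline onto a target
    centerline by piecewise-linear stretching of the segments between
    matched anatomical landmarks (coordinates measured from one end)."""
    src = np.concatenate(([0.0], np.asarray(src_marks, float), [src_len]))
    dst = np.concatenate(([0.0], np.asarray(dst_marks, float), [dst_len]))
    def warp(s):
        # locate the segment containing s and interpolate linearly within it
        return np.interp(s, src, dst)
    return warp

# toy example: supine centerline 1500 mm, prone 1400 mm, three salient points
to_prone = landmark_warp(1500, 1400, [350, 800, 1200], [330, 760, 1100])
print(to_prone(900))  # predicted prone position of a polyp found at 900 mm supine
```

COW additionally optimizes local segment warps by maximizing the correlation of centerline features, which this sketch omits.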
Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana
2011-01-01
Background Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results The risk of overestimation of intervention effects was usually high when the number of patients and events was small and this risk decreased exponentially over time as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
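The role of random error described above can be reproduced with a quick Monte Carlo simulation. Below is a minimal sketch, not the authors' simulation code: it pools events naively (rather than with inverse-variance weighting) and uses invented parameters, but it shows how often a cumulative meta-analysis of null trials (true RRR = 0) transiently displays a large apparent effect.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_overestimation(n_sims=2000, n_trials=20, n_per_arm=100,
                            control_risk=0.1, threshold_rrr=0.2):
    """Probability that a cumulative meta-analysis of null trials shows a
    pooled relative risk reduction above threshold_rrr at any update."""
    hits = 0
    for _ in range(n_sims):
        # cumulative events in control and treatment arms across trials
        e_ctrl = rng.binomial(n_per_arm, control_risk, n_trials).cumsum()
        e_trt = rng.binomial(n_per_arm, control_risk, n_trials).cumsum()
        n_cum = n_per_arm * np.arange(1, n_trials + 1)
        rr = (e_trt / n_cum) / np.maximum(e_ctrl / n_cum, 1e-9)
        if np.any(1 - rr > threshold_rrr):   # apparent RRR exceeds threshold
            hits += 1
    return hits / n_sims

print(simulate_overestimation())
```

Increasing n_trials or n_per_arm in this sketch shrinks the overestimation probability, mirroring the article's finding that risk decays as information accumulates.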
[Risk and risk management in aviation].
Müller, Manfred
2004-10-01
RISK MANAGEMENT: The large proportion of human error in aviation accidents suggested an apparently brilliant solution: replace the fallible human being with an "infallible" digitally operating computer. However, even after the introduction of the so-called HITEC airplanes, human error still accounts for 75% of all accidents. If the computer is therefore ruled out as the ultimate safety system, how else can complex operations involving quick and difficult decisions be controlled? OPTIMIZED TEAM INTERACTION/PARALLEL CONNECTION OF THOUGHT MACHINES: Since a single person is always "highly error-prone", support and control have to be guaranteed by a second person. Two minds working independently create a safety net that cushions human errors more effectively. NON-PUNITIVE ERROR MANAGEMENT: To be able to tackle the actual problems, open discussion of errors that have occurred must not be endangered by the threat of punishment. Experience has shown that progress is primarily achieved by investigating and following up mistakes, failures and catastrophes shortly after they happen. HUMAN FACTOR RESEARCH PROJECT: A comprehensive survey showed the following result: by far the most frequent safety-critical situation (37.8% of all events) consists of the following combination of risk factors: 1. A complication develops. 2. In this situation of increased stress a human error occurs. 3. The negative effects of the error cannot be corrected or eased because of deficiencies in team interaction on the flight deck. This means, for example, that a negative social climate acts as a "turbocharger" when a human error occurs. It should be noted that a negative social climate is not the same as open conflict. In many cases the working climate is burdened without the person responsible even noticing it: a poor first impression, too much or too little respect, contempt, misunderstandings, failure to voice concerns, etc. can considerably reduce the efficiency of a team.
Pirpinia, Kleopatra; Bosman, Peter A N; Loo, Claudette E; Winter-Warnars, Gonneke; Janssen, Natasja N Y; Scholten, Astrid N; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja
2017-06-23
Deformable image registration is typically formulated as an optimization problem involving a linearly weighted combination of terms that correspond to objectives of interest (e.g. similarity, deformation magnitude). The weights, along with multiple other parameters, need to be manually tuned for each application, a task currently addressed mainly via trial-and-error approaches. Such approaches can only be successful if there is a sensible interplay between parameters, objectives, and desired registration outcome. This, however, is not well established. To study this interplay, we use multi-objective optimization, where multiple solutions exist that represent the optimal trade-offs between the objectives, forming a so-called Pareto front. Here, we focus on weight tuning. To study the space a user has to navigate during manual weight tuning, we randomly sample multiple linear combinations. To understand how these combinations relate to the desirability of the registration outcome, we associate with each outcome a mean target registration error (TRE) based on expert-defined anatomical landmarks. Further, we employ a multi-objective evolutionary algorithm that optimizes the weight combinations, yielding a Pareto front of solutions, which can be directly navigated by the user. To study how the complexity of manual weight tuning changes depending on the registration problem, we consider an easy problem, prone-to-prone breast MR image registration, and a hard problem, prone-to-supine breast MR image registration. Lastly, we investigate how guidance information as an additional objective influences the prone-to-supine registration outcome. Results show that the interplay between weights, objectives, and registration outcome makes manual weight tuning feasible for the prone-to-prone problem, but very challenging for the harder prone-to-supine problem. Here, patient-specific, multi-objective weight optimization is needed; this obtained a mean TRE of 13.6 mm without guidance information, reduced to 7.3 mm with guidance information, while also providing a Pareto front that exhibits an intuitively sensible interplay between weights, objectives, and registration outcome, allowing outcome selection.
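The Pareto front at the heart of this approach is simply the set of non-dominated weight/objective combinations. A minimal sketch follows, assuming two objectives to be minimized and random values standing in for actual registration outcomes; the paper's multi-objective evolutionary algorithm is a far more sophisticated way of generating such a front.

```python
import numpy as np

def pareto_front(objectives):
    """Indices of non-dominated rows, assuming every objective is minimized."""
    obj = np.asarray(objectives, float)
    keep = []
    for i, p in enumerate(obj):
        # p is dominated if some point is <= in all objectives and < in one
        dominated = np.any(np.all(obj <= p, axis=1) & np.any(obj < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(0)
pts = rng.random((200, 2))  # stand-ins for (dissimilarity, deformation magnitude)
print(pareto_front(pts))    # the trade-off solutions worth offering to the user
```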
1980-03-01
interpreting/smoothing data containing a significant percentage of gross errors, and thus is ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of the paper describes the application of…
Wheeler, Derek S; Geis, Gary; Mack, Elizabeth H; LeMaster, Tom; Patterson, Mary D
2013-06-01
In situ simulation training is a team-based training technique conducted on actual patient care units using equipment and resources from that unit, and involving actual members of the healthcare team. We describe our experience with in situ simulation training in a major children's medical centre. In situ simulations were conducted using standardised scenarios approximately twice per month on inpatient hospital units on a rotating basis. Simulations were scheduled so that each unit participated in at least two in situ simulations per year. Simulations were conducted on a revolving schedule alternating on the day and night shifts and were unannounced. Scenarios were preselected to maximise the educational experience, and frequently involved clinical deterioration to cardiopulmonary arrest. We performed 64 of the scheduled 112 (57%) in situ simulations on all shifts and all units over 21 months. We identified 134 latent safety threats and knowledge gaps during these in situ simulations, which we categorised as medication, equipment, and/or resource/system threats. Identification of these errors resulted in modification of systems to reduce the risk of error. In situ simulations also provided a method to reinforce teamwork behaviours, such as the use of assertive statements, role clarity, performance of frequent updating, development of a shared mental model, performance of independent double checks of high-risk medicines, and overcoming authority gradients between team members. Participants stated that the training programme was effective and did not disrupt patient care. In situ simulations can identify latent safety threats, identify knowledge gaps, and reinforce teamwork behaviours when used as part of an organisation-wide safety programme.
Infrequent identity mismatches are frequently undetected
Goldinger, Stephen D.
2014-01-01
The ability to quickly and accurately match faces to photographs bears critically on many domains, from controlling purchase of age-restricted goods to law enforcement and airport security. Despite its pervasiveness and importance, research has shown that face matching is surprisingly error prone. The majority of face-matching research is conducted under idealized conditions (e.g., using photographs of individuals taken on the same day) and with equal proportions of match and mismatch trials, a rate that is likely not observed in everyday face matching. In four experiments, we presented observers with photographs of faces taken an average of 1.5 years apart and tested whether face-matching performance is affected by the prevalence of identity mismatches, comparing conditions of low (10 %) and high (50 %) mismatch prevalence. Like the low-prevalence effect in visual search, we observed inflated miss rates under low-prevalence conditions. This effect persisted when participants were allowed to correct their initial responses (Experiment 2), when they had to verify every decision with a certainty judgment (Experiment 3) and when they were permitted “second looks” at face pairs (Experiment 4). These results suggest that, under realistic viewing conditions, the low-prevalence effect in face matching is a large, persistent source of errors. PMID:24500751
Medical malpractice, defensive medicine and role of the "media" in Italy.
Toraldo, Domenico M; Vergari, Ughetta; Toraldo, Marta
2015-01-01
For many years, Italy has been subjected to an inconsistent and contradictory media campaign. On one hand, the "media" present bold and reassuring messages about the progress of medical science; on the other, they are prone to knee-jerk criticism every time medical treatment does not have the desired effect, routinely describing such cases as glaring examples of "malasanità", an Italian word of recent coinage used to denote medical malpractice. Newspaper reports of legal proceedings involving health treatment are frequently full of errors and lack any scientific basis. The published data confirm the unsustainably high number of lawsuits against doctors and medical structures, accompanied by demands for compensation arising from true or alleged medical errors or from mistakes blamed on the work of health structures. Italian citizens today have a greater awareness of their right to health than in the past, and patients' expectations have risen. A discrepancy is emerging between the current state of medical science and the capacities of individual doctors and health structures. Lastly, there is a need for greater monitoring of the quality of health care services and a greater emphasis on health risk prevention.
DNA Repair Mechanisms and the Bypass of DNA Damage in Saccharomyces cerevisiae
Boiteux, Serge; Jinks-Robertson, Sue
2013-01-01
DNA repair mechanisms are critical for maintaining the integrity of genomic DNA, and their loss is associated with cancer predisposition syndromes. Studies in Saccharomyces cerevisiae have played a central role in elucidating the highly conserved mechanisms that promote eukaryotic genome stability. This review will focus on repair mechanisms that involve excision of a single strand from duplex DNA with the intact, complementary strand serving as a template to fill the resulting gap. These mechanisms are of two general types: those that remove damage from DNA and those that repair errors made during DNA synthesis. The major DNA-damage repair pathways are base excision repair and nucleotide excision repair, which, in the simplest terms, are distinguished by the extent of single-strand DNA removed together with the lesion. Mistakes made by DNA polymerases are corrected by the mismatch repair pathway, which also corrects mismatches generated when single strands of non-identical duplexes are exchanged during homologous recombination. In addition to the true repair pathways, the postreplication repair pathway allows lesions or structural aberrations that block replicative DNA polymerases to be tolerated. There are two bypass mechanisms: an error-free mechanism that involves a switch to an undamaged template for synthesis past the lesion and an error-prone mechanism that utilizes specialized translesion synthesis DNA polymerases to directly synthesize DNA across the lesion. A high level of functional redundancy exists among the pathways that deal with lesions, which minimizes the detrimental effects of endogenous and exogenous DNA damage. PMID:23547164
NASA Astrophysics Data System (ADS)
Camargo, F. R.; Henson, B.
2015-02-01
The notion that more or less of a physical feature affects users' impressions of an underlying attribute of a product to different degrees has frequently been applied in affective engineering. However, those attributes exist only as a premise that cannot be measured directly and, therefore, inferences based on their assessment are error-prone. To establish and improve measurement of latent attributes, this paper presents the concept of a stochastic framework using the Rasch model for a wide range of independent variables, referred to as an item bank. Based on an item bank, computerized adaptive testing (CAT) can be developed. A CAT system can converge on a sequence of items that brackets a user's particular endorsement level, conveying maximal information at that level. It is through item banking and CAT that the financial benefits of using the Rasch model in affective engineering can be realised.
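For concreteness, the Rasch model gives the endorsement probability P = exp(theta - b) / (1 + exp(theta - b)) for trait level theta and item difficulty b, and a CAT system repeatedly administers the item with maximal Fisher information P(1 - P) at the current trait estimate. The sketch below assumes a hypothetical pre-calibrated item bank; it illustrates the selection rule only, not a full CAT loop with trait re-estimation.

```python
import numpy as np

def rasch_p(theta, b):
    """Rasch endorsement probability for trait level theta, item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def next_item(theta_hat, bank, used):
    """Select the unused item with maximal Fisher information p*(1-p),
    i.e. the item whose difficulty best brackets the current estimate."""
    best, best_info = None, -1.0
    for i, b in enumerate(bank):
        if i in used:
            continue
        p = rasch_p(theta_hat, b)
        if p * (1 - p) > best_info:
            best, best_info = i, p * (1 - p)
    return best

bank = np.linspace(-3, 3, 25)           # hypothetical calibrated difficulties
print(next_item(0.4, bank, used={12}))  # picks the item with difficulty near 0.4
```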
Meneco, a Topology-Based Gap-Filling Tool Applicable to Degraded Genome-Wide Metabolic Networks
Prigent, Sylvain; Frioux, Clémence; Dittami, Simon M.; Thiele, Sven; Larhlimi, Abdelhalim; Collet, Guillaume; Gutknecht, Fabien; Got, Jeanne; Eveillard, Damien; Bourdon, Jérémie; Plewniak, Frédéric; Tonon, Thierry; Siegel, Anne
2017-01-01
Increasing amounts of sequence data are becoming available for a wide range of non-model organisms. Investigating and modelling the metabolic behaviour of those organisms is highly relevant to understand their biology and ecology. As sequences are often incomplete and poorly annotated, draft networks of their metabolism largely suffer from incompleteness. Appropriate gap-filling methods to identify and add missing reactions are therefore required to address this issue. However, current tools rely on phenotypic or taxonomic information, or are very sensitive to the stoichiometric balance of metabolic reactions, especially concerning the co-factors. This type of information is often not available or at least prone to errors for newly-explored organisms. Here we introduce Meneco, a tool dedicated to the topological gap-filling of genome-scale draft metabolic networks. Meneco reformulates gap-filling as a qualitative combinatorial optimization problem, omitting constraints raised by the stoichiometry of a metabolic network considered in other methods, and solves this problem using Answer Set Programming. Run on several artificial test sets gathering 10,800 degraded Escherichia coli networks Meneco was able to efficiently identify essential reactions missing in networks at high degradation rates, outperforming the stoichiometry-based tools in scalability. To demonstrate the utility of Meneco we applied it to two case studies. Its application to recent metabolic networks reconstructed for the brown algal model Ectocarpus siliculosus and an associated bacterium Candidatus Phaeomarinobacter ectocarpi revealed several candidate metabolic pathways for algal-bacterial interactions. Then Meneco was used to reconstruct, from transcriptomic and metabolomic data, the first metabolic network for the microalga Euglena mutabilis. These two case studies show that Meneco is a versatile tool to complete draft genome-scale metabolic networks produced from heterogeneous data, and to suggest relevant reactions that explain the metabolic capacity of a biological system. PMID:28129330
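Meneco's topological criterion can be illustrated with a tiny reachability check: ignoring stoichiometry, a metabolite counts as producible once every substrate of some reaction producing it is producible. The sketch below, with a made-up two-reaction network, shows only this forward-expansion step; Meneco itself goes further, using Answer Set Programming to find minimal sets of candidate reactions that restore producibility of the targets.

```python
def producible(reactions, seeds, targets):
    """Topological (stoichiometry-free) producibility: iteratively fire any
    reaction whose substrates are all reachable. `reactions` maps a reaction
    id to a (substrates, products) pair of sets."""
    reached = set(seeds)
    changed = True
    while changed:
        changed = False
        for subs, prods in reactions.values():
            if subs <= reached and not prods <= reached:
                reached |= prods
                changed = True
    return {t: t in reached for t in targets}

toy = {"r1": ({"A"}, {"B"}), "r2": ({"B", "C"}, {"D"})}
print(producible(toy, seeds={"A"}, targets={"D"}))  # D unreachable: C is missing
```

A gap-filler would then search a reference reaction database for a smallest set of additional reactions (here, anything producing C) that flips the target to reachable.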
A 62-year-old woman with skin cancer who experienced wrong-site surgery: review of medical error.
Gallagher, Thomas H
2009-08-12
After a life-threatening complication of an injection for neck pain several years ago, Ms W experienced a wrong-site surgery to remove a squamous cell lesion from her nose, followed by pain, distress, and shaken trust in clinicians. Her experience highlights the challenges of communicating with patients after errors. Harmful medical errors occur relatively frequently. Gaps exist between patients' expectations for disclosure and apology and physicians' ability to deliver disclosures well. This discrepancy reflects clinicians' fear of litigation, concern that disclosure might harm patients, and lack of confidence in disclosure skills. Many institutions are developing disclosure programs, and some are reporting success in coupling disclosures with early offers of compensation to patients. However, much has yet to be learned about effective disclosure strategies. Important future developments include increased emphasis on institutions' responsibility for disclosure, involving trainees and other team members in disclosure, and strengthening the relationship between disclosure and quality improvement.
Medication Administration Errors in Nursing Homes Using an Automated Medication Dispensing System
van den Bemt, Patricia M.L.A.; Idzinga, Jetske C.; Robertz, Hans; Kormelink, Dennis Groot; Pels, Neske
2009-01-01
Objective: To identify the frequency of medication administration errors as well as their potential risk factors in nursing homes using a distribution robot. Design: The study was a prospective, observational study conducted within three nursing homes in the Netherlands caring for 180 individuals. Measurements: Medication errors were measured using the disguised observation technique. Types of medication errors were described. The correlation between several potential risk factors and the occurrence of medication errors was studied to identify potential causes for the errors. Results: In total 2,025 medication administrations to 127 clients were observed. In these administrations 428 errors were observed (21.2%). The most frequently occurring types of errors were use of wrong administration techniques (especially incorrect crushing of medication and not supervising the intake of medication) and wrong time errors (administering the medication at least 1 h early or late). The potential risk factors female gender (odds ratio (OR) 1.39; 95% confidence interval (CI) 1.05–1.83), ATC medication class antibiotics (OR 11.11; 95% CI 2.66–46.50), medication crushed (OR 7.83; 95% CI 5.40–11.36), number of dosages/day/client (OR 1.03; 95% CI 1.01–1.05), nursing home 2 (OR 3.97; 95% CI 2.86–5.50), medication not supplied by distribution robot (OR 2.92; 95% CI 2.04–4.18), time classes “7–10 am” (OR 2.28; 95% CI 1.50–3.47) and “10 am–2 pm” (OR 1.96; 95% CI 1.18–3.27) and day of the week “Wednesday” (OR 1.46; 95% CI 1.03–2.07) are associated with a higher risk of administration errors. Conclusions: Medication administration in nursing homes is prone to many errors. This study indicates that the handling of the medication after removing it from the robot packaging may contribute to this high error frequency, which may be reduced by training of nurse attendants, by automated clinical decision support and by measures to reduce workload. PMID:19390109
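As a reminder of the arithmetic behind such reported odds ratios, the sketch below computes an OR with a Woolf (log-based) 95% confidence interval from a 2x2 table. The counts are hypothetical, and the study's estimates come from a multivariable model rather than this univariate calculation.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a/b = error/no-error with the factor, c/d = error/no-error without it."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: errors among crushed vs. intact administrations
print(odds_ratio_ci(120, 180, 308, 1417))
```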
NASA Astrophysics Data System (ADS)
Schlueter, S.; Sheppard, A.; Wildenschild, D.
2013-12-01
Imaging of fluid interfaces in three-dimensional porous media via x-ray microtomography is an efficient means to test thermodynamically derived predictions on the relationship between capillary pressure, fluid saturation and specific interfacial area (Pc-Sw-Anw) in partially saturated porous media. Various experimental studies exist to date that validate the uniqueness of the Pc-Sw-Anw relationship under static conditions, and with current technological progress direct imaging of moving interfaces under dynamic conditions is also becoming available. Image acquisition and subsequent image processing currently involve many steps, each prone to operator bias, like merging different scans of the same sample obtained at different beam energies into a single image or the generation of isosurfaces from the segmented multiphase image on which the interface properties are usually calculated. We demonstrate that with recent advancements in (i) image enhancement methods, (ii) multiphase segmentation methods and (iii) methods of structural analysis we can considerably decrease the time and cost of image acquisition and the uncertainty associated with the measurement of interfacial properties. In particular, we highlight three notorious problems in multiphase image processing and provide efficient solutions for each: (i) Due to noise, partial volume effects, and imbalanced volume fractions, automated histogram-based threshold detection methods frequently fail. However, these impairments can be mitigated with modern denoising methods, special treatment of gray value edges and adaptive histogram equalization, such that most of the standard methods for threshold detection (Otsu, fuzzy c-means, minimum error, maximum entropy) coincide at the same set of values. (ii) Partial volume effects due to blur may produce apparent water films around solid surfaces that alter the specific fluid-fluid interfacial area (Anw) considerably. In a synthetic test image some local segmentation methods like Bayesian Markov random field, converging active contours and watershed segmentation reduced the error in Anw associated with apparent water films from 21% to 6-11%. (iii) The generation of isosurfaces from the segmented data usually requires a lot of postprocessing in order to smooth the surface and check for consistency errors. This can be avoided by calculating specific interfacial areas directly on the segmented voxel image by means of Minkowski functionals, which is highly efficient and less error-prone.
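Several of the threshold-detection methods named above operate on the image histogram. As one concrete instance, here is a compact, self-contained version of Otsu's method, which picks the gray value maximizing between-class variance; this is a generic implementation, not the specific pipeline used in the study.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the gray value maximizing between-class variance (Otsu)."""
    hist, edges = np.histogram(np.ravel(values), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)             # probability of class 0 (below threshold)
    mu = np.cumsum(p * centers)   # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(sigma_b)]

# bimodal test data: two gray-value populations
vals = np.concatenate([np.random.normal(60, 10, 5000),
                       np.random.normal(170, 20, 5000)])
print(otsu_threshold(np.clip(vals, 0, 255)))  # lands between the two modes
```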
Towards Automatic Image Segmentation Using Optimised Region Growing Technique
NASA Astrophysics Data System (ADS)
Alazab, Mamoun; Islam, Mofakharul; Venkatraman, Sitalakshmi
Image analysis is being adopted extensively in many applications such as digital forensics, medical treatment and industrial inspection, primarily for diagnostic purposes. Hence, there is a growing interest among researchers in developing new segmentation techniques to aid the diagnosis process. Manual segmentation of images is labour intensive, extremely time consuming and prone to human errors, so an automated real-time technique is warranted in such applications. There is no universally applicable automated segmentation technique that will work for all images, as image segmentation is quite complex and unique to each domain of application. To fill this gap, this paper presents an efficient segmentation algorithm that can segment a digital image of interest into a more meaningful arrangement of regions and objects. Our algorithm combines a region growing approach with optimised elimination of false boundaries to arrive at more meaningful segments automatically. We demonstrate this using X-ray teeth images that were taken for real-life dental diagnosis.
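A bare-bones version of the region-growing step is sketched below: starting from a seed pixel, 4-connected neighbours are absorbed while their intensity stays within a tolerance of the running region mean. The authors' algorithm adds optimised elimination of false boundaries, which this sketch does not attempt.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10):
    """Grow a 4-connected region from `seed`, accepting pixels whose gray
    value lies within `tol` of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    q = deque([seed])
    total, count = float(img[seed]), 1
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    q.append((ny, nx))
    return mask

img = np.zeros((64, 64), np.uint8)
img[20:40, 20:40] = 200                  # bright square on a dark background
print(region_grow(img, (30, 30)).sum())  # recovers the 400 square pixels
```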
Bentley, Johanne; Diggle, Christine P.; Harnden, Patricia; Knowles, Margaret A.; Kiltie, Anne E.
2004-01-01
In human cells DNA double strand breaks (DSBs) can be repaired by the non-homologous end-joining (NHEJ) pathway. In a background of NHEJ deficiency, DSBs with mismatched ends can be joined by an error-prone mechanism involving joining between regions of nucleotide microhomology. The majority of joins formed from a DSB with partially incompatible 3′ overhangs by cell-free extracts from human glioblastoma (MO59K) and urothelial (NHU) cell lines were accurate and produced by the overlap/fill-in of mismatched termini by NHEJ. However, repair of DSBs by extracts using tissue from four high-grade bladder carcinomas resulted in no accurate join formation. Junctions were formed by the non-random deletion of terminal nucleotides and showed a preference for annealing at a microhomology of 8 nt buried within the DNA substrate; this process was not dependent on functional Ku70, DNA-PK or XRCC4. Junctions were repaired in the same manner in MO59K extracts in which accurate NHEJ was inactivated by inhibition of Ku70 or DNA-PKcs. These data indicate that bladder tumour extracts are unable to perform accurate NHEJ such that error-prone joining predominates. Therefore, in high-grade tumours mismatched DSBs are repaired by a highly mutagenic, microhomology-mediated, alternative end-joining pathway, a process that may contribute to genomic instability observed in bladder cancer. PMID:15466592
A false positive food chain error associated with a generic predator gut content ELISA
USDA-ARS?s Scientific Manuscript database
Conventional prey-specific gut content ELISA and PCR assays are useful for identifying predators of insect pests in nature. However, these assays are prone to yielding certain types of food chain errors. For instance, it is possible that prey remains can pass through the food chain as the result of ...
Chauvel, Guillaume; Maquestiaux, François; Hartley, Alan A; Joubert, Sven; Didierjean, André; Masters, Rich S W
2012-01-01
Can motor learning be equivalent in younger and older adults? To address this question, 48 younger (M = 23.5 years) and 48 older (M = 65.0 years) participants learned to perform a golf-putting task in two different motor learning situations: one that resulted in infrequent errors or one that resulted in frequent errors. The results demonstrated that infrequent-error learning predominantly relied on nondeclarative, automatic memory processes whereas frequent-error learning predominantly relied on declarative, effortful memory processes: After learning, infrequent-error learners verbalized fewer strategies than frequent-error learners; at transfer, a concurrent, attention-demanding secondary task (tone counting) left motor performance of infrequent-error learners unaffected but impaired that of frequent-error learners. The results showed age-equivalent motor performance in infrequent-error learning but age deficits in frequent-error learning. Motor performance of frequent-error learners required more attention with age, as evidenced by an age deficit on the attention-demanding secondary task. The disappearance of age effects when nondeclarative, automatic memory processes predominated suggests that these processes are preserved with age and are available even early in motor learning.
Nya-Ngatchou, Jean-Jacques; Corl, Dawn; Onstad, Susan; Yin, Tom; Tylee, Tracy; Suhr, Louise; Thompson, Rachel E; Wisse, Brent E
2015-02-01
Hypoglycaemia is associated with morbidity and mortality in critically ill patients, and many hospitals have programmes to minimize hypoglycaemia rates. Recent studies have established the hypoglycaemic patient-day as a key metric and have published benchmark inpatient hypoglycaemia rates on the basis of point-of-care blood glucose data, even though these values are prone to measurement errors. A retrospective cohort study including all patients admitted to Harborview Medical Center Intensive Care Units (ICUs) during 2010 and 2011 was conducted to evaluate a quality improvement programme to reduce inappropriate documentation of point-of-care blood glucose measurement errors. Laboratory Medicine point-of-care blood glucose data and patient charts were reviewed to evaluate all episodes of hypoglycaemia. A quality improvement intervention decreased measurement errors from 31% of hypoglycaemic (<70 mg/dL) patient-days in 2010 to 14% in 2011 (p < 0.001) and decreased the observed hypoglycaemia rate from 4.3% of ICU patient-days to 3.4% (p < 0.001). Hypoglycaemic events were frequently recurrent or prolonged (~40%), and these events are not identified by the hypoglycaemic patient-day metric, which also may be confounded by a large number of very low risk or minimally monitored patient-days. Documentation of point-of-care blood glucose measurement errors likely overestimates ICU hypoglycaemia rates and can be reduced by a quality improvement effort. The currently used hypoglycaemic patient-day metric does not evaluate recurrent or prolonged events that may be more likely to cause patient harm. The monitored patient-day as currently defined may not be the optimal denominator to determine inpatient hypoglycaemic risk. Copyright © 2014 John Wiley & Sons, Ltd.
Impacts of uncertainties in European gridded precipitation observations on regional climate analysis
Prein, Andreas F.; Gobiet, Andreas
2016-01-01
Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio-temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan-European data sets and a set that combines eight very high-resolution station-based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post-processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (which may induce about 60% error in data-sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small-scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate-mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments. PMID:28111497
De novo centriole formation in human cells is error-prone and does not require SAS-6 self-assembly.
Wang, Won-Jing; Acehan, Devrim; Kao, Chien-Han; Jane, Wann-Neng; Uryu, Kunihiro; Tsou, Meng-Fu Bryan
2015-11-26
Vertebrate centrioles normally propagate through duplication, but in the absence of preexisting centrioles, de novo synthesis can occur. Consistently, centriole formation is thought to strictly rely on self-assembly, involving self-oligomerization of the centriolar protein SAS-6. Here, through reconstitution of de novo synthesis in human cells, we surprisingly found that normal looking centrioles capable of duplication and ciliation can arise in the absence of SAS-6 self-oligomerization. Moreover, whereas canonically duplicated centrioles always form correctly, de novo centrioles are prone to structural errors, even in the presence of SAS-6 self-oligomerization. These results indicate that centriole biogenesis does not strictly depend on SAS-6 self-assembly, and may require preexisting centrioles to ensure structural accuracy, fundamentally deviating from the current paradigm.
Comparing errors in Medicaid reporting across surveys: evidence to date.
Call, Kathleen T; Davern, Michael E; Klerman, Jacob A; Lynch, Victoria
2013-04-01
Objective: To synthesize evidence on the accuracy of Medicaid reporting across state and federal surveys. Data Sources: All available validation studies. Study Design: Compare results from existing research to understand variation in reporting across surveys. Data Collection Methods: Synthesize all available studies validating survey reports of Medicaid coverage. Principal Findings: Across all surveys, reporting some type of insurance coverage is better than reporting Medicaid specifically. Therefore, estimates of uninsurance are less biased than estimates of specific sources of coverage. The CPS stands out as being particularly inaccurate. Conclusions: Measuring health insurance coverage is prone to some level of error, yet survey overstatements of uninsurance are modest in most surveys. Accounting for all forms of bias is complex. Researchers should consider adjusting estimates of Medicaid and uninsurance in surveys prone to high levels of misreporting. © Health Research and Educational Trust.
Foster, Patricia L; Niccum, Brittany A; Popodi, Ellen; Townes, Jesse P; Lee, Heewook; MohammedIsmail, Wazim; Tang, Haixu
2018-06-15
Mismatch repair (MMR) is a major contributor to replication fidelity, but its impact varies with sequence context and the nature of the mismatch. Mutation accumulation experiments followed by whole-genome sequencing of MMR-defective E. coli strains yielded ≈30,000 base-pair substitutions, revealing mutational patterns across the entire chromosome. The base-pair substitution spectrum was dominated by A:T > G:C transitions, which occurred predominantly at the center base of 5'NAC3'+5'GTN3' triplets. Surprisingly, growth on minimal medium or at low temperature attenuated these mutations. Mononucleotide runs were also hotspots for base-pair substitutions, and the rate at which these occurred increased with run length. Comparison with ≈2000 base-pair substitutions accumulated in MMR-proficient strains revealed that both kinds of hotspots appeared in the wild-type spectrum and so are likely to be sites of frequent replication errors. In MMR-defective strains transitions were strand biased, occurring twice as often when A and C rather than T and G were on the lagging-strand template. Loss of nucleoside diphosphate kinase increases the cellular concentration of dCTP, which resulted in increased rates of mutations due to misinsertion of C opposite A and T. In an mmr ndk double mutant strain, these mutations were more frequent when the template A and T were on the leading strand, suggesting that lagging-strand synthesis was more error-prone or less well corrected by proofreading than was leading-strand synthesis. Copyright © 2018, Genetics.
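Deriving such context-dependent spectra from sequenced mutations is essentially a bookkeeping exercise: each substitution is tallied under the reference triplet centered on the mutated base. A minimal sketch with a toy reference sequence follows; real analyses must also handle strand complementation and genome-scale input, which are omitted here.

```python
from collections import Counter

def triplet_spectrum(reference, substitutions):
    """Tally base substitutions by the reference triplet centered on the
    mutated base. `substitutions` is an iterable of (position, new_base)
    pairs with 0-based positions into `reference`."""
    counts = Counter()
    for pos, new in substitutions:
        if 1 <= pos < len(reference) - 1:          # skip the sequence ends
            triplet = reference[pos - 1:pos + 2]
            counts[(triplet, f"{reference[pos]}>{new}")] += 1
    return counts

ref = "GGACTTACGT"
# two A>G transitions, both in 5'NAC3' contexts (GAC and TAC)
print(triplet_spectrum(ref, [(2, "G"), (6, "G")]))
```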
Oishi, Yoshihisa; Ohta, Hidenobu; Hirose, Takako; Nakaya, Sachiko; Tsuchiya, Keiji; Nakagawa, Machiko; Kusakawa, Isao; Sato, Toshihiro; Obonai, Toshimasa; Nishida, Hiroshi; Yoda, Hitoshi
2018-06-11
The purpose of this study was to determine the effects of body position (prone, supine and lateral) together with sleep status (wake and sleep) on the cardiorespiratory stability of near-term infants. A total of 53 infants (gestational age at birth 33.2 ± 3.5 weeks; birth weight 1,682 ± 521 g; gestational age at recording 38.6 ± 2.1 weeks; weight at recording 2,273 ± 393 g) were monitored for 24 hours for clinically significant apnea (>15 seconds), bradycardia (<100 bpm), and oxygen desaturation (SpO2 < 90%) in alternating body positions (prone, supine and lateral) by cardiorespiratory monitors and 3-orthogonal-axis accelerometers. Sleep status of the infants was also continuously monitored by actigraphs. No apnea was observed. During wake, severe bradycardia was most frequently observed in the lateral position while, during sleep, severe bradycardia was most frequently observed in the supine position. Desaturation was most frequently observed in the supine and lateral positions during both wake and sleep. Our study suggests that the cardiorespiratory stability of infants is significantly compromised by both body position and sleep status. During both wake and sleep, the prone position induces the most stable cardiorespiratory function in near-term infants.
ERIC Educational Resources Information Center
Al Baghal, Tarek
2017-01-01
Prior studies suggest memories are potentially error prone. Proactive dependent interviewing (PDI) is a possible method to reduce errors in reports of change in longitudinal studies, reminding respondents of previous answers while asking if there has been any change since the last survey. However, little research has been conducted on the impact…
The Error Prone Model and the Basic Grants Validation Selection System. Draft Final Report.
ERIC Educational Resources Information Center
System Development Corp., Falls Church, VA.
An evaluation of existing and proposed mechanisms to ensure data accuracy for the Pell Grant program is reported, and recommendations for efficient detection of fraud and error in the program are offered. One study objective was to examine the existing system of pre-established criteria (PEC), which are validation criteria that select students on…
Estimation of a cover-type change matrix from error-prone data
Steen Magnussen
2009-01-01
Coregistration and classification errors seriously compromise per-pixel estimates of land cover change. A more robust estimation of change is proposed in which adjacent pixels are grouped into 3×3 clusters and treated as a unit of observation. A complete change matrix is recovered in a two-step process. The diagonal elements of a change matrix are recovered from...
EEG and chaos: Description of underlying dynamics and its relation to dissociative states
NASA Technical Reports Server (NTRS)
Ray, William J.
1994-01-01
The goal of this work is the identification of states, especially as related to the process of error production and lapses of awareness as might be experienced during aviation. Given the need for further articulation of the characteristics of an 'error-prone state' or 'hazardous state of awareness,' this NASA grant focused on basic groundwork for the study of the psychophysiology of these states. Specifically, the purpose of this grant was to establish the necessary methodology for addressing three broad questions. The first is how the error-prone state should be conceptualized, and whether it is similar to a dissociative state, a hypnotic state, or absent-mindedness. Over 1200 subjects completed a variety of psychometric measures reflecting internal states and proneness to mental lapses and absent-mindedness; the study suggests that there exists a consistency of patterns displayed by individuals who self-report dissociative experiences, such that those individuals who score high on measures of dissociation also score high on measures of absent-mindedness, errors, and absorption, but not on scales of hypnotizability. The second broad question is whether some individuals are more prone to enter these states than others. A study of 14 young adults who scored either high or low on the dissociative experiences scale performed a series of six tasks. This study suggests that high and low dissociative individuals arrive at the experiment in similar electrocortical states and perform cognitive tasks (e.g., mental math) in a similar manner; it is in the processing of internal emotional states that differences begin to emerge. The third question is whether recent research in nonlinear dynamics, i.e., chaos, offers an addition and/or alternative to traditional signal processing methods, i.e., fast Fourier transforms, and whether chaos procedures can be modified to offer additional information useful in identifying brain states. A preliminary review suggests that current nonlinear dynamical techniques such as dimensional analysis can be successfully applied to electrocortical activity. Using the data set developed in the study of the young adults, chaos analyses using the Farmer algorithm were performed; it is concluded that dimensionality measures reflect information not contained in traditional EEG Fourier analysis.
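As an example of the kind of dimensional analysis referred to, the Grassberger-Procaccia correlation sum is a standard estimator (related to, though not the same as, the Farmer algorithm used in the grant work): the slope of log C(r) against log r over small r approximates the correlation dimension of the attractor underlying a time series. A small sketch with synthetic data:

```python
import numpy as np

def correlation_sum(x, m=5, tau=2, r=0.5):
    """Fraction of embedded point pairs closer than r. The series x is
    delay-embedded into m dimensions with lag tau; the slope of
    log C(r) vs log r over small r estimates the correlation dimension."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i:i + n] for i in range(0, m * tau, tau)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)          # distinct pairs only
    return (d[iu] < r).mean()

x = np.sin(np.linspace(0, 60, 800)) + 0.05 * np.random.randn(800)
print([correlation_sum(x, r=r) for r in (0.1, 0.2, 0.4)])
```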
Rahel Sollmann; Angela M. White; Beth Gardner; Patricia N. Manley
2015-01-01
Small mammals comprise an important component of forest vertebrate communities. Our understanding of how small mammals use forested habitat has relied heavily on studies in forest systems not naturally prone to frequent disturbances. Small mammal populations that evolved in frequent-fire forests, however, may be less restricted to specific habitat conditions due to the...
Weeden, Clare E.; Chen, Yunshun; Ma, Stephen B.; Hu, Yifang; Ramm, Georg; Sutherland, Kate D.; Smyth, Gordon K.
2017-01-01
Lung squamous cell carcinoma (SqCC), the second most common subtype of lung cancer, is strongly associated with tobacco smoking and exhibits genomic instability. The cellular origins and molecular processes that contribute to SqCC formation are largely unexplored. Here we show that human basal stem cells (BSCs) isolated from heavy smokers proliferate extensively, whereas their alveolar progenitor cell counterparts have limited colony-forming capacity. We demonstrate that this difference arises in part because of the ability of BSCs to repair their DNA more efficiently than alveolar cells following ionizing radiation or chemical-induced DNA damage. Analysis of mice harbouring a mutation in the DNA-dependent protein kinase catalytic subunit (DNA-PKcs), a key enzyme in DNA damage repair by nonhomologous end joining (NHEJ), indicated that BSCs preferentially repair their DNA by this error-prone process. Interestingly, polyploidy, a phenomenon associated with genetically unstable cells, was only observed in the human BSC subset. Expression signature analysis indicated that BSCs are the likely cells of origin of human SqCC and that high levels of NHEJ genes in SqCC are correlated with increasing genomic instability. Hence, our results favour a model in which heavy smoking promotes proliferation of BSCs, and their predilection for error-prone NHEJ could lead to the high mutagenic burden that culminates in SqCC. Targeting DNA repair processes may therefore have a role in the prevention and therapy of SqCC. PMID:28125611
Sauer, Juergen; Chavaillaz, Alain; Wastell, David
2016-06-01
This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants experienced one of three automatic fault repair facilities: a fully reliable one (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has consequences. A greater potential for operator errors may be expected when an automatic system fails to diagnose a fault than when it fails to detect one.
Pappas, Derek J; Marin, Wesley; Hollenbach, Jill A; Mack, Steven J
2016-03-01
Bridging ImmunoGenomic Data-Analysis Workflow Gaps (BIGDAWG) is an integrated data-analysis pipeline designed for the standardized analysis of highly-polymorphic genetic data, specifically for the HLA and KIR genetic systems. Most modern genetic analysis programs are designed for the analysis of single nucleotide polymorphisms, but the highly polymorphic nature of HLA and KIR data requires specialized methods of data analysis. BIGDAWG performs case-control data analyses of highly polymorphic genotype data characteristic of the HLA and KIR loci. BIGDAWG performs tests for Hardy-Weinberg equilibrium, calculates allele frequencies and bins low-frequency alleles for k×2 and 2×2 chi-squared tests, and calculates odds ratios, confidence intervals and p-values for each allele. When multi-locus genotype data are available, BIGDAWG estimates user-specified haplotypes and performs the same binning and statistical calculations for each haplotype. For the HLA loci, BIGDAWG performs the same analyses at the individual amino-acid level. Finally, BIGDAWG generates figures and tables for each of these comparisons. BIGDAWG obviates the error-prone reformatting needed to traffic data between multiple programs, and streamlines and standardizes the data-analysis process for case-control studies of highly polymorphic data. BIGDAWG has been implemented as the bigdawg R package and as a free web application at bigdawg.immunogenomics.org. Copyright © 2015 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.
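The binning-and-test step described above can be made concrete with a short sketch. The following Python fragment illustrates the logic only; it is not BIGDAWG's actual interface (BIGDAWG is an R package), and the allele counts and the binning threshold of 5 are assumptions for illustration.

```python
# Illustrative sketch (not BIGDAWG's actual API): bin rare alleles and run a
# k x 2 case-control chi-squared test, as the pipeline is described above.
from collections import Counter
from scipy.stats import chi2_contingency

def bin_rare_alleles(counts, threshold=5):
    """Pool alleles whose total count falls below `threshold` into one bin."""
    binned = Counter()
    for allele, n in counts.items():
        binned[allele if n >= threshold else "binned"] += n
    return binned

# Hypothetical allele counts at one HLA locus.
cases    = Counter({"A*01:01": 40, "A*02:01": 55, "A*03:01": 3, "A*11:01": 2})
controls = Counter({"A*01:01": 60, "A*02:01": 35, "A*03:01": 4, "A*11:01": 1})

case_b, ctrl_b = bin_rare_alleles(cases), bin_rare_alleles(controls)
alleles = sorted(set(case_b) | set(ctrl_b))
table = [[case_b[a] for a in alleles], [ctrl_b[a] for a in alleles]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"k x 2 test over {alleles}: chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```

Binning keeps expected cell counts large enough for the chi-squared approximation to hold, which is the stated rationale for the low-frequency pooling step.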
"Jumping to conclusions" in delusion-prone participants: an experimental economics approach.
van der Leer, Leslie; McKay, Ryan
2014-01-01
That delusional and delusion-prone individuals "jump to conclusions" on probabilistic reasoning tasks is a key finding in cognitive neuropsychiatry. Here we focused on a less frequently investigated aspect of "jumping to conclusions" (JTC): certainty judgments. We incorporated rigorous procedures from experimental economics to eliminate potential confounds of miscomprehension and motivation and systematically investigated the effect of incentives on task performance. Low- and high-delusion-prone participants (n = 109) completed a series of computerised trials; on each trial, they were shown a black or a white fish, caught from one of the two lakes containing fish of both colours in complementary ratios. In the betting condition, participants were given £4 to distribute over the two lakes as they wished; in the control condition, participants simply provided an estimate of how probable each lake was. Deviations from Bayesian probabilities were investigated. Whereas high-delusion-prone participants in both the control and betting conditions underestimated the Bayesian probabilities (i.e. were conservative), low-delusion-prone participants in the control condition underestimated but those in the betting condition provided accurate estimates. In the control condition, there was a trend for high-delusion-prone participants to give higher estimates than low-delusion-prone participants, which is consistent with previous reports of "jumping to conclusions" in delusion-prone participants. However, our findings in the betting condition, where high-delusion-prone participants provided lower estimates than low-delusion-prone participants (who were accurate), are inconsistent with the jumping-to-conclusions effect in both a relative and an absolute sense. Our findings highlight the key role of task incentives and underscore the importance of comparing the responses of delusion-prone participants to an objective rational standard as well as to the responses of non-delusion-prone participants.
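The Bayesian standard against which the estimates were scored can be illustrated with a worked example. The sketch below computes the posterior for one lake; the 60:40 colour ratio and the observed draws are assumptions for illustration, since the abstract does not give the exact task parameters.

```python
# Worked example of the Bayesian benchmark for the two-lakes task.
# The 60:40 colour ratio is assumed; the lakes hold complementary ratios.
def posterior_lake_a(n_black, n_white, p_black_a=0.6, prior_a=0.5):
    """P(lake A | observed fish), given complementary colour ratios."""
    like_a = (p_black_a ** n_black) * ((1 - p_black_a) ** n_white)
    like_b = ((1 - p_black_a) ** n_black) * (p_black_a ** n_white)
    return prior_a * like_a / (prior_a * like_a + (1 - prior_a) * like_b)

# After seeing 3 black fish and 1 white fish:
print(round(posterior_lake_a(3, 1), 3))  # 0.692; lower estimates are "conservative"
```

Estimates below this posterior correspond to the conservatism reported for high-delusion-prone participants, and estimates above it to the classic jumping-to-conclusions pattern.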
STARS Proceedings (3-4 December 1991)
1991-12-04
PROJECT PROCESS OBJECTIVES & ASSOCIATED METRICS: Prioritize ECPs: complexity and error-history measures; error-proneness and past histories of trouble with particular modules are very useful measures. Make vs Buy decisions: does the effort offset the gain in quality relative to buying? Effort and Quality (or defect rate) histories give helpful indications of how to make this decision.
Defense Mapping Agency (DMA) Raster-to-Vector Analysis
1984-11-30
model) to pinpoint critical deficiencies and understand trade-offs between alternative solutions. This may be exemplified by the allocation of human ... process, prone to errors (i.e., human operator eye/motor control limitations), and its time-consuming nature (as a function of data density). It should ... achieved through the facilities of computer interactive graphics. Each error or anomaly is individually identified by a human operator and corrected
Inducible DNA-repair systems in yeast: competition for lesions.
Mitchel, R E; Morrison, D P
1987-03-01
DNA lesions may be recognized and repaired by more than one DNA-repair process. If two repair systems with different error frequencies have overlapping lesion specificity and one or both is inducible, the resulting variable competition for the lesions can change the biological consequences of these lesions. This concept was demonstrated by observing mutation in yeast cells (Saccharomyces cerevisiae) exposed to combinations of mutagens under conditions which influenced the induction of error-free recombinational repair or error-prone repair. Total mutation frequency was reduced in a manner proportional to the dose of 60Co-gamma- or 254 nm UV radiation delivered prior to or subsequent to an MNNG exposure. Suppression was greater per unit radiation dose in cells gamma-irradiated in O2 as compared to N2. A rad3 (excision-repair) mutant gave results similar to wild-type, but mutation in a rad52 (rec-) mutant exposed to MNNG was not suppressed by radiation. Protein-synthesis inhibition with heat shock or cycloheximide indicated that it was the mutation due to MNNG, and not that due to radiation, which had changed. These results indicate that MNNG lesions are recognized by both the recombinational repair system and the inducible error-prone system, but that gamma-radiation induction of error-free recombinational repair resulted in increased competition for the lesions, thereby reducing mutation. Similarly, gamma-radiation exposure resulted in a radiation dose-dependent reduction in mutation due to MNU, EMS, ENU and 8-MOP + UVA, but no reduction in mutation due to MMS. These results suggest that the number of mutational MMS lesions recognizable by the recombinational repair system must be very small relative to those produced by the other agents. MNNG induction of the inducible error-prone systems, however, did not alter mutation frequencies due to ENU or MMS exposure but, in contrast to radiation, increased the mutagenic effectiveness of EMS. These experiments demonstrate that in this lower eukaryote, mutagen exposure does not necessarily result in a fixed risk of mutation, but that the risk can be markedly influenced by a variety of external stimuli, including heat shock or exposure to other mutagens.
An Analysis of Misconceptions in Science Textbooks: Earth science in England and Wales
NASA Astrophysics Data System (ADS)
King, Chris John Henry
2010-03-01
Surveys of the earth science content of all secondary (high school) science textbooks and related publications used in England and Wales have revealed high levels of error/misconception. The 29 science textbooks or textbook series surveyed (51 texts in all) showed poor coverage of National Curriculum earth science and contained a mean level of one earth science error/misconception per page. Science syllabuses and examinations surveyed also showed errors/misconceptions. More than 500 instances of misconception were identified through the surveys. These were analysed for frequency, indicating that the areas of the earth science curriculum most prone to misconception are sedimentary processes/rocks, earthquakes/Earth's structure, and plate tectonics. For the 15 most frequent misconceptions, examples of quotes from the textbooks are given, together with the scientific consensus view, a discussion, and an example of a misconception of similar significance in another area of science. The misconceptions identified in the surveys are compared with those described in the literature. This indicates that the misconceptions found in college students and pre-service/practising science teachers are often also found in published materials, and therefore are likely to reinforce the misconceptions in teachers and their students. The analysis may also reflect the prevalence of earth science misconceptions in the UK secondary (high school) science-teaching population. The analysis and discussion provide the opportunity for writers of secondary science materials to improve their work on earth science and provide a platform for improved teaching and learning of earth science in the future.
Haliasos, N; Rezajooi, K; O'neill, K S; Van Dellen, J; Hudovsky, Anita; Nouraei, Sar
2010-04-01
Clinical coding is the translation of documented clinical activities during an admission to a codified language. Healthcare Resource Groupings (HRGs) are derived from coding data and are used to calculate payment to hospitals in England, Wales and Scotland and to conduct national audit and benchmarking exercises. Coding is an error-prone process and an understanding of its accuracy within neurosurgery is critical for financial, organizational and clinical governance purposes. We undertook a multidisciplinary audit of neurosurgical clinical coding accuracy. Neurosurgeons trained in coding assessed the accuracy of 386 patient episodes. Where clinicians felt a coding error was present, the case was discussed with an experienced clinical coder. Concordance between the initial coder-only clinical coding and the final clinician-coder multidisciplinary coding was assessed. At least one coding error occurred in 71/386 patients (18.4%). There were 36 diagnosis and 93 procedure errors, and in 40 cases (10.4%) the initial HRG changed. Financially, this translated to £111 of lost revenue per patient episode, projecting to £171,452 of annual loss to the department. 85% of all coding errors were due to an accumulation of coding changes that occurred only once in the whole data set. Neurosurgical clinical coding is error-prone. This is financially disadvantageous, and with coding data being the source of comparisons within and between departments, coding inaccuracies paint a distorted picture of departmental activity and subspecialism in audit and benchmarking. Clinical engagement improves accuracy and is encouraged within a clinical governance framework.
Coleman, Aaron B; Lam, Diane P; Soowal, Lara N
2015-01-01
Gaining an understanding of how science works is central to an undergraduate education in biology and biochemistry. The reasoning required to design or interpret experiments that ask specific questions does not come naturally, and is an essential part of the science process skills that must be learned for an understanding of how scientists conduct research. Gaps in these reasoning skills make it difficult for students to become proficient in reading primary scientific literature. In this study, we assessed the ability of students in an upper-division biochemistry laboratory class to use the concepts of correlation, necessity, and sufficiency in interpreting experiments presented in a format and context that is similar to what they would encounter when reading a journal article. The students were assessed before and after completion of a laboratory module where necessary vs. sufficient reasoning was used to design and interpret experiments. The assessment identified two types of errors that were commonly committed by students when interpreting experimental data. When presented with an experiment that only establishes a correlation between a potential intermediate and a known effect, students frequently interpreted the intermediate as being sufficient (causative) for the effect. Also, when presented with an experiment that tests only necessity for an intermediate, they frequently made unsupported conclusions about sufficiency, and vice versa. Completion of the laboratory module and instruction in necessary vs. sufficient reasoning showed some promise for addressing these common errors. © 2015 The International Union of Biochemistry and Molecular Biology.
Clustered Mutation Signatures Reveal that Error-Prone DNA Repair Targets Mutations to Active Genes.
Supek, Fran; Lehner, Ben
2017-07-27
Many processes can cause the same nucleotide change in a genome, making the identification of the mechanisms causing mutations a difficult challenge. Here, we show that clustered mutations provide a more precise fingerprint of mutagenic processes. Of nine clustered mutation signatures identified from >1,000 tumor genomes, three relate to variable APOBEC activity and three are associated with tobacco smoking. An additional signature matches the spectrum of translesion DNA polymerase eta (POLH). In lymphoid cells, these mutations target promoters, consistent with AID-initiated somatic hypermutation. In solid tumors, however, they are associated with UV exposure and alcohol consumption and target the H3K36me3 chromatin of active genes in a mismatch repair (MMR)-dependent manner. These regions normally have a low mutation rate because error-free MMR also targets H3K36me3 chromatin. Carcinogens and error-prone repair therefore redistribute mutations to the more important regions of the genome, contributing a substantial mutation load in many tumors, including driver mutations. Copyright © 2017 Elsevier Inc. All rights reserved.
A statistical approach to evaluate flood risk at the regional level: an application to Italy
NASA Astrophysics Data System (ADS)
Rossi, Mauro; Marchesini, Ivan; Salvati, Paola; Donnini, Marco; Guzzetti, Fausto; Sterlacchini, Simone; Zazzeri, Marco; Bonazzi, Alessandro; Carlesi, Andrea
2016-04-01
Floods are frequent and widespread in Italy, causing every year multiple fatalities and extensive damage to public and private structures. A pre-requisite for the development of mitigation schemes, including financial instruments such as insurance, is the ability to quantify their costs starting from the estimation of the underlying flood hazard. However, comprehensive and coherent information on flood prone areas, and estimates of the frequency and intensity of flood events, are not often available at scales appropriate for risk pooling and diversification. In Italy, River Basins Hydrogeological Plans (PAI), prepared by basin administrations, are the basic descriptive, regulatory, technical and operational tools for environmental planning in flood prone areas. Nevertheless, such plans do not cover the entire Italian territory, having significant gaps along the minor hydrographic network and in ungauged basins. Several process-based modelling approaches have been used by different basin administrations for flood hazard assessment, resulting in an inhomogeneous hazard zonation of the territory. As a result, flood hazard assessments and expected damage estimations across the different Italian basin administrations are not always coherent. To overcome these limitations, we propose a simplified multivariate statistical approach for regional flood hazard zonation coupled with a flood impact model. This modelling approach has been applied in different Italian basin administrations, allowing a preliminary but coherent and comparable estimation of the flood hazard and the associated impact. Model performances are evaluated by comparing the predicted flood prone areas with the corresponding PAI zonation. The proposed approach will provide standardized information (following the EU Floods Directive specifications) on flood risk at a regional level, which can in turn be more readily applied to assess the economic impacts of floods. Furthermore, under the assumption of an appropriate statistical characterization of flood risk, the proposed procedure could be applied straightforwardly outside the national borders, particularly in areas with similar geo-environmental settings.
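Since the abstract does not specify the statistical model, the following is a minimal sketch of one plausible realization of multivariate flood-hazard zonation: logistic regression on synthetic terrain covariates, with agreement against reference labels measured by AUC. The covariates, coefficients and data are all assumptions for illustration.

```python
# A minimal sketch of statistical flood-hazard zonation, assuming (as the
# abstract does not specify the model) a logistic regression on terrain
# covariates, validated against an existing zonation such as the PAI maps.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical per-cell covariates: elevation (m), slope (deg), distance to river (m).
X = rng.uniform([0, 0, 0], [500, 30, 2000], size=(1000, 3))
# Synthetic "observed flood-prone" labels: low, flat, near-river cells flood more.
logit = 2.0 - 0.01 * X[:, 0] - 0.1 * X[:, 1] - 0.002 * X[:, 2]
y = rng.random(1000) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
hazard = model.predict_proba(X)[:, 1]          # per-cell flood-hazard score
print("agreement with labels (AUC):", round(roc_auc_score(y, hazard), 2))
```

In the paper's setting the reference labels would come from the PAI zonation, making the AUC-style comparison the validation step described above.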
Zimmerman, Stefan L; Kim, Woojin; Boonn, William W
2011-01-01
Quantitative and descriptive imaging data are a vital component of the radiology report and are frequently of paramount importance to the ordering physician. Unfortunately, current methods of recording these data in the report are both inefficient and error prone. In addition, the free-text, unstructured format of a radiology report makes aggregate analysis of data from multiple reports difficult or even impossible without manual intervention. A structured reporting work flow has been developed that allows quantitative data created at an advanced imaging workstation to be seamlessly integrated into the radiology report with minimal radiologist intervention. As an intermediary step between the workstation and the reporting software, quantitative and descriptive data are converted into an extensible markup language (XML) file in a standardized format specified by the Annotation and Image Markup (AIM) project of the National Institutes of Health Cancer Biomedical Informatics Grid. The AIM standard was created to allow image annotation data to be stored in a uniform machine-readable format. These XML files containing imaging data can also be stored on a local database for data mining and analysis. This structured work flow solution has the potential to improve radiologist efficiency, reduce errors, and facilitate storage of quantitative and descriptive imaging data for research. Copyright © RSNA, 2011.
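The intermediary XML step can be sketched in a few lines. The fragment below uses Python's standard library to serialize one measurement; the element and attribute names are hypothetical stand-ins for illustration, not the actual AIM schema.

```python
# A minimal sketch of serializing one quantitative measurement to XML.
# Element and attribute names here are hypothetical, not the AIM schema.
import xml.etree.ElementTree as ET

annotation = ET.Element("ImageAnnotation", {"aimVersion": "example"})
calc = ET.SubElement(annotation, "Calculation", {"description": "Lesion diameter"})
value = ET.SubElement(calc, "CalculationResult", {"unitOfMeasure": "mm"})
value.text = "14.2"

# The resulting file could flow to reporting software or be archived for mining.
ET.ElementTree(annotation).write("measurement.xml", encoding="utf-8",
                                 xml_declaration=True)
print(ET.tostring(annotation, encoding="unicode"))
```

A machine-readable format of this kind is what makes the aggregate data mining described in the abstract possible without manual re-entry.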
Swimming and other activities: applied aspects of fish swimming performance
Castro-Santos, Theodore R.; Farrell, A.P.
2011-01-01
Human activities such as hydropower development, water withdrawals, and commercial fisheries often put fish species at risk. Engineered solutions designed to protect species or their life stages are frequently based on assumptions about swimming performance and behaviors. In many cases, however, the appropriate data to support these designs are either unavailable or misapplied. This article provides an overview of the state of knowledge of fish swimming performance – where the data come from and how they are applied – identifying both gaps in knowledge and common errors in application, with guidance on how to avoid repeating mistakes, as well as suggestions for further study.
Prone position for the prevention of lung infection.
Beuret, P
2002-04-01
Pulmonary infection is frequent in brain injured patients. It has been identified as an independent predictor of unfavorable neurological outcome, calling for attempts at prevention. We recently evaluated intermittent prone positioning for the prevention of ventilator-associated pneumonia (VAP) in comatose brain injured patients in a randomized study. 25 patients were included in the prone position (PP) group: they were positioned prone for four hours once daily until they could get up to sit in an armchair; 26 patients were included in the supine position (SP) group. The main characteristics of the patients from the two groups were similar at randomization. The primary end-point was the incidence of lung worsening, defined by an increase in the Lung Injury Score of at least one point since the time of randomization. The incidence of lung worsening was lower in the PP group (12%) than in the SP group (50%) (p=0.003). The incidence of VAP was 38.4% in the SP group and 20% in the PP group (p=0.14). There was no serious complication attributable to prone positioning. In conclusion, the beneficial effect of prone positioning for prevention of lung infection in brain injured patients is not well established. However, in those patients, prone positioning is able to avoid the worsening of pulmonary function, especially in oxygenation.
A hydrostatic weighing method using total lung capacity and a small tank.
Warner, J G; Yeater, R; Sherwood, L; Weber, K
1986-01-01
The purpose of this study was to establish the validity and reliability of a hydrostatic weighing method using total lung capacity (measuring vital capacity with a respirometer at the time of weighing), the prone position, and a small oblong tank. The validity of the method was established by comparing the TLC prone (tank) method against three hydrostatic weighing methods administered in a pool. The three methods included residual volume seated, TLC seated and TLC prone. Eighty male and female subjects were underwater weighed using each of the four methods. Validity coefficients for per cent body fat between the TLC prone (tank) method and the RV seated (pool), TLC seated (pool) and TLC prone (pool) methods were .98, .99 and .99, respectively. A randomised complete block ANOVA found significant differences between the RV seated (pool) method and each of the three TLC methods with respect to both body density and per cent body fat. The differences were negligible with respect to hydrostatic weighing error. Reliability of the TLC prone (tank) method was established by weighing twenty subjects three different times with ten-minute intervals between testing. Multiple correlations yielded reliability coefficients for body density and per cent body fat values of .99 and .99, respectively. It was concluded that the TLC prone (tank) method is valid, reliable and a favourable method of hydrostatic weighing. PMID:3697596
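The computation underlying the method can be illustrated with assumed numbers. In the TLC methods the full lung volume measured at weighing is subtracted from the displaced volume rather than the residual volume; note that at total lung capacity most subjects are buoyant, so the underwater scale reading can be negative.

```python
# A worked example of the hydrostatic-weighing computation with illustrative
# numbers (not data from the study). With the TLC method, the full lung
# volume measured at the time of weighing is subtracted.
def body_density(mass_air_kg, mass_water_kg, water_density, lung_volume_l):
    """Db = Ma / ((Ma - Mw)/Dw - V_lung); masses in kg, volumes in litres."""
    displaced_volume = (mass_air_kg - mass_water_kg) / water_density
    return mass_air_kg / (displaced_volume - lung_volume_l)

def siri_percent_fat(db):
    return 495.0 / db - 450.0  # Siri equation

db = body_density(mass_air_kg=75.0,
                  mass_water_kg=-2.0,    # negative: buoyant at full lungs
                  water_density=0.9957,  # kg/l at roughly 30 degrees C
                  lung_volume_l=6.0)     # TLC used instead of residual volume
print(round(db, 4), round(siri_percent_fat(db), 1))  # ~1.0514, ~20.8% fat
```

The appeal of the TLC approach tested here is practical: vital capacity is easier to measure at the weighing session than residual volume, at the cost of a larger buoyancy correction.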
Stem revenue losses with effective CDM management.
Alwell, Michael
2003-09-01
Effective CDM management not only minimizes revenue losses due to denied claims, but also helps eliminate administrative costs associated with correcting coding errors. Accountability for CDM management should be assigned to a single individual, who ideally reports to the CFO or high-level finance director. If your organization is prone to making billing errors due to CDM deficiencies, you should consider purchasing CDM software to help you manage your CDM.
Baumann, Claudia; Wang, Xiaotian; Yang, Luhan; Viveiros, Maria M
2017-04-01
Mouse oocytes lack canonical centrosomes and instead contain unique acentriolar microtubule-organizing centers (aMTOCs). To test the function of these distinct aMTOCs in meiotic spindle formation, pericentrin (Pcnt), an essential centrosome/MTOC protein, was knocked down exclusively in oocytes by using a transgenic RNAi approach. Here, we provide evidence that disruption of aMTOC function in oocytes promotes spindle instability and severe meiotic errors that lead to pronounced female subfertility. Pcnt-depleted oocytes from transgenic (Tg) mice were ovulated at the metaphase-II stage, but show significant chromosome misalignment, aneuploidy and premature sister chromatid separation. These defects were associated with loss of key Pcnt-interacting proteins (γ-tubulin, Nedd1 and Cep215) from meiotic spindle poles, altered spindle structure and chromosome-microtubule attachment errors. Live-cell imaging revealed disruptions in the dynamics of spindle assembly and organization, together with chromosome attachment and congression defects. Notably, spindle formation was dependent on Ran GTPase activity in Pcnt-deficient oocytes. Our findings establish that meiotic division is highly error-prone in the absence of Pcnt and disrupted aMTOCs, similar to what reportedly occurs in human oocytes. Moreover, these data underscore crucial differences between MTOC-dependent and -independent meiotic spindle assembly. © 2017. Published by The Company of Biologists Ltd.
De novo centriole formation in human cells is error-prone and does not require SAS-6 self-assembly
Wang, Won-Jing; Acehan, Devrim; Kao, Chien-Han; Jane, Wann-Neng; Uryu, Kunihiro; Tsou, Meng-Fu Bryan
2015-01-01
Vertebrate centrioles normally propagate through duplication, but in the absence of preexisting centrioles, de novo synthesis can occur. Consistently, centriole formation is thought to strictly rely on self-assembly, involving self-oligomerization of the centriolar protein SAS-6. Here, through reconstitution of de novo synthesis in human cells, we surprisingly found that normal looking centrioles capable of duplication and ciliation can arise in the absence of SAS-6 self-oligomerization. Moreover, whereas canonically duplicated centrioles always form correctly, de novo centrioles are prone to structural errors, even in the presence of SAS-6 self-oligomerization. These results indicate that centriole biogenesis does not strictly depend on SAS-6 self-assembly, and may require preexisting centrioles to ensure structural accuracy, fundamentally deviating from the current paradigm. DOI: http://dx.doi.org/10.7554/eLife.10586.001 PMID:26609813
Random mutagenesis of BoNT/E Hc nanobody to construct a secondary phage-display library.
Shahi, B; Mousavi Gargari, S L; Rasooli, I; Rajabi Bazl, M; Hoseinpoor, R
2014-08-01
To construct a secondary mutant phage-display library of a recombinant single variable domain (VHH) against botulinum neurotoxin E by error-prone PCR. The gene coding for a specific VHH, derived from a camel immunized with the binding domain of botulinum neurotoxin E (BoNT/E), was amplified by error-prone PCR. Several biopanning rounds were used to screen the phage-displaying BoNT/E Hc nanobodies. The final nanobody, SHMR4, recognized BoNT/E toxin with increased affinity and showed no cross-reactivity with other antigens, especially related BoNT toxins. The constructed nanobody could be a suitable candidate for VHH-based biosensor production to detect Clostridium botulinum type E. Diagnosis and treatment of botulinum neurotoxin poisoning are important. Generation of high-affinity antibodies based on the construction of secondary libraries using an affinity maturation step leads to the development of reagents for precise diagnosis and therapy. © 2014 The Society for Applied Microbiology.
Isolation and characterization of high affinity aptamers against DNA polymerase iota.
Lakhin, Andrei V; Kazakov, Andrei A; Makarova, Alena V; Pavlov, Yuri I; Efremova, Anna S; Shram, Stanislav I; Tarantul, Viacheslav Z; Gening, Leonid V
2012-02-01
Human DNA-polymerase iota (Pol ι) is an extremely error-prone enzyme whose fidelity depends on the sequence context of the template. Using the in vitro systematic evolution of ligands by exponential enrichment (SELEX) procedure, we obtained an oligoribonucleotide with high affinity for human Pol ι, named aptamer IKL5. We determined its dissociation constant with a homogeneous preparation of Pol ι and predicted its putative secondary structure. The aptamer IKL5 specifically inhibited the DNA-polymerase activity of the purified enzyme Pol ι, but did not inhibit the DNA-polymerase activities of human DNA polymerases beta and kappa. IKL5 also suppressed the error-prone DNA-polymerase activity of Pol ι in cellular extracts of the tumor cell line SKOV-3. The aptamer IKL5 is useful for studies of the biological role of Pol ι and as a potential drug to suppress the elevated activity of this enzyme in malignant cells.
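Estimating a dissociation constant of this kind typically involves fitting a single-site binding isotherm to titration data. A minimal sketch with synthetic data (not the IKL5 measurements) follows.

```python
# A sketch of estimating a dissociation constant by fitting the standard
# single-site binding isotherm; the data points are synthetic, not IKL5's.
import numpy as np
from scipy.optimize import curve_fit

def fraction_bound(protein_nM, kd_nM):
    return protein_nM / (protein_nM + kd_nM)  # single-site binding isotherm

protein = np.array([1, 3, 10, 30, 100, 300])              # nM enzyme, hypothetical
bound   = np.array([0.09, 0.23, 0.50, 0.74, 0.91, 0.97])  # fraction aptamer bound

(kd_fit,), _ = curve_fit(fraction_bound, protein, bound, p0=[10.0])
print(f"estimated Kd = {kd_fit:.1f} nM")  # ~10 nM for these synthetic data
```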
Forgetting What Was Where: The Fragility of Object-Location Binding
Pertzov, Yoni; Dong, Mia Yuan; Peich, Muy-Cheng; Husain, Masud
2012-01-01
Although we frequently take advantage of memory for objects' locations in everyday life, how an object's identity is correctly bound to its location in memory remains unclear. Here we examine how information about object identity, location and, crucially, object-location associations are differentially susceptible to forgetting over variable retention intervals and memory loads. In our task, participants relocated objects to their remembered locations using a touchscreen. When participants mislocalized objects, their reports were clustered around the locations of other objects in the array, rather than occurring randomly. These 'swap' errors could not be attributed to simple failure to remember either the identity or the location of the objects, but rather appeared to arise from failure to bind object identity and location in memory. Moreover, such binding failures significantly contributed to the decline in localization performance over retention time. We conclude that when objects are forgotten they do not disappear completely from memory; rather, it is the links between identity and location that are prone to be broken over time. PMID:23118956
Breaks in the 45S rDNA Lead to Recombination-Mediated Loss of Repeats.
Warmerdam, Daniël O; van den Berg, Jeroen; Medema, René H
2016-03-22
rDNA repeats constitute the most heavily transcribed region in the human genome. Tumors frequently display elevated levels of recombination in rDNA, indicating that the repeats are a liability to the genomic integrity of a cell. However, little is known about how cells deal with DNA double-stranded breaks in rDNA. Using selective endonucleases, we show that human cells are highly sensitive to breaks in 45S but not the 5S rDNA repeats. We find that homologous recombination inhibits repair of breaks in 45S rDNA, and this results in repeat loss. We identify the structural maintenance of chromosomes protein 5 (SMC5) as contributing to recombination-mediated repair of rDNA breaks. Together, our data demonstrate that SMC5-mediated recombination can lead to error-prone repair of 45S rDNA repeats, resulting in their loss and thereby reducing cellular viability. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Robust parameter extraction for decision support using multimodal intensive care data
Clifford, G.D.; Long, W.J.; Moody, G.B.; Szolovits, P.
2008-01-01
Digital information flow within the intensive care unit (ICU) continues to grow, with advances in technology and computational biology. Recent developments in the integration and archiving of these data have resulted in new opportunities for data analysis and clinical feedback. New problems associated with ICU databases have also arisen. ICU data are high-dimensional, often sparse, asynchronous and irregularly sampled, as well as being non-stationary, noisy and subject to frequent exogenous perturbations by clinical staff. Relationships between different physiological parameters are usually nonlinear (except within restricted ranges), and the equipment used to measure the observables is often inherently error-prone and biased. The prior probabilities associated with an individual's genetics, pre-existing conditions, lifestyle and ongoing medical treatment all affect prediction and classification accuracy. In this paper, we describe some of the key problems and associated methods that hold promise for robust parameter extraction and data fusion for use in clinical decision support in the ICU. PMID:18936019
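One example of the kind of robust technique surveyed here is median/MAD screening, which resists the gross outliers that corrupt mean- and SD-based estimates. The following is a minimal sketch with hypothetical heart-rate samples; it illustrates the class of method, not a specific algorithm from the paper.

```python
# A simple robust technique of the kind surveyed above: rejecting transient
# artefacts in a noisy heart-rate series using the median and the median
# absolute deviation (MAD), both of which resist gross outliers.
import numpy as np

def mad_filter(samples, n_mads=4.0):
    """Return a boolean mask of samples within n_mads robust deviations."""
    med = np.median(samples)
    mad = np.median(np.abs(samples - med)) * 1.4826  # scales MAD to ~SD
    return np.abs(samples - med) <= n_mads * mad

hr = np.array([72, 74, 71, 73, 210, 72, 75, 0, 74.0])  # probe artefacts: 210, 0
mask = mad_filter(hr)
print("kept:", hr[mask], "robust estimate:", np.median(hr[mask]))
```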
Recognizing the Presidents: Was Alexander Hamilton President?
Roediger, Henry L; DeSoto, K Andrew
2016-05-01
Studies over the past 40 years have shown that Americans can recall about half the U.S. presidents. Do people know the presidents even though they are unable to access them for recall? We investigated this question using the powerful cues of a recognition test. Specifically, we tested the ability of 326 online subjects to recognize U.S. presidents when presented with their full names among various types of lures. The hit rate for presidential recognition was .88, well above the proportion produced in free recall but far from perfect. Presidents Franklin Pierce and Chester Arthur were recognized less than 60% of the time. Interestingly, four nonpresidents were falsely recognized at relatively high rates, and Alexander Hamilton was more frequently identified as president than were several actual presidents. Even on a recognition test, knowledge of American presidents is imperfect and prone to error. The false alarm data support the theory that false fame can arise from contextual familiarity. © The Author(s) 2016.
Enabling fast charging – A battery technology gap assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed, Shabbir; Bloom, Ira; Jansen, Andrew N.
The battery technology literature is reviewed, with an emphasis on key elements that limit extreme fast charging. Key gaps in existing elements of the technology are presented as well as developmental needs. Among these needs are advanced models and methods to detect and prevent lithium plating; new positive-electrode materials which are less prone to stress-induced failure; better electrode designs to accommodate very rapid diffusion in and out of the electrode; and thermal management and pack designs to accommodate the higher operating voltage.
Nelson, Lindsay D.; Patrick, Christopher J.; Bernat, Edward M.
2010-01-01
The externalizing dimension is viewed as a broad dispositional factor underlying risk for numerous disinhibitory disorders. Prior work has documented deficits in event-related brain potential (ERP) responses in individuals prone to externalizing problems. Here, we constructed a direct physiological index of externalizing vulnerability from three ERP indicators and evaluated its validity in relation to criterion measures in two distinct domains: psychometric and physiological. The index was derived from three ERP measures that covaried in their relations with externalizing proneness: the error-related negativity and two variants of the P3. Scores on this ERP composite predicted psychometric criterion variables and accounted for externalizing-related variance in P3 response from a separate task. These findings illustrate how a diagnostic construct can be operationalized as a composite (multivariate) psychophysiological variable (phenotype). PMID:20573054
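The composite-index idea can be sketched simply. Unit weighting of standardized indicators is an assumption here; the paper's exact scoring procedure is not given in the abstract.

```python
# A minimal sketch of composite scoring: standardize each ERP indicator and
# average them into one index. Unit weighting is assumed for illustration.
import numpy as np

def zscore(x):
    return (x - x.mean()) / x.std(ddof=1)

# Hypothetical per-participant indicator scores (ERN amplitude, two P3 variants).
ern, p3a, p3b = np.random.default_rng(1).normal(size=(3, 50))
composite = np.mean([zscore(ern), zscore(p3a), zscore(p3b)], axis=0)
print(composite.shape)  # one physiological index score per participant
```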
McDonald, Catherine C; Kandadai, Venk; Loeb, Helen; Seacrist, Thomas; Lee, Yi-Ching; Bonfiglio, Dana; Fisher, Donald L; Winston, Flaura K
Collisions at left turn intersections are among the most prevalent types of teen driver serious crashes, with inadequate surveillance as a key factor. Risk awareness perception training (RAPT) has shown effectiveness in improving hazard anticipation for latent hazards. The goal of this study was to determine if RAPT version 3 (RAPT-3) improved intersection turning behaviors among novice teen drivers when the hazards were not latent and frequent glancing to multiple locations at the intersection was needed. Teens aged 16-18 with ≤180 days of licensure were randomly assigned to: 1) an intervention group (n=18) that received RAPT-3 (Trained); or 2) a control group (n=19) that received no training (Untrained). Both groups completed the RAPT-3 Baseline Assessment, and the Trained group completed RAPT-3 Training and the RAPT-3 Post Assessment. Training effects were evaluated on a driving simulator. Simulator (gap selection errors and collisions) and eye tracker (traffic check errors) metrics from six left-turn, stop-sign-controlled intersections in the Simulated Driving Assessment (SDA) were analyzed. The Trained group scored significantly higher on the RAPT-3 Post Assessment than the RAPT-3 Baseline Assessment (p < 0.0001). There were no significant differences in traffic check errors, gap selection errors or collisions between Trained and Untrained teens in the SDA. Though Trained teens learned about hazard anticipation related to latent hazards, learning did not translate to performance differences at left-turn, stop-sign-controlled intersections where the hazards were not latent. Our findings point to further research to better understand the challenges teens have with left turn intersections.
Multiple two-polymerase mechanisms in mammalian translesion DNA synthesis.
Livneh, Zvi; Ziv, Omer; Shachar, Sigal
2010-02-15
The encounter of replication forks with DNA lesions may lead to fork arrest and/or the formation of single-stranded gaps. A major strategy to cope with these replication irregularities is translesion DNA synthesis (TLS), in which specialized error-prone DNA polymerases bypass the blocking lesions. Recent studies suggest that TLS across a particular DNA lesion may involve as many as four different TLS polymerases, acting in two-polymerase reactions in which insertion by a particular polymerase is followed by extension by another polymerase. Insertion determines the accuracy and mutagenic specificity of the TLS reaction, and is carried out by one of several polymerases such as polη, polκ or polι. In contrast, extension is carried out primarily by polζ. In cells from XPV patients, which are deficient in TLS across cyclobutane pyrimidine dimers (CPD) due to a deficiency in polη, TLS is carried out by at least two backup reactions, each involving two polymerases: one involves polκ and polζ, and the other polι and polζ. These mechanisms may also assist polη in normal cells under an excessive amount of UV lesions.
Belief-bias reasoning in non-clinical delusion-prone individuals.
Anandakumar, T; Connaughton, E; Coltheart, M; Langdon, R
2017-03-01
It has been proposed that people with delusions have difficulty inhibiting beliefs (i.e., "doxastic inhibition") so as to reason about them as if they might not be true. We used a continuity approach to test this proposal in non-clinical adults scoring high and low in psychometrically assessed delusion-proneness. High delusion-prone individuals were expected to show greater difficulty than low delusion-prone individuals on "conflict" items of a "belief-bias" reasoning task (i.e. when required to reason logically about statements that conflicted with reality), but not on "non-conflict" items. Twenty high delusion-prone and twenty low delusion-prone participants (according to the Peters et al. Delusions Inventory) completed a belief-bias reasoning task and tests of IQ, working memory and general inhibition (Excluded Letter Fluency, Stroop and Hayling Sentence Completion). High delusion-prone individuals showed greater difficulty than low delusion-prone individuals on the Stroop and Excluded Letter Fluency tests of inhibition, but no greater difficulty on the conflict versus non-conflict items of the belief-bias task. They did, however, make significantly more errors overall on the belief-bias task, despite controlling for IQ, working memory and general inhibitory control. The study had a relatively small sample size and used non-clinical participants to test a theory of cognitive processing in individuals with clinically diagnosed delusions. Results failed to support a role for doxastic inhibitory failure in non-clinical delusion-prone individuals. These individuals did, however, show difficulty with conditional reasoning about statements that may or may not conflict with reality, independent of any general cognitive or inhibitory deficits. Copyright © 2016 Elsevier Ltd. All rights reserved.
Perspective-taking abilities in the balance between autism tendencies and psychosis proneness.
Abu-Akel, Ahmad M; Wood, Stephen J; Hansen, Peter C; Apperly, Ian A
2015-06-07
Difficulties with the ability to appreciate the perspective of others (mentalizing) are central to both autism and schizophrenia spectrum disorders. While the disorders are diagnostically independent, they can co-occur in the same individual. The effect of such co-morbidity is hypothesized to worsen mentalizing abilities. The recent influential 'diametric brain theory', however, suggests that the disorders are etiologically and phenotypically diametrical, predicting opposing effects on one's mentalizing abilities. To test these contrasting hypotheses, we evaluated the effect of psychosis and autism tendencies on the perspective-taking (PT) abilities of 201 neurotypical adults, on the assumption that autism tendencies and psychosis proneness are heritable dimensions of normal variation. We show that while both autism tendencies and psychosis proneness induced PT errors, their interaction reduced these errors. Our study is, to our knowledge, the first to observe that co-occurring autistic and psychotic traits can exert opposing influences on performance, producing a normalizing effect, possibly by way of their diametrical effects on socio-cognitive abilities. This advances the notion that some individuals may, to some extent, be buffered against developing either illness, or may present fewer symptoms, owing to a balanced expression of autistic and psychotic liability. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Community Development in Drought-Prone Areas.
ERIC Educational Resources Information Center
Belakere, Ramegowda; Jayaramaiah, K. M.
1997-01-01
A survey of 100 farmers and 120 government development workers in India showed that farmers felt seeds, fertilizer, and relief employment were inadequate, while livestock feeding and soil/water conservation were helpful government interventions for drought. A large gap appeared between farmers' and government workers' perceptions of the…
18 CFR 801.8 - Flood plain management and protection.
Code of Federal Regulations, 2014 CFR
2014-04-01
... nonstructural nature for the protection of flood plains subject to frequent flooding. (3) Assist in the study and classification of flood prone lands to ascertain the relative risk of flooding, and establish...
18 CFR 801.8 - Flood plain management and protection.
Code of Federal Regulations, 2012 CFR
2012-04-01
... nonstructural nature for the protection of flood plains subject to frequent flooding. (3) Assist in the study and classification of flood prone lands to ascertain the relative risk of flooding, and establish...
18 CFR 801.8 - Flood plain management and protection.
Code of Federal Regulations, 2013 CFR
2013-04-01
... nonstructural nature for the protection of flood plains subject to frequent flooding. (3) Assist in the study and classification of flood prone lands to ascertain the relative risk of flooding, and establish...
Disclosure of Medical Errors in Oman
Norrish, Mark I. K.
2015-01-01
Objectives: This study aimed to provide insight into the preferences for and perceptions of medical error disclosure (MED) by members of the public in Oman. Methods: Between January and June 2012, an online survey was used to collect responses from 205 members of the public across five governorates of Oman. Results: A disclosure gap was revealed between the respondents' preferences for MED and perceived current MED practices in Oman. This disclosure gap extended to both the type of error and the person most likely to disclose the error. Errors resulting in patient harm were found to have a strong influence on individuals' perceived quality of care. In addition, full disclosure was found to be highly valued by respondents and able to mitigate a perceived lack of care in cases where medical errors led to damages. Conclusion: The perceived disclosure gap between respondents' MED preferences and perceptions of current MED practices in Oman needs to be addressed in order to increase public confidence in the national health care system. PMID:26052463
Laterality, spatial abilities, and accident proneness.
Voyer, Susan D; Voyer, Daniel
2015-01-01
Although handedness as a measure of cerebral specialization has been linked to accident proneness, more direct measures of laterality are rarely considered. The present study aimed to fill that gap in the existing research. In addition, individual difference factors in accident proneness were further examined with the inclusion of mental rotation and navigation abilities measures. One hundred and forty participants were asked to complete the Mental Rotations Test, the Santa Barbara Sense of Direction scale, the Greyscales task, the Fused Dichotic Word Test, the Waterloo Handedness Questionnaire, and a grip strength task before answering questions related to number of accidents in five areas. Results indicated that handedness scores, absolute visual laterality score, absolute response time on the auditory laterality index, and navigation ability were significant predictors of the total number of accidents. Results are discussed with respect to cerebral hemispheric specialization and risk-taking attitudes and behavior.
Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna
2013-05-01
Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. Existing literature offers limited insight into the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery, in order to identify gaps in medication-related information exchange which lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of the data primarily focused on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors, namely: (1) the design of medication charts, which complicates order processing and record keeping; (2) the lack of coordination mechanisms between participants, which results in misalignment of local practices; and (3) reliance on restricted-bandwidth communication channels, mainly telephone and fax, which complicates information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through the identification of gaps in information exchange. Understanding the dynamics of the cognitive process can inform the design of interventions to manage errors and improve residents' safety. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
DOT National Transportation Integrated Search
2011-03-01
Traffic Management applications such as ramp metering, incident detection, travel time prediction, and vehicle : classification greatly depend on the accuracy of data collected from inductive loop detectors, but these data are : prone to various erro...
First order error corrections in common introductory physics experiments
NASA Astrophysics Data System (ADS)
Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team
As a part of introductory physics courses, students perform different standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these types of errors are ignored by students, and not much thought is paid to their sources. However, paying attention to the factors that give rise to errors helps students make better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of errors in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better, more refined physical models and understand physics concepts in greater detail. We thank the Clarion University undergraduate student grant program for financial support of this project.
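A worked first-order error analysis for a standard pendulum measurement of g illustrates the kind of correction discussed; the measured values and uncertainties below are assumed for illustration.

```python
# Illustrative first-order error analysis for a pendulum lab (values assumed):
# g = 4*pi^2*L/T^2, with first-order relative error
# dg/g = sqrt((dL/L)^2 + (2*dT/T)^2).
import math

L, dL = 1.000, 0.002   # length and its uncertainty (m)
T, dT = 2.007, 0.010   # period and its uncertainty (s)

g = 4 * math.pi**2 * L / T**2
rel_err = math.sqrt((dL / L)**2 + (2 * dT / T)**2)
print(f"g = {g:.2f} +/- {g * rel_err:.2f} m/s^2")  # ~9.80 +/- 0.10
```

The factor of 2 on the period term shows why timing errors dominate in this experiment, which is exactly the kind of insight a first-order treatment gives students.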
Cardoso-Cita, Z; Perea-Pérez, B; Albarrán-Juan, M E; Labajo-González, M E; López-Durán, L; Marco-Martínez, F; Santiago-Saéz, A
2016-01-01
Traumatology and Orthopaedic Surgery is one of the specialities attracting the most complaints, owing to its scope and complexity. The aim of this study is to determine the characteristics of complaints made against medical specialists in Traumatology, taking into account variables that might influence both the presenting of the complaint and the resolution of the process. An analysis was performed on 303 legal judgments (1995-2011) collected in the health legal judgements archive of the Madrid School of Medicine, which is linked to the Westlaw Aranzadi database. Civil jurisdiction was the most used. The specific processes with most complaints were bone-joint disorders, followed by vascular-nerve problems and infections. The injury most often complained about was in the lower limb, particularly the knee. The most frequent general cause of complaint was surgical treatment error, followed by diagnostic error. There was a lack of information in 14.9% of cases. There was sentencing in 49.8% of the cases, with compensation mainly being less than 50,000 euros. Traumatology and Orthopaedic Surgery is a speciality prone to complaints of malpractice. The number of sentences against traumatologists is high, but compensations are usually less than 50,000 euros. The main reason for sentencing is surgical treatment error; surgery is thus the procedure in which precautions should be maximised. Judgements due to lack of information are frequent, making adequate doctor-patient communication essential, together with correct completion of the informed consent. Copyright © 2014 SECOT. Published by Elsevier España. All rights reserved.
Critical older driver errors in a national sample of serious U.S. crashes.
Cicchino, Jessica B; McCartt, Anne T
2015-07-01
Older drivers are at increased risk of crash involvement per mile traveled. The purpose of this study was to examine older driver errors in serious crashes to determine which errors are most prevalent. The National Highway Traffic Safety Administration's National Motor Vehicle Crash Causation Survey collected in-depth, on-scene data for a nationally representative sample of 5470 U.S. police-reported passenger vehicle crashes during 2005-2007 for which emergency medical services were dispatched. There were 620 crashes involving 647 drivers aged 70 and older, representing 250,504 crash-involved older drivers. The proportion of various critical errors made by drivers aged 70 and older were compared with those made by drivers aged 35-54. Driver error was the critical reason for 97% of crashes involving older drivers. Among older drivers who made critical errors, the most common were inadequate surveillance (33%) and misjudgment of the length of a gap between vehicles or of another vehicle's speed, illegal maneuvers, medical events, and daydreaming (6% each). Inadequate surveillance (33% vs. 22%) and gap or speed misjudgment errors (6% vs. 3%) were more prevalent among older drivers than middle-aged drivers. Seventy-one percent of older drivers' inadequate surveillance errors were due to looking and not seeing another vehicle or failing to see a traffic control rather than failing to look, compared with 40% of inadequate surveillance errors among middle-aged drivers. About two-thirds (66%) of older drivers' inadequate surveillance errors and 77% of their gap or speed misjudgment errors were made when turning left at intersections. When older drivers traveled off the edge of the road or traveled over the lane line, this was most commonly due to non-performance errors such as medical events (51% and 44%, respectively), whereas middle-aged drivers were involved in these crash types for other reasons. Gap or speed misjudgment errors and inadequate surveillance errors were significantly more prevalent among female older drivers than among female middle-aged drivers, but the prevalence of these errors did not differ significantly between older and middle-aged male drivers. These errors comprised 51% of errors among older female drivers but only 31% among older male drivers. Efforts to reduce older driver crash involvements should focus on diminishing the likelihood of the most common driver errors. Countermeasures that simplify or remove the need to make left turns across traffic such as roundabouts, protected left turn signals, and diverging diamond intersection designs could decrease the frequency of inadequate surveillance and gap or speed misjudgment errors. In the future, vehicle-to-vehicle and vehicle-to-infrastructure communications may also help protect older drivers from these errors. Copyright © 2015 Elsevier Ltd. All rights reserved.
Frequently Asked Questions about the Indian Environmental General Assistance Program (GAP)
Answers to frequently asked questions about the Indian Environmental General Assistance Program (GAP) Guidance on the Award and Management of General Assistance Agreements for Tribes and Intertribal Consortia (Guidance)
An automated calibration method for non-see-through head mounted displays.
Gilson, Stuart J; Fitzgibbon, Andrew W; Glennerster, Andrew
2011-08-15
Accurate calibration of a head mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet, existing calibration methods are time consuming and depend on human judgements, making them error prone, and are often limited to optical see-through HMDs. Building on our existing approach to HMD calibration (Gilson et al., 2008), we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside an HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in multiple positions. The centroids of the markers on the calibration object are recovered and their locations re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the HMD display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors without the need for error-prone human judgements. Copyright © 2011 Elsevier B.V. All rights reserved.
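As a hedged illustration of the final step above: once marker centroids have been re-expressed in HMD-grid pixel coordinates, standard camera-calibration routines can recover the display parameters. The sketch below uses OpenCV's calibrateCamera; the function and variable names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch, assuming marker centroids have already been re-expressed
# in HMD display pixel coordinates as the paper describes. Names are
# illustrative; this is not the authors' code.
import numpy as np
import cv2

def calibrate_hmd(world_points, grid_points, display_size):
    """world_points: list of (N, 3) tracked 3D marker centroids per view;
    grid_points: the same markers re-expressed in HMD display pixels."""
    obj = [np.asarray(p, dtype=np.float32) for p in world_points]
    img = [np.asarray(p, dtype=np.float32) for p in grid_points]
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj, img, display_size, None, None)
    # K encodes focal length and principal point (intrinsics); rvecs/tvecs
    # give the display pose (extrinsics); rms is the reprojection error
    # against real image features, of the kind the abstract reports.
    return rms, K, dist, rvecs, tvecs
```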
Lee, Kwangha; Kim, Mi-Young; Yoo, Jung-Wan; Hong, Sang-Bum; Lim, Chae-Man
2010-01-01
Background/Aims Ventilating patients with acute respiratory distress syndrome (ARDS) in the prone position has been shown to improve arterial oxygenation, but prolonged prone positioning frequently requires continuous deep sedation, which may be harmful to patients. We evaluated the meaning of early gas exchange in patients with severe ARDS under prolonged (≥ 12 hours) prone positioning. Methods We retrospectively studied 96 patients (mean age, 60.1 ± 15.6 years; 75% men) with severe ARDS (PaO2/FiO2 ≤ 150 mmHg) admitted to a medical intensive care unit (MICU). The terms "PaO2 response" and "PaCO2 response" represented responses that resulted in increases in the PaO2/FiO2 ratio of ≥ 20 mmHg and decreases in PaCO2 of ≥ 1 mmHg, respectively, 8 to 12 hours after first placement in the prone position. Results The mean duration of prone positioning was 78.5 ± 61.2 hours, and the 28-day mortality rate after MICU admission was 56.3%. No significant difference in clinical characteristics was observed between PaO2 and PaCO2 responders and non-responders. The PaO2 responders after prone positioning showed an improved 28-day outcome, compared with non-responders by Kaplan-Meier survival estimates (p < 0.05 by the log-rank test), but the PaCO2 responders did not. Conclusions Our results suggest that the early oxygenation improvement after prone positioning might be associated with an improved 28-day outcome and may be an indicator to maintain prolonged prone positioning in patients with severe ARDS. PMID:20195404
Enabling fast charging - A battery technology gap assessment
NASA Astrophysics Data System (ADS)
Ahmed, Shabbir; Bloom, Ira; Jansen, Andrew N.; Tanim, Tanvir; Dufek, Eric J.; Pesaran, Ahmad; Burnham, Andrew; Carlson, Richard B.; Dias, Fernando; Hardy, Keith; Keyser, Matthew; Kreuzer, Cory; Markel, Anthony; Meintz, Andrew; Michelbacher, Christopher; Mohanpurkar, Manish; Nelson, Paul A.; Robertson, David C.; Scoffield, Don; Shirk, Matthew; Stephens, Thomas; Vijayagopal, Ram; Zhang, Jiucai
2017-11-01
The battery technology literature is reviewed, with an emphasis on key elements that limit extreme fast charging. Key gaps in existing elements of the technology are presented, as well as developmental needs. Among these needs are advanced models and methods to detect and prevent lithium plating; new positive-electrode materials that are less prone to stress-induced failure; better electrode designs to accommodate very rapid diffusion in and out of the electrode; methods to measure temperature distributions during fast charge to enable and validate models; and thermal management and pack designs to accommodate the higher operating voltage.
Grunting's competitive advantage: Considerations of force and distraction
Sinnett, Scott; Maglinti, Cj; Kingstone, Alan
2018-01-01
Background Grunting is pervasive in many athletic contests, and empirical evidence suggests that it may result in one exerting more physical force. It may also distract one's opponent. That grunts can distract was supported by a study showing that it led to an opponent being slower and more error prone when viewing tennis shots. An alternative explanation was that grunting masks the sound of a ball being hit. The present study provides evidence against this alternative explanation by testing the effect of grunting in a sport—mixed martial arts—where distraction, rather than masking, is the most likely mechanism. Methodology/Principal findings We first confirmed that kicking force is increased when a grunt is performed (Experiment 1), and then adapted methodology used in the tennis study to mixed martial arts (Experiment 2). Lifting the foot to kick is a silent act, and therefore there is nothing for a grunt to mask, i.e., its effect on an opponent’s response time and/or accuracy can likely be attributed to attentional distraction. Participants viewed videos of a trained mixed martial artist kicking that included, or did not include, a simulated grunt. The task was to determine as quickly as possible whether the kick was traveling upward or downward. Overall, and replicating the tennis finding, the present results indicate that a participant's response to a kick was delayed and more error prone when a simulated grunt was present. Conclusions/Significance The present findings indicate that simulated grunting may distract an opponent, leading to slower and more error prone responses. The implications for martial arts in particular, and the broader question of whether grunting should be perceived as 'cheating' in sports, are examined. PMID:29470505
Grunting's competitive advantage: Considerations of force and distraction.
Sinnett, Scott; Maglinti, Cj; Kingstone, Alan
2018-01-01
Grunting is pervasive in many athletic contests, and empirical evidence suggests that it may result in one exerting more physical force. It may also distract one's opponent. That grunts can distract was supported by a study showing that it led to an opponent being slower and more error prone when viewing tennis shots. An alternative explanation was that grunting masks the sound of a ball being hit. The present study provides evidence against this alternative explanation by testing the effect of grunting in a sport (mixed martial arts) where distraction, rather than masking, is the most likely mechanism. We first confirmed that kicking force is increased when a grunt is performed (Experiment 1), and then adapted methodology used in the tennis study to mixed martial arts (Experiment 2). Lifting the foot to kick is a silent act, and therefore there is nothing for a grunt to mask, i.e., its effect on an opponent's response time and/or accuracy can likely be attributed to attentional distraction. Participants viewed videos of a trained mixed martial artist kicking that included, or did not include, a simulated grunt. The task was to determine as quickly as possible whether the kick was traveling upward or downward. Overall, and replicating the tennis finding, the present results indicate that a participant's response to a kick was delayed and more error prone when a simulated grunt was present. The present findings indicate that simulated grunting may distract an opponent, leading to slower and more error prone responses. The implications for martial arts in particular, and the broader question of whether grunting should be perceived as 'cheating' in sports, are examined.
Lo, Te-Wen; Pickle, Catherine S; Lin, Steven; Ralston, Edward J; Gurling, Mark; Schartner, Caitlin M; Bian, Qian; Doudna, Jennifer A; Meyer, Barbara J
2013-10-01
Exploitation of custom-designed nucleases to induce DNA double-strand breaks (DSBs) at genomic locations of choice has transformed our ability to edit genomes, regardless of their complexity. DSBs can trigger either error-prone repair pathways that induce random mutations at the break sites or precise homology-directed repair pathways that generate specific insertions or deletions guided by exogenously supplied DNA. Prior editing strategies using site-specific nucleases to modify the Caenorhabditis elegans genome achieved only the heritable disruption of endogenous loci through random mutagenesis by error-prone repair. Here we report highly effective strategies using TALE nucleases and RNA-guided CRISPR/Cas9 nucleases to induce error-prone repair and homology-directed repair to create heritable, precise insertion, deletion, or substitution of specific DNA sequences at targeted endogenous loci. Our robust strategies are effective across nematode species diverged by 300 million years, including necromenic nematodes (Pristionchus pacificus), male/female species (Caenorhabditis species 9), and hermaphroditic species (C. elegans). Thus, genome-editing tools now exist to transform nonmodel nematode species into genetically tractable model organisms. We demonstrate the utility of our broadly applicable genome-editing strategies by creating reagents generally useful to the nematode community and reagents specifically designed to explore the mechanism and evolution of X chromosome dosage compensation. By developing an efficient pipeline involving germline injection of nuclease mRNAs and single-stranded DNA templates, we engineered precise, heritable nucleotide changes both close to and far from DSBs to gain or lose genetic function, to tag proteins made from endogenous genes, and to excise entire loci through targeted FLP-FRT recombination.
Ketkar, Amit; Zafar, Maroof K; Banerjee, Surajit; Marquez, Victor E; Egli, Martin; Eoff, Robert L
2012-06-27
Y-family DNA polymerases participate in replication stress and DNA damage tolerance mechanisms. The properties that allow these enzymes to copy past bulky adducts or distorted template DNA can result in a greater propensity for them to make mistakes. Of the four human Y-family members, human DNA polymerase iota (hpol ι) is the most error-prone. In the current study, we elucidate the molecular basis for improving the fidelity of hpol ι through use of the fixed-conformation nucleotide North-methanocarba-2'-deoxyadenosine triphosphate (N-MC-dATP). Three crystal structures were solved of hpol ι in complex with DNA containing a template 2'-deoxythymidine (dT) paired with an incoming dNTP or modified nucleotide triphosphate. The ternary complex of hpol ι inserting N-MC-dATP opposite dT reveals that the adenine ring is stabilized in the anti orientation about the pseudo-glycosyl torsion angle, which mimics precisely the mutagenic arrangement of dGTP:dT normally preferred by hpol ι. The stabilized anti conformation occurs without notable contacts from the protein but likely results from constraints imposed by the bicyclo[3.1.0]hexane scaffold of the modified nucleotide. Unmodified dATP and South-MC-dATP each adopt syn glycosyl orientations to form Hoogsteen base pairs with dT. The Hoogsteen orientation exhibits weaker base-stacking interactions and is less catalytically favorable than anti N-MC-dATP. Thus, N-MC-dATP corrects the error-prone nature of hpol ι by preventing the Hoogsteen base-pairing mode normally observed for hpol ι-catalyzed insertion of dATP opposite dT. These results provide a previously unrecognized means of altering the efficiency and the fidelity of a human translesion DNA polymerase.
Ketkar, Amit; Zafar, Maroof K.; Banerjee, Surajit; Marquez, Victor E.; Egli, Martin; Eoff, Robert L
2012-01-01
Y-family DNA polymerases participate in replication stress and DNA damage tolerance mechanisms. The properties that allow these enzymes to copy past bulky adducts or distorted template DNA can result in a greater propensity for them to make mistakes. Of the four human Y-family members, human DNA polymerase iota (hpol ι) is the most error-prone. In the current study, we elucidate the molecular basis for improving the fidelity of hpol ι through use of the fixed-conformation nucleotide North-methanocarba-2′-deoxyadenosine triphosphate (N-MC-dATP). Three crystal structures were solved of hpol ι in complex with DNA containing a template 2′-deoxythymidine (dT) paired with an incoming dNTP or modified nucleotide triphosphate. The ternary complex of hpol ι inserting N-MC-dATP opposite dT reveals that the adenine ring is stabilized in the anti orientation about the pseudo-glycosyl torsion angle (χ), which mimics precisely the mutagenic arrangement of dGTP:dT normally preferred by hpol ι. The stabilized anti conformation occurs without notable contacts from the protein but likely results from constraints imposed by the bicyclo[3.1.0]hexane scaffold of the modified nucleotide. Unmodified dATP and South-MC-dATP each adopt syn glycosyl orientations to form Hoogsteen base pairs with dT. The Hoogsteen orientation exhibits weaker base stacking interactions and is less catalytically favorable than anti N-MC-dATP. Thus, N-MC-dATP corrects the error-prone nature of hpol ι by preventing the Hoogsteen base-pairing mode normally observed for hpol ι-catalyzed insertion of dATP opposite dT. These results provide a previously unrecognized means of altering the efficiency and the fidelity of a human translesion DNA polymerase. PMID:22632140
Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; ...
2017-02-15
Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4).
Software for Quantifying and Simulating Microsatellite Genotyping Error
Johnson, Paul C.D.; Haydon, Daniel T.
2007-01-01
Microsatellite genetic marker data are exploited in a variety of fields, including forensics, gene mapping, kinship inference and population genetics. In all of these fields, inference can be thwarted by failure to quantify and account for data errors, and kinship inference in particular can benefit from separating errors into two distinct classes: allelic dropout and false alleles. Pedant is MS Windows software for estimating locus-specific maximum likelihood rates of these two classes of error. Estimation is based on comparison of duplicate error-prone genotypes: neither reference genotypes nor pedigree data are required. Other functions include: plotting of error rate estimates and confidence intervals; simulations for performing power analysis and for testing the robustness of error rate estimates to violation of the underlying assumptions; and estimation of expected heterozygosity, which is a required input. The program, documentation and source code are available from http://www.stats.gla.ac.uk/~paulj/pedant.html. PMID:20066126
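For intuition only, the duplicate-comparison idea can be reduced to a toy single-rate model with a closed-form maximum-likelihood estimate; Pedant's actual likelihoods are locus-specific and separate allelic dropout from false alleles, which this sketch does not attempt.

```python
# Toy illustration only: closed-form ML estimate of a single per-typing
# error rate from duplicate genotype comparisons. Pedant's real model
# distinguishes allelic dropout from false alleles and works per locus.
from math import sqrt

def mle_error_rate(mismatched_pairs, total_pairs):
    # If each typing errs independently with rate e, a duplicate pair
    # mismatches with probability p = 2e(1 - e); invert at p = m/n.
    p = mismatched_pairs / total_pairs
    if p >= 0.5:
        raise ValueError("mismatch fraction too high for this toy model")
    return (1 - sqrt(1 - 2 * p)) / 2

print(mle_error_rate(7, 200))  # e.g., 7 mismatching pairs out of 200
```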
Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; Rudinger, Kenneth; Mizrahi, Jonathan; Fortier, Kevin; Maunz, Peter
2017-01-01
Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4). PMID:28198466
Large-scale contamination of microbial isolate genomes by Illumina PhiX control.
Mukherjee, Supratim; Huntemann, Marcel; Ivanova, Natalia; Kyrpides, Nikos C; Pati, Amrita
2015-01-01
With the rapid growth and development of sequencing technologies, genomes have become the new go-to for exploring solutions to some of the world's biggest challenges such as searching for alternative energy sources and exploration of genomic dark matter. However, progress in sequencing has been accompanied by its share of errors that can occur during template or library preparation, sequencing, imaging or data analysis. In this study we screened over 18,000 publicly available microbial isolate genome sequences in the Integrated Microbial Genomes database and identified more than 1000 genomes that are contaminated with PhiX, a control frequently used during Illumina sequencing runs. Approximately 10% of these genomes have been published in literature and 129 contaminated genomes were sequenced under the Human Microbiome Project. Raw sequence reads are prone to contamination from various sources and are usually eliminated during downstream quality control steps. Detection of PhiX contaminated genomes indicates a lapse in either the application or effectiveness of proper quality control measures. The presence of PhiX contamination in several publicly available isolate genomes can result in additional errors when such data are used in comparative genomics analyses. Such contamination of public databases has far-reaching consequences in the form of erroneous data interpretation and analyses, and necessitates better measures to proofread raw sequences before releasing them to the broader scientific community.
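A minimal sketch of such a screen, flagging contigs that share exact 31-mers with PhiX; the FASTA file names are hypothetical, and production screens typically use alignment tools rather than exact k-mer matching.

```python
# Crude contamination screen sketch: flag assembled contigs sharing exact
# k-mers with the PhiX174 genome. File names are illustrative assumptions.
def kmers(seq, k=31):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def read_fasta(path):
    seqs, name, buf = {}, None, []
    with open(path) as fh:
        for line in fh:
            if line.startswith(">"):
                if name:
                    seqs[name] = "".join(buf)
                name, buf = line[1:].split()[0], []
            else:
                buf.append(line.strip())
    if name:
        seqs[name] = "".join(buf)
    return seqs

phix = kmers(next(iter(read_fasta("phix174.fasta").values())))
for contig, seq in read_fasta("assembly.fasta").items():
    hits = len(kmers(seq) & phix)
    if hits:
        print(contig, hits, "shared 31-mers -> possible PhiX contamination")
```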
Cutter, Jayne; Jordan, Sue
2013-11-01
To examine the frequency of, and factors influencing, reporting of mucocutaneous and percutaneous injuries in operating theatres. Surgeons and peri-operative nurses risk acquiring blood-borne viral infections during surgical procedures. Appropriate first-aid and prophylactic treatment after an injury can significantly reduce the risk of infection. However, studies indicate that injuries often go unreported. The 'systems approach' to error reduction relies on reporting incidents and near misses. Failure to report will compromise safety. A postal survey of all surgeons and peri-operative nurses engaged in exposure prone procedures in nine Welsh hospitals, face-to-face interviews with selected participants and telephone interviews with Infection Control Nurses. The response rate was 51.47% (315/612). Most respondents reported one or more percutaneous (183/315, 58.1%) and/or mucocutaneous injuries (68/315, 21.6%) in the 5 years preceding the study. Only 54.9% (112/204) reported every injury. Surgeons were poorer at reporting: 70/133 (52.6%) reported all or >50% of their injuries compared with 65/71 nurses (91.5%). Injuries are frequently under-reported, possibly compromising safety in operating theatres. A significant number of inoculation injuries are not reported. Factors influencing under-reporting were identified. This knowledge can assist managers in improving reporting and encouraging a robust safety culture within operating departments. © 2012 John Wiley & Sons Ltd.
Effect of lethality on the extinction and on the error threshold of quasispecies.
Tejero, Hector; Marín, Arturo; Montero, Francisco
2010-02-21
In this paper the effect of lethality on error threshold and extinction has been studied in a population of error-prone self-replicating molecules. For given lethality and a simple fitness landscape, three dynamic regimes can be obtained: quasispecies, error catastrophe, and extinction. Using a simple model in which molecules are classified as master, lethal and non-lethal mutants, it is possible to obtain the mutation rates of the transitions between the three regimes analytically. The numerical resolution of the extended model, in which molecules are classified depending on their Hamming distance to the master sequence, confirms the results obtained in the simple model and shows how an error catastrophe regime changes when lethality is taken into account. (c) 2009 Elsevier Ltd. All rights reserved.
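For background, the classical error-threshold condition of the Eigen quasispecies model (without lethality) can be stated compactly; the paper's analytical transition rates between the three regimes extend this picture and are not reproduced here.

```latex
% Standard error-threshold condition in the Eigen quasispecies model,
% given as background; the lethality-aware transitions derived in the
% paper are not reproduced here.
\documentclass{article}
\begin{document}
With per-site copying fidelity $q$, sequence length $L$ and superiority
$\sigma$ of the master sequence, the fidelity of a full copy is
$Q = q^{L}$, and the quasispecies persists when
\[
  Q\,\sigma > 1
  \quad\Longleftrightarrow\quad
  \mu = 1 - q \;<\; 1 - \sigma^{-1/L} \;\approx\; \frac{\ln\sigma}{L},
\]
so the critical per-site mutation rate scales as $\ln\sigma / L$.
\end{document}
```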
Ability/Motivation Interactions in Complex Skill Acquisition
1988-04-28
attentional resources. Finally, in the declarative knowledge phase, performance is slow and error prone. Once the learner has come to an adequate cognitive...mediation by the learner. After a substantial amount of consistent task practice, skilled performance becomes fast, accurate, and the task can often be
DNA polymerase η mutational signatures are found in a variety of different types of cancer.
Rogozin, Igor B; Goncearenco, Alexander; Lada, Artem G; De, Subhajyoti; Yurchenko, Vyacheslav; Nudelman, German; Panchenko, Anna R; Cooper, David N; Pavlov, Youri I
2018-01-01
DNA polymerase (pol) η is a specialized error-prone polymerase with at least two quite different and contrasting cellular roles: to mitigate the genetic consequences of solar UV irradiation, and promote somatic hypermutation in the variable regions of immunoglobulin genes. Misregulation and mistargeting of pol η can compromise genome integrity. We explored whether the mutational signature of pol η could be found in datasets of human somatic mutations derived from normal and cancer cells. A substantial excess of single and tandem somatic mutations within known pol η mutable motifs was noted in skin cancer as well as in many other types of human cancer, suggesting that somatic mutations in A:T bases generated by DNA polymerase η are a common feature of tumorigenesis. Another peculiarity of pol η mutational signatures, mutations in YCG motifs, led us to speculate that error-prone DNA synthesis opposite methylated CpG dinucleotides by misregulated pol η in tumors might constitute an additional mechanism of cytosine demethylation in this hypermutable dinucleotide.
Inducible error-prone repair in B. subtilis. Final report, September 1, 1979-June 30, 1981
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yasbin, R. E.
1981-06-01
The research performed under this contract has been concentrated on the relationship between inducible DNA repair systems, mutagenesis and the competent state in the gram positive bacterium Bacillus subtilis. The following results have been obtained from this research: (1) competent Bacillus subtilis cells have been developed into a sensitive tester system for carcinogens; (2) competent B. subtilis cells have an efficient excision-repair system, however, this system will not function on bacteriophage DNA taken into the cell via the process of transfection; (3) DNA polymerase III is essential in the mechanism of the process of W-reactivation; (4) B. subtilis strains cured of their defective prophages have been isolated and are now being developed for gene cloning systems; (5) protoplasts of B. subtilis have been shown capable of acquiring DNA repair enzymes (i.e., enzyme therapy); and (6) a plasmid was characterized which enhanced inducible error-prone repair in a gram positive organism.
A Regularized Volumetric Fusion Framework for Large-Scale 3D Reconstruction
NASA Astrophysics Data System (ADS)
Rajput, Asif; Funk, Eugen; Börner, Anko; Hellwich, Olaf
2018-07-01
Modern computational resources combined with low-cost depth sensing systems have enabled mobile robots to reconstruct 3D models of surrounding environments in real time. Unfortunately, low-cost depth sensors are prone to producing undesirable estimation noise in depth measurements, which either yields depth outliers or introduces surface deformations in the reconstructed model. Conventional 3D fusion frameworks integrate multiple error-prone depth measurements over time to reduce noise effects; therefore, additional constraints such as steady sensor movement and high frame rates are required for high-quality 3D models. In this paper we propose a generic 3D fusion framework with a controlled regularization parameter which inherently reduces noise at the time of data fusion. This allows the proposed framework to generate high-quality 3D models without enforcing additional constraints. Evaluation of the reconstructed 3D models shows that the proposed framework outperforms state-of-the-art techniques in terms of both absolute reconstruction error and processing time.
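A minimal sketch of regularized weighted fusion in the spirit described here, written as a damped running average over a voxel's truncated signed distance; the damping rule and parameter names are illustrative assumptions, not the authors' formulation.

```python
# Illustrative sketch only: a weighted TSDF voxel update whose new-sample
# weight is damped by a regularization parameter lam, so noisy depth
# measurements perturb the stored distance less. Not the paper's exact rule.
import numpy as np

def fuse_tsdf(D, W, d_new, w_new, lam=0.5):
    """D, W: current TSDF values and weights (arrays or scalars);
    d_new, w_new: new truncated signed distance and its raw weight."""
    w_eff = w_new / (1.0 + lam)            # regularization damps the update
    D_out = (W * D + w_eff * d_new) / (W + w_eff)
    return D_out, W + w_eff

# Example: a voxel at distance 0.10 m absorbing a noisy 0.16 m measurement.
print(fuse_tsdf(np.array(0.10), np.array(4.0), 0.16, 1.0))
```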
Road-Crossing Safety in Virtual Reality: A Comparison of Adolescents With and Without ADHD
ERIC Educational Resources Information Center
Clancy, Tamera A.; Rucklidge, Julia J.; Owen, Dean
2006-01-01
This study investigated the potential accident-proneness of adolescents with attention deficit hyperactivity disorder (ADHD) in a hazardous road-crossing environment. An immersive virtual reality traffic gap-choice task was used to determine whether ADHD adolescents show more unsafe road-crossing behavior than controls. Participants (ages 13 to…
Somatic stem cells and the kinetics of mutagenesis and carcinogenesis
Cairns, John
2002-01-01
There is now strong experimental evidence that epithelial stem cells arrange their sister chromatids at mitosis such that the same template DNA strands stay together through successive divisions; DNA labeled with tritiated thymidine in infancy is still present in the stem cells of adult mice even though these cells are incorporating (and later losing) bromodeoxyuridine [Potten, C. S., Owen, G., Booth, D. & Booth, C. (2002) J. Cell Sci. 115, 2381–2388]. But a cell that preserves “immortal strands” will avoid the accumulation of replication errors only if it inhibits those pathways for DNA repair that involve potentially error-prone resynthesis of damaged strands, and this appears to be a property of intestinal stem cells because they are extremely sensitive to the lethal effects of agents that damage DNA. It seems that the combination, in the stem cell, of immortal strands and the choice of death rather than error-prone repair makes epithelial stem cell systems resistant to short exposures to DNA-damaging agents, because the stem cell accumulates few if any errors, and any errors made by the daughters are destined to be discarded. This paper discusses these issues and shows that they lead to a model that explains the strange kinetics of mutagenesis and carcinogenesis in adult mammalian tissues. Coincidentally, the model also can explain why cancers arise even though the spontaneous mutation rate of differentiated mammalian cells is not high enough to generate the multiple mutations needed to form a cancer and why loss of nucleotide-excision repair does not significantly increase the frequency of the common internal cancers. PMID:12149477
Hybrid learning in signalling games
NASA Astrophysics Data System (ADS)
Barrett, Jeffrey A.; Cochran, Calvin T.; Huttegger, Simon; Fujiwara, Naoki
2017-09-01
Lewis-Skyrms signalling games have been studied under a variety of low-rationality learning dynamics. Reinforcement dynamics are stable but slow and prone to evolving suboptimal signalling conventions. A low-inertia trial-and-error dynamic like win-stay/lose-randomise is fast and reliable at finding perfect signalling conventions but unstable in the context of noise or agent error. Here we consider a low-rationality hybrid of reinforcement and win-stay/lose-randomise learning that exhibits the virtues of both. This hybrid dynamics is reliable, stable and exceptionally fast.
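A toy simulation of one possible reinforcement/win-stay-lose-randomise hybrid in a two-state Lewis signalling game is sketched below; the reset rule and its probability are illustrative guesses, not the specific dynamics analysed in the paper.

```python
# Toy hybrid dynamics sketch: reinforce urn weights on success (stay),
# and occasionally reset the relevant urns to uniform on failure
# (lose-randomise). The reset probability is an arbitrary choice.
import random

STATES = range(2)                            # two states, signals, and acts
sender = {s: [1.0, 1.0] for s in STATES}     # urn weights over signals
receiver = {m: [1.0, 1.0] for m in STATES}   # urn weights over acts

def draw(weights):
    return random.choices(range(len(weights)), weights=weights)[0]

wins = 0
for t in range(20000):
    state = random.choice(list(STATES))
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:
        wins += 1
        sender[state][signal] += 1.0         # reinforcement: strengthen
        receiver[signal][act] += 1.0
    elif random.random() < 0.01:             # occasional lose-randomise
        sender[state] = [1.0, 1.0]
        receiver[signal] = [1.0, 1.0]

print(f"success rate: {wins / 20000:.3f}")   # approaches 1 as a convention forms
```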
Reduced vision selectively impairs spatial updating in fall-prone older adults.
Barrett, Maeve M; Doheny, Emer P; Setti, Annalisa; Maguinness, Corrina; Foran, Timothy G; Kenny, Rose Anne; Newell, Fiona N
2013-01-01
The current study examined the role of vision in spatial updating and its potential contribution to an increased risk of falls in older adults. Spatial updating was assessed using a path integration task in fall-prone and healthy older adults. Specifically, participants conducted a triangle completion task in which they were guided along two sides of a triangular route and were then required to return, unguided, to the starting point. During the task, participants could either clearly view their surroundings (full vision) or visuo-spatial information was reduced by means of translucent goggles (reduced vision). Path integration performance was measured by calculating the distance and angular deviation from the participant's return point relative to the starting point. Gait parameters for the unguided walk were also recorded. We found equivalent performance across groups on all measures in the full vision condition. In contrast, in the reduced vision condition, where participants had to rely on interoceptive cues to spatially update their position, fall-prone older adults made significantly larger distance errors relative to healthy older adults. However, there were no other performance differences between fall-prone and healthy older adults. These findings suggest that fall-prone older adults, compared to healthy older adults, have greater difficulty in reweighting other sensory cues for spatial updating when visual information is unreliable.
Evaluation of Post Flooding Shoulder Reconditioning : final report.
DOT National Transportation Integrated Search
2017-02-01
The Ohio Department of Transportation (ODOT) Holmes County Garage has to frequently maintain the shoulders of the hilly and curvy highways, which are prone to shoulder erosion and material loss due to floods or heavy rain. Currently, the problematic ...
Teaching Smart People How to Learn.
ERIC Educational Resources Information Center
Argyris, Chris
1991-01-01
Professionals frequently are least able to learn because they have rarely experienced learning-related failure and are prone to defensive reasoning. Companies can become learning organizations by helping managers and employees learn to analyze their behavior and learn productively. (SK)
Critical incident reporting in emergency medicine: results of the prehospital reports.
Hohenstein, Christian; Hempel, Dorothea; Schultheis, Kerstin; Lotter, Oliver; Fleischmann, Thomas
2014-05-01
Medical errors frequently contribute to morbidity and mortality. Prehospital emergency medicine is prone to incidents that can lead to immediate deadly consequences. Critical incident reporting can identify typical problems and be the basis for structured risk management in order to reduce and mitigate these incidents. We set up a free access internet website for German-speaking countries, with an anonymous reporting system for emergency medical services personnel. After a 7-year study period, an expert team analysed and classified the incidents into staff related, equipment related, organisation and tactics, or other. 845 reports were entered in the study period. Physicians reported 44% of incidents, paramedics 42%. Most patients were in a life-threatening or potentially life-threatening situation (82%), and only 53% of all incidents had no influence on the outcome of the patient. Staff-related problems were responsible for 56% of the incidents, when it came to harm, 78% of these incidents were staff related. Incident reporting in prehospital emergency medicine can identify system weaknesses. Most of the incidents were reported during care of patients in life-threatening conditions with a high impact on patient outcome. Staff-related problems contributed to the most frequent and most severe incidents. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Association of cytokine gene polymorphisms and risk factors with otitis media proneness in children.
Miljanović, Olivera; Cikota-Aleksić, Bojana; Likić, Dragan; Vojvodić, Danilo; Jovićević, Ognjen; Magić, Zvonko
2016-06-01
In order to assess the association between gene polymorphisms and otitis media (OM) proneness, tumor necrosis factor alpha (TNFA) -308, interleukin (IL) 10-1082 and -3575, IL6 -597, IL2 -330, and CD14 -159 genotyping was performed in 58 OM-prone children and 85 controls who were exposed to a similar number and frequency of environmental and host risk factors. The frequencies of genotypes (wild type vs. genotypes containing at least one polymorphic allele) were not significantly different between groups, except for IL10 -1082. Polymorphic genotypes IL10 -1082 GA and GG were more frequent in OM-prone children than in the control group (RR 1.145, 95 % CI 1.011-1.298; p = 0.047). However, logistic regression did not confirm IL10 -1082 polymorphic genotypes as an independent risk factor for OM proneness. The present study indicates that high-producing IL10 -1082 GA/GG genotypes may increase the risk for OM proneness in its carriers when exposed to other environmental/host risk factors (day care attendance, passive smoking, male sex, respiratory infections, and atopic manifestations). This study revealed no significant independent genetic association, but the lack of breastfeeding in infancy was found to be the only independent risk factor for development of the OM-prone phenotype, implying that breastfeeding had a protective role in the development of susceptibility to OM. What is known: • The pathogenesis of OM is of multifactorial nature, dependent on infection, environmental factors, and immune response of the child. • Cytokines and CD14 play an important role in the presentation and clinical course of otitis media, but a clear link with otitis media proneness was not established. What is new: • This is the first clinical and genetic study on Montenegrin children with the otitis media-prone phenotype. • The study revealed that high-producing IL10 -1082 genotypes may influence otitis media proneness in children exposed to other environmental/host risk factors.
Adopting Extensible Business Reporting Language (XBRL): A Grounded Theory
ERIC Educational Resources Information Center
Cruz, Marivic
2010-01-01
In 2007 and 2008, government challenges consisted of error prone, manually intensive, and inefficient environments for financial reporting. Banking regulators worldwide faced issues with respect to transparency, timeliness, quality, and managing risks associated with accounting opacity. The general problem was the existing reporting standards and…
2014-10-02
intervals (Neil, Tailor, Marquez, Fenton, & Hear, 2007). This is cumbersome, error prone and usually inaccurate. Even though a universal framework...Science. Neil, M., Tailor, M., Marquez, D., Fenton, N., & Hear. (2007). Inference in Bayesian networks using dynamic discretisation. Statistics
Systematic study of error sources in supersonic skin-friction balance measurements
NASA Technical Reports Server (NTRS)
Allen, J. M.
1976-01-01
An experimental study was performed to investigate potential error sources in data obtained with a self-nulling, moment-measuring, skin-friction balance. The balance was installed in the sidewall of a supersonic wind tunnel, and independent measurements of the three forces contributing to the balance output (skin friction, lip force, and off-center normal force) were made for a range of gap size and element protrusion. The relatively good agreement between the balance data and the sum of these three independently measured forces validated the three-term model used. No advantage to a small gap size was found; in fact, the larger gaps were preferable. Perfect element alignment with the surrounding test surface resulted in very small balance errors. However, if small protrusion errors are unavoidable, no advantage was found in having the element slightly below the surrounding test surface rather than above it.
Sheerin, Fintan K; Curtis, Elizabeth; de Vries, Jan
2012-06-01
This pilot study sought to examine the relationship between functional health patterns and accident proneness. A quantitative-descriptive design was employed assessing accident proneness by collecting data on the occurrence of accidents among a sample of university graduates, and examining this in relation to biographical data and information collated using the Functional Health Pattern Assessment Screening Tool (FHPAST). Data were analyzed using descriptive and inferential statistics. One FHPAST factor predicted more frequent sports accidents. Age was also shown to be a significant predictor but in a counterintuitive way, with greater age predicting less accident proneness. The FHPAST may have a role to play in accident prediction. Functional health pattern assessment may be useful for predicting accidents. © 2012, The Authors. International Journal of Nursing Knowledge © 2012, NANDA International.
Linger, Michele L; Ray, Glen E; Zachar, Peter; Underhill, Andrea T; LoBello, Steven G
2007-10-01
Studies of graduate students learning to administer the Wechsler scales have generally shown that training is not associated with the development of scoring proficiency. Many studies report on the reduction of aggregated administration and scoring errors, a strategy that does not highlight the reduction of errors on subtests identified as most prone to error. This study evaluated the development of scoring proficiency specifically on the Wechsler (WISC-IV and WAIS-III) Vocabulary, Comprehension, and Similarities subtests during training by comparing a set of 'early test administrations' to 'later test administrations.' Twelve graduate students enrolled in an intelligence-testing course participated in the study. Scoring errors (e.g., incorrect point assignment) were evaluated on the students' actual practice administration test protocols. Errors on all three subtests declined significantly when scoring errors on 'early' sets of Wechsler scales were compared to those made on 'later' sets. However, correcting these subtest scoring errors did not cause significant changes in subtest scaled scores. Implications for clinical instruction and future research are discussed.
Heft Lemisphere: Exchanges Predominate in Segmental Speech Errors
ERIC Educational Resources Information Center
Nooteboom, Sieb G.; Quene, Hugo
2013-01-01
In most collections of segmental speech errors, exchanges are less frequent than anticipations and perseverations. However, it has been suggested that in inner speech exchanges might be more frequent than either anticipations or perseverations, because many half-way repaired errors (Yew...uhh...New York) are classified as repaired anticipations,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veldeman, Liv, E-mail: liv.veldeman@uzgent.be; Department of Radiotherapy and Experimental Cancer Research, Ghent University, Ghent; Schiettecatte, Kimberly
Purpose: To report the 2-year cosmetic outcome of a randomized trial comparing prone and supine whole-breast irradiation in large-breasted patients. Methods and Materials: One hundred patients with a (European) cup size ≥C were included. Before and 2 years after radiation therapy, clinical endpoints were scored and digital photographs were taken with the arms alongside the body and with the arms elevated 180°. Three observers rated the photographs using the 4-point Harvard cosmesis scale. Cosmesis was also evaluated with the commercially available Breast Cancer Conservative Treatment cosmetic results (BCCT.core) software. Results: Two-year follow-up data and photographs were available for 94 patients (47 supine-treated and 47 prone-treated). Patient and treatment characteristics were not significantly different between the 2 cohorts. A worsening of color change occurred more frequently in the supine than in the prone cohort (19/46 vs 10/46 patients, respectively, P=.04). Five patients in the prone group (11%) and 12 patients in the supine group (26%) presented with a worse scoring of edema at 2-year follow-up (P=.06). For retraction and fibrosis, no significant differences were found between the 2 cohorts, although scores were generally worse in the supine cohort. The cosmetic scoring by 3 observers did not reveal differences between the prone and supine groups. On the photographs with the hands up, 7 patients in the supine group versus none in the prone group had a worsening of cosmesis of 2 categories using the BCCT.core software (P=.02). Conclusion: With a limited follow-up of 2 years, better cosmetic outcome was observed in prone-treated than in supine-treated patients.
Simplified stereo-optical ultrasound plane calibration
NASA Astrophysics Data System (ADS)
Hoßbach, Martin; Noll, Matthias; Wesarg, Stefan
2013-03-01
Image guided therapy is a natural concept and commonly used in medicine. In anesthesia, a common task is the injection of an anesthetic close to a nerve under freehand ultrasound guidance. Several guidance systems exist using electromagnetic tracking of the ultrasound probe as well as the needle, providing the physician with a precise projection of the needle into the ultrasound image. This, however, requires additional expensive devices. We suggest using optical tracking with miniature cameras attached to a 2D ultrasound probe to achieve a higher acceptance among physicians. The purpose of this paper is to present an intuitive method to calibrate freehand ultrasound needle guidance systems employing a rigid stereo camera system. State of the art methods are based on a complex series of error prone coordinate system transformations which makes them susceptible to error accumulation. By reducing the amount of calibration steps to a single calibration procedure we provide a calibration method that is equivalent, yet not prone to error accumulation. It requires a linear calibration object and is validated on three datasets utilizing different calibration objects: a 6 mm metal bar and a 1.25 mm biopsy needle were used for experiments. Compared to existing calibration methods for freehand ultrasound needle guidance systems, we are able to achieve higher accuracy results while additionally reducing the overall calibration complexity.
Efficient Variational Quantum Simulator Incorporating Active Error Minimization
NASA Astrophysics Data System (ADS)
Li, Ying; Benjamin, Simon C.
2017-04-01
One of the key applications for quantum computers will be the simulation of other quantum systems that arise in chemistry, materials science, etc., in order to accelerate the process of discovery. It is important to ask the following question: Can this simulation be achieved using near-future quantum processors, of modest size and under imperfect control, or must it await the more distant era of large-scale fault-tolerant quantum computing? Here, we propose a variational method involving closely integrated classical and quantum coprocessors. We presume that all operations in the quantum coprocessor are prone to error. The impact of such errors is minimized by boosting them artificially and then extrapolating to the zero-error case. In comparison to a more conventional optimized Trotterization technique, we find that our protocol is efficient and appears to be fundamentally more robust against error accumulation.
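The boost-and-extrapolate idea can be pictured as a simple extrapolation over expectation values measured at artificially inflated error rates; the numbers below are placeholders, not data from the paper.

```python
# Hedged illustration of error boosting plus zero-error extrapolation
# (often called zero-noise extrapolation): fit the observable as a function
# of the error-rate multiplier and evaluate the fit at zero. The expectation
# values are mock placeholders, not results from the paper.
import numpy as np

boost = np.array([1.0, 2.0, 3.0])         # error-rate multipliers
values = np.array([0.91, 0.83, 0.76])     # noisy expectation values (mock)

coeffs = np.polyfit(boost, values, deg=1) # linear fit E(lambda)
e_zero = np.polyval(coeffs, 0.0)          # extrapolated zero-error estimate
print(f"zero-noise estimate: {e_zero:.3f}")
```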
NASA Astrophysics Data System (ADS)
Debchoudhury, Shantanab; Earle, Gregory
2017-04-01
Retarding Potential Analyzers (RPA) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables in the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on uncertainties in estimation; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in estimation of thermal plasma parameters by adding noise to the simulated data derived from the existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits since a number of nano-satellite missions are focused on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.
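The study's noise-injection methodology can be mimicked with a generic Monte Carlo experiment: repeatedly refit a synthetic curve under additive Gaussian noise and inspect the spread of the recovered parameter. The exponential model below is a stand-in, not the actual RPA current equation.

```python
# Generic Monte Carlo sketch in the spirit of the study; the model and all
# numbers are illustrative stand-ins, not the RPA current-voltage equation.
import numpy as np
from scipy.optimize import curve_fit

def model(v, amplitude, temp):
    return amplitude * np.exp(-v / temp)

v = np.linspace(0.0, 5.0, 100)
truth = (1.0, 1.5)
rng = np.random.default_rng(0)
estimates = []
for _ in range(500):
    noisy = model(v, *truth) + rng.normal(0.0, 0.02, v.size)   # additive noise
    popt, _ = curve_fit(model, v, noisy, p0=(0.5, 1.0))
    estimates.append(popt[1])                                  # recovered "temp"
print("parameter spread (std):", np.std(estimates))
```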
Verzotto, Davide; M Teo, Audrey S; Hillmer, Axel M; Nagarajan, Niranjan
2016-01-01
Resolution of complex repeat structures and rearrangements in the assembly and analysis of large eukaryotic genomes is often aided by a combination of high-throughput sequencing and genome-mapping technologies (for example, optical restriction mapping). In particular, mapping technologies can generate sparse maps of large DNA fragments (150 kilobase pairs (kbp) to 2 Mbp) and thus provide a unique source of information for disambiguating complex rearrangements in cancer genomes. Despite their utility, combining high-throughput sequencing and mapping technologies has been challenging because of the lack of efficient and sensitive map-alignment algorithms for robustly aligning error-prone maps to sequences. We introduce a novel seed-and-extend glocal (short for global-local) alignment method, OPTIMA (and a sliding-window extension for overlap alignment, OPTIMA-Overlap), which is the first to create indexes for continuous-valued mapping data while accounting for mapping errors. We also present a novel statistical model, agnostic with respect to technology-dependent error rates, for conservatively evaluating the significance of alignments without relying on expensive permutation-based tests. We show that OPTIMA and OPTIMA-Overlap outperform other state-of-the-art approaches (1.6-2 times more sensitive) and are more efficient (170-200%) and precise in their alignments (nearly 99% precision). These advantages are independent of the quality of the data, suggesting that our indexing approach and statistical evaluation are robust, provide improved sensitivity and guarantee high precision.
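One way to index continuous-valued, error-prone fragment sizes is to quantize them into coarse bins before exact-match seeding, which is roughly the spirit of OPTIMA's index; the bin width, seed length, and example maps are illustrative only, and a real implementation would also probe neighbouring bins to handle quantization-boundary effects.

```python
# Illustrative sketch: quantize consecutive optical-map fragment sizes into
# coarse bins so sizing errors still hit exact-match seeds in an index.
# BIN and SEED are arbitrary choices, not OPTIMA's parameters.
BIN = 2000      # base pairs per quantization bin
SEED = 3        # consecutive fragments per composite seed

def seeds(fragments):
    q = [f // BIN for f in fragments]
    return [tuple(q[i:i + SEED]) for i in range(len(q) - SEED + 1)]

ref = [15000, 4000, 22000, 9000, 13000]     # reference map (fragment sizes)
index = {s: i for i, s in enumerate(seeds(ref))}

query = [15400, 4100, 22300]                # noisy copy of a reference region
hits = [index.get(s) for s in seeds(query)]
print(hits)                                 # candidate anchors for extension
```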
List of Error-Prone Abbreviations, Symbols, and Dose Designations
... Example entries: "UD" (intended: unit dose), misinterpreted as meaning to give an entire infusion as a unit (bolus) dose (e.g., diltiazem 125 mg IV infusion "UD"); correction: use "as directed". "Nitro" drip (intended: nitroglycerin infusion), mistaken as sodium nitroprusside infusion; correction: use the complete drug name. ...
Improving Advising Using Technology and Data Analytics
ERIC Educational Resources Information Center
Phillips, Elizabeth D.
2013-01-01
Traditionally, the collegiate advising system provides each student with a personal academic advisor who designs a pathway to the degree for that student in face-to-face meetings. Ideally, this is a supportive mentoring relationship. In truth, however, this system is highly inefficient, error prone, expensive, and a source of ubiquitous student…
Finite element modeling of light propagation in fruit under illumination of continuous-wave beam
USDA-ARS?s Scientific Manuscript database
Spatially-resolved spectroscopy provides a means for measuring the optical properties of biological tissues, based on analytical solutions to diffusion approximation for semi-infinite media under the normal illumination of infinitely small size light beam. The method is, however, prone to error in m...
Finite element simulation of light transfer in turbid media under structured illumination
USDA-ARS?s Scientific Manuscript database
Spatial-frequency domain (SFD) imaging technique allows to estimate the optical properties of biological tissues in a wide field of view. The technique is, however, prone to error in measurement because the two crucial assumptions used for deriving the analytical solution to diffusion approximation ...
Propensity Score Weighting with Error-Prone Covariates
ERIC Educational Resources Information Center
McCaffrey, Daniel F.; Lockwood, J. R.; Setodji, Claude M.
2011-01-01
Inverse probability weighting (IPW) estimates are widely used in applications where data are missing due to nonresponse or censoring or in observational studies of causal effects where the counterfactuals cannot be observed. This extensive literature has shown the estimators to be consistent and asymptotically normal under very general conditions,…
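For readers unfamiliar with IPW, a basic Hajek-style weighted estimator on synthetic data looks like the sketch below; it deliberately ignores the paper's central issue of measurement error in the covariates.

```python
# Minimal IPW sketch on synthetic data: estimate propensity scores with
# logistic regression, then form a Hajek-weighted treatment-effect estimate.
# Illustrative only; covariate measurement error is not modeled here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
x = rng.normal(size=(500, 2))                       # covariates
t = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))     # treatment assignment
y = x[:, 0] + 2 * t + rng.normal(size=500)          # outcome, true effect = 2

ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
w1, w0 = t / ps, (1 - t) / (1 - ps)                 # inverse-probability weights
ate = np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)
print(f"IPW ATE estimate: {ate:.2f} (true effect 2)")
```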
ERIC Educational Resources Information Center
Sylwester, Robert
1994-01-01
Studies show our emotional system is a complex, widely distributed, and error-prone system that defines our basic personality early in life and is quite resistant to change. This article describes our emotional system's major parts (the peptides that carry emotional information and the body and brain structures that activate and regulate emotions)…
Online Hand Holding in Fixing Computer Glitches
ERIC Educational Resources Information Center
Goldsborough, Reid
2005-01-01
According to most surveys, computer manufacturers such as HP put out reliable products, and computers in general are less troublesome than in the past. But personal computers are still prone to bugs, conflicts, viruses, spyware infestations, hacker and phishing attacks, and--most of all--user error. Unfortunately, technical support from computer…
Optimizing DNA assembly based on statistical language modelling.
Fang, Gang; Zhang, Shemin; Dong, Yafei
2017-12-15
By successively assembling genetic parts such as BioBricks according to grammatical models, complex genetic constructs composed of dozens of functional blocks can be built. However, each category of genetic parts usually includes a few or many parts. With an increasing quantity of genetic parts, the process of assembling more than a few sets of these parts can be expensive, time consuming and error prone. At the last step of assembly, it is difficult to decide which part should be selected. Based on a statistical language model, a probability distribution P(S) over strings S that attempts to reflect how frequently a string S occurs as a sentence, the most commonly used parts are selected. Then, a dynamic programming algorithm was designed to figure out the solution of maximum probability. The algorithm optimizes the results of a genetic design based on a grammatical model and finds an optimal solution. In this way, redundant operations can be reduced and the time and cost required for conducting biological experiments can be minimized. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
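The maximum-probability selection can be pictured as a Viterbi-style dynamic program over part categories, maximizing the summed log-probability of adjacent-part transitions; the categories and probability table below are invented for illustration.

```python
# Viterbi-style sketch: choose one part per category to maximize the total
# log-probability of adjacent-part transitions under a bigram language model.
# The part names and probabilities are invented for illustration.
categories = [["pA", "pB"], ["qA", "qB"], ["rA", "rB"]]
logp = {("pA", "qA"): -0.2, ("pA", "qB"): -1.0,
        ("pB", "qA"): -0.9, ("pB", "qB"): -0.3,
        ("qA", "rA"): -0.4, ("qA", "rB"): -1.2,
        ("qB", "rA"): -1.1, ("qB", "rB"): -0.2}

best = {p: (0.0, [p]) for p in categories[0]}      # best score/path ending at p
for nxt in categories[1:]:
    best = {q: max(((score + logp[(p, q)], path + [q])
                    for p, (score, path) in best.items()),
                   key=lambda t: t[0])
            for q in nxt}

score, parts = max(best.values(), key=lambda t: t[0])
print(parts, score)   # ['pB', 'qB', 'rB'] with log-probability -0.5
```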
Enabling fast charging – A battery technology gap assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed, Shabbir; Bloom, Ira; Jansen, Andrew N.
The battery technology literature is reviewed, with an emphasis on key elements that limit extreme fast charging. Key gaps in existing elements of the technology are presented, as well as developmental needs. Among these needs are advanced models and methods to detect and prevent lithium plating; new positive-electrode materials that are less prone to stress-induced failure; better electrode designs to accommodate very rapid diffusion in and out of the electrode; methods to measure temperature distributions during fast charge to enable and validate models; and thermal management and pack designs to accommodate the higher operating voltage.
ERIC Educational Resources Information Center
Manik, Sadhana
2009-01-01
Globalisation has allowed people with scarce skills to cross national borders with ease. Given their specific skills base, professionals are prone to trans-national migration. The trend is for professionals from developing countries, such as South Africa, to fill gaps in the labour market in developed countries such as the United Kingdom. The…
Description of a Simple Method of Stoma Protection During Prone Positioning.
Mackert, Gina A; Reid, Christopher M; Dobke, Marek K; Tenenhaus, Mayer
2016-06-01
Surgeries conducted with the patient in the prone position are frequent and can be lengthy. Abdominal stomas and suprapubic catheters require protection for the complete duration of the procedure to avoid complications such as stomal ischemia, bleeding, or mucocutaneous separation. Standard protection strategies such as pillows and wedges can easily fail. In the course of managing several patients who had sustained ostomy complications following surgery in a prone position, a simple method of stoma protection was devised. Instead of discarding the foam headrest typically used during induction by anesthesia staff, this device is placed with its central recess over the stoma and secured to the patient's abdominal wall with gentle tape just before turning the patient into a prone position. This method, used in more than 80 patients, has been found to effectively relieve pressure, and no complications have been observed. The foam shape also enables unobstructed drainage of fluids, facilitating collection and preventing leakage and contamination of the surgical field. Because the device is widely used by anesthesia, it is readily available and does not add any extra cost.
webpic: A flexible web application for collecting distance and count measurements from images
2018-01-01
Despite increasing ability to store and analyze large amounts of data for organismal and ecological studies, the process of collecting distance and count measurements from images has largely remained time consuming and error-prone, particularly for tasks for which automation is difficult or impossible. Improving the efficiency of these tasks, which allows for more high quality data to be collected in a shorter amount of time, is therefore a high priority. The open-source web application, webpic, implements common web languages and widely available libraries and productivity apps to streamline the process of collecting distance and count measurements from images. In this paper, I introduce the framework of webpic and demonstrate one readily available feature of this application, linear measurements, using fossil leaf specimens. This application fills the gap between workflows accomplishable by individuals through existing software and those accomplishable by large, unmoderated crowds. It demonstrates that flexible web languages can be used to streamline time-intensive research tasks without the use of specialized equipment or proprietary software and highlights the potential for web resources to facilitate data collection in research tasks and outreach activities with improved efficiency. PMID:29608592
vhMentor: An Ontology Supported Mobile Agent System for Pervasive Health Care Monitoring.
Christopoulou, Stella C; Kotsilieris, Theodore; Anagnostopoulos, Ioannis; Anagnostopoulos, Christos-Nikolaos; Mylonas, Phivos
2017-01-01
Healthcare provision is a set of activities that demands the collaboration of several stakeholders (e.g. physicians, nurses, managers, patients) who hold distinct expertise and responsibilities. In addition, medical knowledge is diversely located and often shared under no central coordination and supervision authority, while medical data flows remain mostly passive with respect to the way data are delivered to both clinicians and patients. In this paper, we propose the implementation of a virtual health Mentor (vhMentor), which consists of a dedicated ontology schema and a FIPA-compliant agent system. Agent technology proves ideal for developing healthcare applications due to its distributed operation over systems and data sources of high heterogeneity. Agents are able to perform their tasks by acting proactively to assist individuals in overcoming limitations posed when accessing medical data and executing non-automatic, error-prone processes. vhMentor further comprises the Jess rules engine in order to implement reasoning logic. Thus, on the one hand vhMentor is a prototype that fills the gap between healthcare systems and the care provision community, while on the other hand it allows the blending of next-generation distributed services in the healthcare domain.
Effect of electrical coupling on ionic current and synaptic potential measurements.
Rabbah, Pascale; Golowasch, Jorge; Nadim, Farzan
2005-07-01
Recent studies have found electrical coupling to be more ubiquitous than previously thought, and coupling through gap junctions is known to play a crucial role in neuronal function and network output. In particular, current spread through gap junctions may affect the activation of voltage-dependent conductances as well as chemical synaptic release. Using voltage-clamp recordings of two strongly electrically coupled neurons of the lobster stomatogastric ganglion and conductance-based models of these neurons, we identified effects of electrical coupling on the measurement of leak and voltage-gated outward currents, as well as synaptic potentials. Experimental measurements showed that both leak and voltage-gated outward currents are recruited by gap junctions from neurons coupled to the clamped cell. Nevertheless, in spite of the strong coupling between these neurons, the errors made in estimating voltage-gated conductance parameters were relatively minor (<10%). Thus in many cases isolation of coupled neurons may not be required if a small degree of measurement error of the voltage-gated currents or the synaptic potentials is acceptable. Modeling results show, however, that such errors may be as high as 20% if the gap-junction position is near the recording site or as high as 90% when measuring smaller voltage-gated ionic currents. Paradoxically, improved space clamp increases the errors arising from electrical coupling because voltage control across gap junctions is poor for even the highest realistic coupling conductances. Furthermore, the common procedure of leak subtraction can add an extra error to the conductance measurement, the sign of which depends on the maximal conductance.
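The recruitment of a neighbor's leak through a gap junction can be illustrated with a steady-state two-cell resistive circuit: the clamp sees the clamped cell's own leak plus the coupling conductance in series with the unclamped neighbor's leak. A minimal sketch with illustrative conductances, not the authors' conductance-based model:

```python
def measured_leak(g_leak1, g_leak2, g_c):
    """Steady-state input conductance seen by a voltage clamp on cell 1
    when cell 2 (unclamped) is attached through gap-junction conductance
    g_c: the coupling path is g_c in series with cell 2's leak."""
    return g_leak1 + (g_c * g_leak2) / (g_c + g_leak2)

g1, g2 = 20.0, 20.0                 # nS, leak conductances (illustrative)
for g_c in (1.0, 10.0, 100.0):      # weak to strong coupling
    g_hat = measured_leak(g1, g2, g_c)
    print(f"g_c={g_c:5.1f} nS -> leak overestimate {100*(g_hat-g1)/g1:.1f}%")
```

In this simple circuit the overestimate grows with coupling strength, since a stronger junction passes more of the neighbor's current into the clamped cell.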
Shiozaki, Maki; Miyai, Nobuyuki; Morioka, Ikuharu; Utsumi, Miyoko; Hattori, Sonomi; Koike, Hiroaki; Arita, Mikio; Miyashita, Kazuhisa
2017-01-01
This study examined the association between job-related behavioral characteristics and the risk of coronary heart disease (CHD) in Japanese male police officers. Compared with office clerks, police officers exhibited greater age-related increases in the prevalence of CHD risk factors, and the number of clustered CHD risk factors was significantly higher among those over 45 years of age. Among the police officers, coronary-prone behavior was more frequent than in office clerks. The police officers with coronary-prone behavior tended to engage in shift work and to work more overtime; yet they were less likely to perceive job stress and to report the associated physical and psychological symptoms than those without coronary-prone behavior. Subjects with such behavioral characteristics had a significantly greater number of CHD risk factors. In a multiple regression analysis, coronary-prone behavior, together with age, social support, walking hours per day, and alcohol consumption, was selected as a significant determinant of a cluster of CHD risk factors. These results suggest that coronary-prone behavior may contribute to the higher prevalence of CHD risk factors in police officers by leading to long working hours and unfavorable work-related lifestyles, such as alcohol drinking and physical inactivity. PMID:28428501
Wetherbee, G.A.; Latysh, N.E.; Gordon, J.D.
2005-01-01
Data from the U.S. Geological Survey (USGS) collocated-sampler program for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) are used to estimate the overall error of NADP/NTN measurements. Absolute errors are estimated by comparison of paired measurements from collocated instruments. Spatial and temporal differences in absolute error were identified and are consistent with longitudinal distributions of NADP/NTN measurements and spatial differences in precipitation characteristics. The magnitude of error for calcium, magnesium, ammonium, nitrate, and sulfate concentrations, specific conductance, and sample volume is of minor environmental significance to data users. Data collected after a 1994 sample-handling protocol change show smaller absolute error than data collected prior to 1994. Absolute errors are smaller during non-winter months than during winter months for selected constituents at sites where frozen precipitation is common. Minimum resolvable differences are estimated for different regions of the USA to aid spatial and temporal watershed analyses.
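A simple, robust summary of absolute error from paired collocated measurements is the median absolute difference. A minimal sketch in that spirit follows; the values are illustrative, and this is not the USGS processing code:

```python
import numpy as np

def absolute_error(primary, collocated):
    """Median absolute difference between paired collocated measurements,
    a robust estimate of overall measurement error."""
    d = np.abs(np.asarray(primary) - np.asarray(collocated))
    return np.median(d)

# Illustrative weekly sulfate concentrations (mg/L) from paired samplers.
primary    = [0.81, 1.22, 0.45, 2.10, 0.95]
collocated = [0.78, 1.30, 0.47, 2.02, 1.01]
print(round(absolute_error(primary, collocated), 3))  # 0.06 mg/L
```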
The importance of robust error control in data compression applications
NASA Technical Reports Server (NTRS)
Woolley, S. I.
1993-01-01
Data compression has become an increasingly popular option as advances in information technology have placed further demands on data storage capabilities. With compression ratios as high as 100:1, the benefits are clear; however, the inherent intolerance of many compression formats to error events should be given careful consideration. If we consider that efficiently compressed data will ideally contain no redundancy, then the introduction of a channel error must result in a change of understanding from that of the original source. While the prefix property of codes such as Huffman enables resynchronisation, this is not sufficient to arrest propagating errors in an adaptive environment. Arithmetic, Lempel-Ziv, discrete cosine transform (DCT) and fractal methods are similarly prone to error-propagating behaviour. It is, therefore, essential that compression implementations provide sufficiently robust error control in order to maintain data integrity. Ideally, this control should be derived from a full understanding of the prevailing error mechanisms and their interaction with both the system configuration and the compression schemes in use.
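The fragility described here is easy to demonstrate on any DEFLATE stream, which combines LZ77 and Huffman coding: a single flipped bit typically corrupts everything downstream or fails the integrity check outright. A minimal sketch using Python's zlib:

```python
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 20
packed = bytearray(zlib.compress(data))
packed[len(packed) // 2] ^= 0x01        # inject a single-bit channel error

try:
    out = zlib.decompress(bytes(packed))
    print("decoded, but corrupted:", out != data)
except zlib.error as exc:
    print("decode failed outright:", exc)
```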
Self-Interaction Error in Density Functional Theory: An Appraisal.
Bao, Junwei Lucas; Gagliardi, Laura; Truhlar, Donald G
2018-05-03
Self-interaction error (SIE) is considered to be one of the major sources of error in most approximate exchange-correlation functionals for Kohn-Sham density-functional theory (KS-DFT), and it is large with all local exchange-correlation functionals and with some hybrid functionals. In this work, we consider systems conventionally considered to be dominated by SIE. For these systems, we demonstrate that by using multiconfiguration pair-density functional theory (MC-PDFT), the error of a translated local density-functional approximation is significantly reduced (by a factor of 3) when using an MCSCF density and on-top density, as compared to using KS-DFT with the parent functional; the error in MC-PDFT with local on-top functionals is even lower than the error in some popular KS-DFT hybrid functionals. Density-functional theory, either in MC-PDFT form with local on-top functionals or in KS-DFT form with some functionals having 50% or more nonlocal exchange, has smaller errors for SIE-prone systems than does CASSCF, which has no SIE.
The Diagnosis of Error in Histories of Science
NASA Astrophysics Data System (ADS)
Thomas, William
Whether and how to diagnose error in the history of science is a contentious issue. For many scientists, diagnosis is appealing because it allows them to discuss how knowledge can progress most effectively. Many historians disagree. They consider diagnosis inappropriate because it may discard features of past actors' thought that are important to understanding it, and may have even been intellectually productive. Ironically, these historians are apt to diagnose flaws in scientists' histories as proceeding from a misguided desire to idealize scientific method, and from their attendant identification of deviations from the ideal as, ipso facto, a paramount source of error in historical science. While both views have some merit, they should be reconciled if a more harmonious and productive relationship between the disciplines is to prevail. In To Explain the World, Steven Weinberg narrates the slow but definite emergence of what we call science from long traditions of philosophical and mathematical thought. This narrative follows in a historiographical tradition charted by historians such as Alexandre Koyre and Rupert Hall about sixty years ago. It is essentially a history of the emergence of reliable (if fallible) scientific method from more error-prone thought. While some historians such as Steven Shapin view narratives of this type as fundamentally error-prone, I do not view such projects as a priori illegitimate. They are, however, perhaps more difficult than Weinberg supposes. In this presentation, I will focus on two of Weinberg's strong historical claims: that physics became detached from religion as early as the beginning of the eighteenth century, and that physics proved an effective model for placing other fields on scientific grounds. While I disagree with these claims, they represent at most an overestimation of vintage science's interest in discarding theological questions, and an overestimation of that science's ability to function at all reliably.
Kozmin, Stanislav G.; Jinks-Robertson, Sue
2013-01-01
Following the irradiation of nondividing yeast cells with ultraviolet (UV) light, most induced mutations are inherited by both daughter cells, indicating that complementary changes are introduced into both strands of duplex DNA prior to replication. Early analyses demonstrated that such two-strand mutations depend on functional nucleotide excision repair (NER), but the molecular mechanism of this unique type of mutagenesis has not been further explored. In the experiments reported here, an ade2 adeX colony-color system was used to examine the genetic control of UV-induced mutagenesis in nondividing cultures of Saccharomyces cerevisiae. We confirmed a strong suppression of two-strand mutagenesis in NER-deficient backgrounds and demonstrated that neither mismatch repair nor interstrand crosslink repair affects the production of these mutations. By contrast, proteins involved in the error-prone bypass of DNA damage (Rev3, Rev1, PCNA, Rad18, Pol32, and Rad5) and in the early steps of the DNA-damage checkpoint response (Rad17, Mec3, Ddc1, Mec1, and Rad9) were required for the production of two-strand mutations. There was no involvement, however, for the Pol η translesion synthesis DNA polymerase, the Mms2-Ubc13 postreplication repair complex, downstream DNA-damage checkpoint factors (Rad53, Chk1, and Dun1), or the Exo1 exonuclease. Our data support models in which UV-induced mutagenesis in nondividing cells occurs during the Pol ζ-dependent filling of lesion-containing, NER-generated gaps. The requirement for specific DNA-damage checkpoint proteins suggests roles in recruiting and/or activating factors required to fill such gaps. PMID:23307894
Farseer-NMR: automatic treatment, analysis and plotting of large, multi-variable NMR data.
Teixeira, João M C; Skinner, Simon P; Arbesú, Miguel; Breeze, Alexander L; Pons, Miquel
2018-05-11
We present Farseer-NMR ( https://git.io/vAueU ), a software package to treat, evaluate and combine NMR spectroscopic data from sets of protein-derived peaklists covering a range of experimental conditions. The combined advances in NMR and molecular biology enable the study of complex biomolecular systems such as flexible proteins or large multibody complexes, which display a strong and functionally relevant response to their environmental conditions, e.g. the presence of ligands, site-directed mutations, post translational modifications, molecular crowders or the chemical composition of the solution. These advances have created a growing need to analyse those systems' responses to multiple variables. The combined analysis of NMR peaklists from large and multivariable datasets has become a new bottleneck in the NMR analysis pipeline, whereby information-rich NMR-derived parameters have to be manually generated, which can be tedious, repetitive and prone to human error, or even unfeasible for very large datasets. There is a persistent gap in the development and distribution of software focused on peaklist treatment, analysis and representation, and specifically able to handle large multivariable datasets, which are becoming more commonplace. In this regard, Farseer-NMR aims to close this longstanding gap in the automated NMR user pipeline and, altogether, reduce the time burden of analysis of large sets of peaklists from days/weeks to seconds/minutes. We have implemented some of the most common, as well as new, routines for calculation of NMR parameters and several publication-quality plotting templates to improve NMR data representation. Farseer-NMR has been written entirely in Python and its modular code base enables facile extension.
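One of the routine NMR-derived parameters that such a pipeline automates is the combined 1H/15N chemical shift perturbation per residue across a titration series. A minimal sketch of that calculation, using the customary nitrogen down-weighting of about 0.14; this is the generic formula, not Farseer-NMR's code, and the residue shifts are illustrative:

```python
import math

def csp(d_h: float, d_n: float, alpha: float = 0.14) -> float:
    """Combined 1H/15N chemical shift perturbation (ppm) for one residue,
    with the usual down-weighting of the wider 15N dimension."""
    return math.sqrt(d_h**2 + (alpha * d_n)**2)

# Illustrative shifts between apo and ligand-bound peaklists (ppm).
for res, dh, dn in [("G12", 0.021, 0.35), ("K45", 0.110, 1.20)]:
    print(res, round(csp(dh, dn), 3))
```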
Visual programming for next-generation sequencing data analytics.
Milicchio, Franco; Rose, Rebecca; Bian, Jiang; Min, Jae; Prosperi, Mattia
2016-01-01
High-throughput or next-generation sequencing (NGS) technologies have become an established and affordable experimental framework in biological and medical sciences for all basic and translational research. Processing and analyzing NGS data is challenging. NGS data are big, heterogeneous, sparse, and error prone. Although a plethora of tools for NGS data analysis has emerged in the past decade, (i) software development is still lagging behind data generation capabilities, and (ii) there is a 'cultural' gap between the end user and the developer. Generic software template libraries specifically developed for NGS can help in dealing with the former problem, whilst coupling template libraries with visual programming may help with the latter. Here we scrutinize the state-of-the-art low-level software libraries implemented specifically for NGS and graphical tools for NGS analytics. An ideal developing environment for NGS should be modular (with a native library interface), scalable in computational methods (i.e. serial, multithread, distributed), transparent (platform-independent), interoperable (with external software interface), and usable (via an intuitive graphical user interface). These characteristics should facilitate both the run of standardized NGS pipelines and the development of new workflows based on technological advancements or users' needs. We discuss in detail the potential of a computational framework blending generic template programming and visual programming that addresses all of the current limitations. In the long term, a proper, well-developed (although not necessarily unique) software framework will bridge the current gap between data generation and hypothesis testing. This will eventually facilitate the development of novel diagnostic tools embedded in routine healthcare.
Surface driven biomechanical breast image registration
NASA Astrophysics Data System (ADS)
Eiben, Björn; Vavourakis, Vasileios; Hipwell, John H.; Kabus, Sven; Lorenz, Cristian; Buelow, Thomas; Williams, Norman R.; Keshtgar, M.; Hawkes, David J.
2016-03-01
Biomechanical modelling enables large deformation simulations of breast tissues under different loading conditions to be performed. Such simulations can be utilised to transform prone Magnetic Resonance (MR) images into a different patient position, such as upright or supine. We present a novel integration of biomechanical modelling with a surface registration algorithm which optimises the unknown material parameters of a biomechanical model and performs a subsequent regularised surface alignment. This allows deformations induced by effects other than gravity, such as those due to contact of the breast and MR coil, to be reversed. Correction displacements are applied to the biomechanical model enabling transformation of the original pre-surgical images to the corresponding target position. The algorithm is evaluated for the prone-to-supine case using prone MR images and the skin outline of supine Computed Tomography (CT) scans for three patients. A mean target registration error (TRE) of 10.9 mm for internal structures is achieved. For the prone-to-upright scenario, an optical 3D surface scan of one patient is used as a registration target and the nipple distances after alignment between the transformed MRI and the surface are 10.1 mm and 6.3 mm respectively.
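Target registration error is simply the mean Euclidean distance between corresponding landmarks after the transform has been applied. A minimal sketch, with illustrative landmark coordinates:

```python
import numpy as np

def tre(moving_pts, target_pts):
    """Mean target registration error: mean Euclidean distance between
    corresponding landmarks after registration."""
    moving_pts, target_pts = np.asarray(moving_pts), np.asarray(target_pts)
    return np.linalg.norm(moving_pts - target_pts, axis=1).mean()

# Illustrative internal landmarks (mm) in transformed-MR vs CT space.
mr = [[10.0, 22.0, 5.0], [40.0, 18.0, 9.0]]
ct = [[12.0, 20.0, 5.5], [31.0, 14.0, 7.0]]
print(tre(mr, ct))  # mean landmark distance in mm
```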
Park, Se-yeon; Yoo, Won-gyu
2013-10-01
The aim of this study was to compare muscular activation during five different normalization techniques that induced maximal isometric contraction of the latissimus dorsi. Sixteen healthy men participated in the study. Each participant performed three repetitions each of five types of isometric exertion: (1) conventional shoulder extension in the prone position, (2) caudal shoulder depression in the prone position, (3) body lifting with shoulder depression in the seated position, (4) trunk bending to the right in the lateral decubitus position, and (5) downward bar pulling in the seated position. In most participants, maximal activation of the latissimus dorsi was observed during conventional shoulder extension in the prone position; the percentage of maximal voluntary contraction was significantly greater for this exercise than for all other normalization techniques except downward bar pulling in the seated position. Although differences in electrode placement among various electromyographic studies represent a limitation, normalization techniques for the latissimus dorsi are recommended to minimize error in assessing maximal muscular activation of the latissimus dorsi through the combined use of shoulder extension in the prone position and downward pulling. Copyright © 2013 Elsevier Ltd. All rights reserved.
Van de Vreede, Melita; McGrath, Anne; de Clifford, Jan
2018-05-14
Objective. The aim of the present study was to identify and quantify medication errors reportedly related to electronic medication management systems (eMMS) and those considered likely to occur more frequently with eMMS. This included developing a new classification system relevant to eMMS errors. Methods. Eight Victorian hospitals with eMMS participated in a retrospective audit of reported medication incidents from their incident reporting databases between May and July 2014. Site-appointed project officers submitted deidentified incidents they deemed new or likely to occur more frequently due to eMMS, together with the Incident Severity Rating (ISR). The authors reviewed and classified incidents. Results. There were 5826 medication-related incidents reported. In total, 93 (47 prescribing errors, 46 administration errors) were identified as new or potentially related to eMMS. Only one ISR2 (moderate) and no ISR1 (severe or death) errors were reported, so harm to patients in this 3-month period was minimal. The most commonly reported error types were 'human factors' and 'unfamiliarity or training' (70%) and 'cross-encounter or hybrid system errors' (22%). Conclusions. Although the results suggest that the errors reported were of low severity, organisations must remain vigilant to the risk of new errors and avoid the assumption that eMMS is the panacea for all medication error issues. What is known about the topic? eMMS have been shown to reduce some types of medication errors, but some new medication errors have been identified and some are likely to occur more frequently with eMMS. Few published Australian studies have reported on the medication error types likely to occur more frequently with eMMS across more than one organisation, covering both administration and prescribing errors. What does this paper add? This paper proposes a new, simple classification system for eMMS medication errors, outlines the most commonly reported incident types, and can inform organisations and vendors on possible eMMS improvements. What are the implications for practitioners? The results highlight the need for ongoing review of system design, refinement of workflow issues, staff education and training, and reporting and monitoring of errors.
ERIC Educational Resources Information Center
Stokes, Stephanie F.; Lau, Jessica Tse-Kay; Ciocca, Valter
2002-01-01
This study examined the interaction of ambient frequency and feature complexity in the diphthong errors produced by 13 Cantonese-speaking children with phonological disorders. Perceptual analysis of 611 diphthongs identified those most frequently and least frequently in error. Suggested treatment guidelines include consideration of three factors:…
Validity of the two-level model for Viterbi decoder gap-cycle performance
NASA Technical Reports Server (NTRS)
Dolinar, S.; Arnold, S.
1990-01-01
A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.
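Under the model, the per-bit error rate over one gap cycle is a square wave whose degraded segment is roughly K-1 bits shorter than the SNR gap itself. A minimal sketch that constructs this predicted profile; the cycle length, gap length, and error levels are illustrative placeholders, not VLA parameters:

```python
def two_level_profile(cycle_len, gap_len, K, p_gap, p_clear):
    """Predicted per-bit error rate over one gap cycle: a square wave whose
    degraded segment is about K-1 bits shorter than the SNR gap."""
    degraded = max(gap_len - (K - 1), 0)
    return [p_gap] * degraded + [p_clear] * (cycle_len - degraded)

# Illustrative: 100-bit cycle, 30-bit gap, constraint length K = 7.
profile = two_level_profile(100, 30, 7, p_gap=1e-2, p_clear=1e-5)
print(profile.count(1e-2), "degraded bits per cycle")  # 24
```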
Human Research Program Space Human Factors Engineering (SHFE) Standing Review Panel (SRP)
NASA Technical Reports Server (NTRS)
Wichansky, Anna; Badler, Norman; Butler, Keith; Cummings, Mary; DeLucia, Patricia; Endsley, Mica; Scholtz, Jean
2009-01-01
The Space Human Factors Engineering (SHFE) Standing Review Panel (SRP) evaluated 22 gaps and 39 tasks in the three risk areas assigned to the SHFE Project. The area where tasks were best designed to close the gaps and the fewest gaps were left out was the Risk of Reduced Safety and Efficiency due to Inadequate Design of Vehicle, Environment, Tools or Equipment. The areas where there were more issues with gaps and tasks, including poor or inadequate fit of tasks to gaps and missing gaps, were Risk of Errors due to Poor Task Design and Risk of Error due to Inadequate Information. One risk, the Risk of Errors due to Inappropriate Levels of Trust in Automation, should be added. If astronauts trust automation too much in areas where it should not be trusted, but rather tempered with human judgment and decision making, they will incur errors. Conversely, if they do not trust automation when it should be trusted, as in cases where it can sense aspects of the environment such as radiation levels or distances in space, they will also incur errors. This will be a larger risk when astronauts are less able to rely on human mission control experts and are out of touch, far away, and on their own. The SRP also identified 11 new gaps and five new tasks. Although the SRP had an extremely large quantity of reading material prior to and during the meeting, we still did not feel we had an overview of the activities and tasks the astronauts would be performing in exploration missions. Without a detailed task analysis and taxonomy of activities the humans would be engaged in, we felt it was impossible to know whether the gaps and tasks were really sufficient to ensure human safety, performance, and comfort in the exploration missions. The SRP had difficulty evaluating many of the gaps and tasks that were not as quantitative as those related to concrete physical danger such as excessive noise and vibration. Often the research tasks for cognitive risks that accompany poor task or information design addressed only part, but not all, of the gaps they were programmed to fill. In fact the tasks outlined will not close the gap but only scratch the surface in many cases. In other cases, the gap was written too broadly, and really should be restated in a more constrained way that can be addressed by a well-organized and complementary set of tasks. In many cases, the research results should be turned into guidelines for design. However, it was not clear whether the researchers or another group would construct and deliver these guidelines.
Generalized Structured Component Analysis with Uniqueness Terms for Accommodating Measurement Error
Hwang, Heungsun; Takane, Yoshio; Jung, Kwanghee
2017-01-01
Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling (SEM), where latent variables are approximated by weighted composites of indicators. It has no formal mechanism to incorporate errors in indicators, which in turn renders components prone to the errors as well. We propose to extend GSCA to account for errors in indicators explicitly. This extension, called GSCAM, considers both common and unique parts of indicators, as postulated in common factor analysis, and estimates a weighted composite of indicators with their unique parts removed. Adding such unique parts or uniqueness terms serves to account for measurement errors in indicators in a manner similar to common factor analysis. Simulation studies are conducted to compare parameter recovery of GSCAM and existing methods. These methods are also applied to fit a substantively well-established model to real data. PMID:29270146
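The motivation can be seen numerically: when indicators carry unique variance, a plain weighted composite cannot correlate perfectly with the latent variable, so errors in indicators propagate into the component. A minimal simulation of this attenuation follows; the numbers are illustrative, and this shows the problem GSCAM addresses rather than the GSCAM estimator itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5000, 4
latent = rng.normal(size=n)
loadings = np.array([0.9, 0.8, 0.7, 0.6])
# Each indicator = common part + unique part (measurement error).
X = np.outer(latent, loadings) + rng.normal(size=(n, k)) * 0.6

composite = X.mean(axis=1)          # simple equal-weight composite
r = np.corrcoef(composite, latent)[0, 1]
print(f"composite-latent correlation: {r:.3f}")  # < 1: errors propagate
```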
Path-following in model predictive rollover prevention using front steering and braking
NASA Astrophysics Data System (ADS)
Ghazali, Mohammad; Durali, Mohammad; Salarieh, Hassan
2017-01-01
In this paper, vehicle path-following in the presence of rollover risk is investigated. Vehicles with a high centre of mass are prone to roll instability. Untripped rollover risk increases in high centre-of-gravity vehicles and under high-friction road conditions. Previous research has introduced strategies to handle the short-duration rollover condition; in these studies, however, trajectory tracking is affected and not thoroughly investigated. This paper puts stress on the tracking error arising from rollover prevention. A lower-level model predictive front-steering controller is adopted to deal with rollover and tracking error as a priority sequence. A brake control is included in the lower-level controller, which directly obeys an upper-level controller (ULC) command. The ULC manages vehicle speed with regard primarily to tracking error. Simulation results show that the proposed control framework maintains roll stability while tracking error is confined to a predefined error limit.
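The abstract does not detail the MPC formulation, but a common rollover indicator behind such supervisory logic is the static load transfer ratio, which approaches 1 as one wheel side unloads. A minimal sketch with illustrative vehicle parameters; this is a generic indicator, not the paper's controller:

```python
G = 9.81  # gravitational acceleration, m/s^2

def load_transfer_ratio(a_y: float, h_cg: float, track: float) -> float:
    """Static load transfer ratio: |LTR| -> 1 means one wheel side unloads."""
    return 2.0 * a_y * h_cg / (G * track)

def rollover_risk(a_y, h_cg=1.5, track=1.8, threshold=0.8):
    """Flag when LTR exceeds a safety threshold, as a trigger for a
    braking/steering intervention layer (threshold illustrative)."""
    return abs(load_transfer_ratio(a_y, h_cg, track)) >= threshold

print(load_transfer_ratio(a_y=4.0, h_cg=1.5, track=1.8))  # ≈ 0.68
print(rollover_risk(a_y=5.5))                              # True
```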
Classification-Based Spatial Error Concealment for Visual Communications
NASA Astrophysics Data System (ADS)
Chen, Meng; Zheng, Yefeng; Wu, Min
2006-12-01
In an error-prone transmission environment, error concealment is an effective technique to reconstruct the damaged visual content. Due to large variations in image characteristics, different concealment approaches are necessary to accommodate the different nature of the lost image content. In this paper, we address this issue and propose using classification to integrate the state-of-the-art error concealment techniques. The proposed approach takes advantage of multiple concealment algorithms and adaptively selects the suitable algorithm for each damaged image area. With growing awareness that the design of sender and receiver systems should be jointly considered for efficient and reliable multimedia communications, we propose a set of classification-based block concealment schemes, including receiver-side classification, sender-side attachment, and sender-side embedding. Our experimental results provide extensive performance comparisons and demonstrate that the proposed classification-based error concealment approaches outperform the conventional approaches.
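Among the algorithms such a classifier can choose between, the simplest is smooth spatial interpolation, suited to low-activity regions. A minimal bilinear sketch for one lost block; this is illustrative, not the paper's implementation:

```python
import numpy as np

def conceal_block(img, r0, c0, size):
    """Conceal a lost size x size block by bilinear interpolation between
    the intact pixel rows/columns bordering it (a smooth-region method)."""
    top    = img[r0 - 1, c0:c0 + size].astype(float)
    bottom = img[r0 + size, c0:c0 + size].astype(float)
    left   = img[r0:r0 + size, c0 - 1].astype(float)
    right  = img[r0:r0 + size, c0 + size].astype(float)
    for i in range(size):
        wv = (i + 1) / (size + 1)              # vertical weight
        for j in range(size):
            wh = (j + 1) / (size + 1)          # horizontal weight
            v = (1 - wv) * top[j] + wv * bottom[j]
            h = (1 - wh) * left[i] + wh * right[i]
            img[r0 + i, c0 + j] = 0.5 * (v + h)
    return img

img = np.tile(np.linspace(0, 255, 16), (16, 1))   # smooth test image
img[4:8, 4:8] = 0                                 # simulate a lost block
conceal_block(img, 4, 4, 4)
```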
A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques
NASA Technical Reports Server (NTRS)
Beckman, B.
1985-01-01
The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVR's operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.
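The abstract leaves the assumed functional form unspecified; one simple choice is an affine systematic error (scale plus offset) estimated by least squares from two side-by-side WVRs. A minimal sketch with illustrative wet-delay values, not JPL's actual treatment:

```python
import numpy as np

# Simultaneous wet-delay estimates (cm) from two side-by-side WVRs.
wvr_a = np.array([8.2, 9.1, 10.4, 12.0, 13.3])
wvr_b = np.array([8.9, 9.9, 11.4, 13.1, 14.6])

# Assume an affine systematic error: wvr_b ≈ scale * wvr_a + offset.
A = np.vstack([wvr_a, np.ones_like(wvr_a)]).T
(scale, offset), *_ = np.linalg.lstsq(A, wvr_b, rcond=None)
print(f"scale={scale:.3f}, offset={offset:.2f} cm")

corrected = (wvr_b - offset) / scale   # remove the estimated systematic part
print("residual rms:", np.sqrt(np.mean((corrected - wvr_a) ** 2)), "cm")
```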
USDA-ARS's Scientific Manuscript database
Food preparation skills may encourage healthy eating. Traditional assessment of child food preparation employs self- or parent proxy-reporting methods, which are prone to error. The eButton is a wearable all-day camera that has promise as an objective, passive method for measuring child food prepara...
ERIC Educational Resources Information Center
Farri, Oladimeji Feyisetan
2012-01-01
Large quantities of redundant clinical data are usually transferred from one clinical document to another, making the review of such documents cognitively burdensome and potentially error-prone. Inadequate designs of electronic health record (EHR) clinical document user interfaces probably contribute to the difficulties clinicians experience while…
USDA-ARS's Scientific Manuscript database
Spatially-resolved spectroscopy provides a means for measuring the optical properties of biological tissues, based on analytical solutions to diffusion approximation for semi-infinite media under the normal illumination of infinitely small size light beam. The method is, however, prone to error in m...
ATS-PD: An Adaptive Testing System for Psychological Disorders
ERIC Educational Resources Information Center
Donadello, Ivan; Spoto, Andrea; Sambo, Francesco; Badaloni, Silvana; Granziol, Umberto; Vidotto, Giulio
2017-01-01
The clinical assessment of mental disorders can be a time-consuming and error-prone procedure, consisting of a sequence of diagnostic hypothesis formulation and testing aimed at restricting the set of plausible diagnoses for the patient. In this article, we propose a novel computerized system for the adaptive testing of psychological disorders.…
Towards New Multiplatform Hybrid Online Laboratory Models
ERIC Educational Resources Information Center
Rodriguez-Gil, Luis; García-Zubia, Javier; Orduña, Pablo; López-de-Ipiña, Diego
2017-01-01
Online laboratories have traditionally been split between virtual labs, with simulated components; and remote labs, with real components. The former tend to provide less realism but to be easily scalable and less expensive to maintain, while the latter are fully real but tend to require a higher maintenance effort and be more error-prone. This…
ERIC Educational Resources Information Center
Ruller, Roberto; Silva-Rocha, Rafael; Silva, Artur; Schneider, Maria Paula Cruz; Ward, Richard John
2011-01-01
Protein engineering is a powerful tool, which correlates protein structure with specific functions, both in applied biotechnology and in basic research. Here, we present a practical teaching course for engineering the green fluorescent protein (GFP) from "Aequorea victoria" by a random mutagenesis strategy using error-prone polymerase…
Accuracy of an IFSAR-derived digital terrain model under a conifer forest canopy.
Hans-Erik Andersen; Stephen E. Reutebuch; Robert J. McGaughey
2005-01-01
Accurate digital terrain models (DTMs) are necessary for a variety of forest resource management applications, including watershed management, timber harvest planning, and fire management. Traditional methods for acquiring topographic data typically rely on aerial photogrammetry, where measurement of the terrain surface below forest canopy is difficult and error prone...
Verhey, Theodore B; Castellanos, Mildred; Chaconas, George
2018-05-29
The Lyme disease spirochete, Borrelia burgdorferi, uses antigenic variation as a strategy to evade the host's acquired immune response. New variants of surface-localized VlsE are generated efficiently by unidirectional recombination from 15 unexpressed vls cassettes into the vlsE locus. Using algorithms to analyze switching from vlsE sequencing data, we characterize a population of over 45,000 inferred recombination events generated during mouse infection. We present evidence for clustering of these recombination events within the population and along the vlsE gene, a role for the direct repeats flanking the variable region in vlsE, and the importance of sequence homology in determining the location of recombination, despite RecA's dispensability. Finally, we report that non-templated sequence variation is strongly associated with recombinational switching and occurs predominantly at the 5' end of conversion tracts. This likely results from an error-prone repair mechanism operational during recombinational switching that elevates the mutation rate > 5,000-fold in switched regions. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
Computationally mapping sequence space to understand evolutionary protein engineering.
Armstrong, Kathryn A; Tidor, Bruce
2008-01-01
Evolutionary protein engineering has been dramatically successful, producing a wide variety of new proteins with altered stability, binding affinity, and enzymatic activity. However, the success of such procedures is often unreliable, and the impact of the choice of protein, engineering goal, and evolutionary procedure is not well understood. We have created a framework for understanding aspects of the protein engineering process by computationally mapping regions of feasible sequence space for three small proteins using structure-based design protocols. We then tested the ability of different evolutionary search strategies to explore these sequence spaces. The results point to a non-intuitive relationship between the error-prone PCR mutation rate and the number of rounds of replication. The evolutionary relationships among feasible sequences reveal hub-like sequences that serve as particularly fruitful starting sequences for evolutionary search. Moreover, genetic recombination procedures were examined, and tradeoffs relating sequence diversity and search efficiency were identified. This framework allows us to consider the impact of protein structure on the allowed sequence space and therefore on the challenges that each protein presents to error-prone PCR and genetic recombination procedures.
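The mutation-rate/rounds tradeoff can be framed as Poisson accumulation: the expected mutational load per clone is the per-base, per-round rate times gene length times the number of rounds. A minimal sketch of this arithmetic with illustrative numbers:

```python
import math

def expected_mutations(mu: float, length: int, rounds: int) -> float:
    """Expected mutations per clone when each error-prone PCR round adds
    mutations at rate mu per base (Poisson accumulation)."""
    return mu * length * rounds

# Illustrative: a 900 bp gene under two rate/rounds strategies.
for mu, rounds in [(4.5e-3, 1), (1.5e-3, 3)]:
    lam = expected_mutations(mu, 900, rounds)
    print(f"mu={mu:.1e}, rounds={rounds}: {lam:.2f} mutations/clone, "
          f"{math.exp(-lam):.1%} of clones unmutated")
```

Note that two strategies with the same mean load need not sample sequence space identically, which hints at why the rate-versus-rounds relationship the authors report is non-trivial.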
Emerging and reemerging epidemic-prone diseases among settling nomadic pastoralists in Uganda.
Cummings, Matthew J; Wamala, Joseph F; Komakech, Innocent; Malimbo, Mugagga; Lukwago, Luswa
2014-09-01
Epidemic-prone diseases have traditionally been uncommon among nomadic pastoralists as mobility allows already dispersed populations to migrate away from epidemic threats. In the Karamoja region of Uganda, nomadic pastoralists are transitioning to an increasingly settled lifestyle due to cattle raiding and associated civil insecurity. In attempts to reduce conflict in the region, the Ugandan government has instituted disarmament campaigns and encouraged sedentism in place of mobility. In Karamoja, this transition to sedentism has contributed to the emergence and reemergence of epidemic-prone diseases such as cholera, hepatitis E, yellow fever, and meningococcal meningitis. The incidence of these diseases remains difficult to measure and several challenges exist to their control. Challenges to communicable disease surveillance and control among settling nomadic pastoralists are related to nomadic mobility, remote geography, vaccination and immunity, and poor sanitation and safe water access. In addition to improving gaps in infrastructure, attracting well-trained government health workers to Karamoja and similar areas with longstanding human resource limitations is critical to address the challenges to epidemic-prone disease surveillance and control among settling nomadic pastoralists. In conjunction with government health workers, community health teams provide a sustainable method by which public health programs can be improved in the austere environments inhabited by mobile and settling pastoralists. Copyright © 2014 Elsevier B.V. All rights reserved.
Planning for robust reserve networks using uncertainty analysis
Moilanen, A.; Runge, M.C.; Elith, Jane; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.
2006-01-01
Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence-absence in sites, or on species-specific distributions of model-predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence-absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer programming and stochastic global search.
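Info-gap robustness asks how large a data error an option can tolerate before its worst-case value drops below a required level. Under a simple fractional-error model it has a closed form, as this minimal sketch shows; the values and the error model are illustrative, not the paper's formulation:

```python
def robustness(v_nominal: float, v_required: float) -> float:
    """Largest horizon of uncertainty alpha (fractional error) such that
    the worst case (1 - alpha) * v_nominal still meets v_required."""
    return max(0.0, 1.0 - v_required / v_nominal)

v_required = 80.0                      # conservation value that must be met
for site, v_nominal in [("A", 100.0), ("B", 120.0)]:
    print(site, round(robustness(v_nominal, v_required), 2))
# A tolerates 20% input error before failing the target; B tolerates 33%.
```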
2010-01-01
Background Computerized ICUs rely on software services to convey the medical condition of their patients as well as assisting the staff in taking treatment decisions. Such services are useful for following clinical guidelines quickly and accurately. However, the development of services is often time-consuming and error-prone. Consequently, many care-related activities are still conducted based on manually constructed guidelines. These are often ambiguous, which leads to unnecessary variations in treatments and costs. The goal of this paper is to present a semi-automatic verification and translation framework capable of turning manually constructed diagrams into ready-to-use programs. This framework combines the strengths of the manual and service-oriented approaches while decreasing their disadvantages. The aim is to close the gap in communication between the IT and the medical domain. This leads to a less time-consuming and error-prone development phase and a shorter clinical evaluation phase. Methods A framework is proposed that semi-automatically translates a clinical guideline, expressed as an XML-based flow chart, into a Drools Rule Flow by employing semantic technologies such as ontologies and SWRL. An overview of the architecture is given and all the technology choices are thoroughly motivated. Finally, it is shown how this framework can be integrated into a service-oriented architecture (SOA). Results The applicability of the Drools Rule language to express clinical guidelines is evaluated by translating an example guideline, namely the sedation protocol used for the anaesthetization of patients, to a Drools Rule Flow and executing and deploying this Rule-based application as a part of a SOA. The results show that the performance of Drools is comparable to other technologies such as Web Services and increases with the number of decision nodes present in the Rule Flow. Most delays are introduced by loading the Rule Flows. Conclusions The framework is an effective solution for computerizing clinical guidelines as it allows for quick development, evaluation and human-readable visualization of the Rules and has a good performance. By monitoring the parameters of the patient to automatically detect exceptional situations and problems and by notifying the medical staff of tasks that need to be performed, the computerized sedation guideline improves the execution of the guideline. PMID:20082700
[Care plan for patients in prone decubitus. An experience from practice].
Oliva Torras, E; Subirana Casacuberta, M; Sebastià, M P; Jover Sancho, C; Solà Solé, N
1995-01-01
Providing comprehensive, individualized care to patients with acute respiratory distress syndrome (ARDS) nursed in the prone position led us to establish an action plan with the aim of identifying the problems arising from the change of position and the time spent prone, and of standardizing a care plan. We reviewed the clinical records of the patients admitted to our unit from March 1993 to March 1995 who were placed in the prone position. Based on V. Henderson's nursing care model and the NANDA taxonomy, we analysed the needs that were altered and determined the nursing diagnoses, complications, and most frequent interdependent problems, establishing the goals to be met and planning the interventions and their rationale. Five patients were placed in the prone position before the plan was drawn up and four more afterwards. All patients tolerated nasogastric tube feeding and maintained normal bowel transit. One patient developed an ulcer on the forehead. There were no corneal ulcers or alterations of the oral mucosa. The vascular accesses remained patent. The prone position caused facial and periorbital edema in all patients. We did not observe any increase in the amount of bronchial secretions. The eight patients who tolerated the change of position remained prone for an average of 77 hours, with a range of 10 to 216 hours. Four patients were discharged from hospital, two of whom showed mobility alterations, independent of the time spent prone. We specify the nursing care and identify five nursing diagnoses, one problem, and seven interdependent complications. Defining the nursing care from experience and record review has allowed us to be more specific and objective. Standardizing specific care plans facilitates nursing care when dealing with actual problems as well as with the complications arising from this situation.
Error-prone bypass of O6-methylguanine by DNA polymerase of Pseudomonas aeruginosa phage PaP1.
Gu, Shiling; Xiong, Jingyuan; Shi, Ying; You, Jia; Zou, Zhenyu; Liu, Xiaoying; Zhang, Huidong
2017-09-01
O6-Methylguanine (O6-MeG) is highly mutagenic, is commonly found in DNA exposed to methylating agents, and generally leads to G:C to A:T mutagenesis. To study DNA replication encountering O6-MeG by the DNA polymerase (gp90) of P. aeruginosa phage PaP1, we analyzed steady-state and pre-steady-state kinetics of nucleotide incorporation opposite O6-MeG by gp90 exo−. O6-MeG partially inhibited full-length extension by gp90 exo−. O6-MeG greatly reduces dNTP incorporation efficiency, resulting in 67-fold preferential error-prone incorporation of dTTP over dCTP. Gp90 exo− extends beyond T:O6-MeG 2-fold more efficiently than C:O6-MeG. Incorporation of dCTP opposite G and incorporation of dCTP or dTTP opposite O6-MeG show fast burst phases. The pre-steady-state incorporation efficiency (kpol/Kd,dNTP) decreases in the order dCTP:G > dTTP:O6-MeG > dCTP:O6-MeG. The presence of O6-MeG in the template does not affect the binding affinity of the polymerase to DNA, but it weakens their binding in the presence of dCTP and Mg2+. Misincorporation of dTTP opposite O6-MeG further weakens the binding affinity of the polymerase to DNA. The preference for dTTP incorporation opposite O6-MeG originates from the fact that dTTP induces a faster conformational-change step and a faster chemical step than dCTP. This study reveals that gp90 bypasses O6-MeG in an error-prone manner and provides further understanding of DNA replication encountering mutagenic alkylation DNA damage for P. aeruginosa phage PaP1. Copyright © 2017 Elsevier B.V. All rights reserved.
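The reported 67-fold dTTP preference is a ratio of steady-state incorporation efficiencies (kcat/Km). A minimal sketch of that arithmetic, with placeholder kinetic constants chosen only to reproduce the ratio; these are not the paper's measured values:

```python
def efficiency(kcat: float, km: float) -> float:
    """Steady-state incorporation efficiency kcat/Km (per uM per s)."""
    return kcat / km

def preference(eff_mispair: float, eff_correct: float) -> float:
    """Fold preference for the mispair over the correct pair."""
    return eff_mispair / eff_correct

# Illustrative constants only: dTTP vs dCTP opposite O6-MeG.
eff_dTTP = efficiency(kcat=0.067, km=10.0)
eff_dCTP = efficiency(kcat=0.010, km=100.0)
print(f"{preference(eff_dTTP, eff_dCTP):.0f}-fold dTTP preference")  # 67
```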
Delviks-Frankenberry, Krista A.; Nikolaitchik, Olga A.; Burdick, Ryan C.; Gorelick, Robert J.; Keele, Brandon F.; Hu, Wei-Shau; Pathak, Vinay K.
2016-01-01
Although the predominant effect of host restriction APOBEC3 proteins on HIV-1 infection is to block viral replication, they might inadvertently increase retroviral genetic variation by inducing G-to-A hypermutation. Numerous studies have disagreed on the contribution of hypermutation to viral genetic diversity and evolution. Confounding factors contributing to the debate include the extent of lethal (stop codon) and sublethal hypermutation induced by different APOBEC3 proteins, the inability to distinguish between G-to-A mutations induced by APOBEC3 proteins and error-prone viral replication, the potential impact of hypermutation on the frequency of retroviral recombination, and the extent to which viral recombination occurs in vivo, which can reassort mutations in hypermutated genomes. Here, we determined the effects of hypermutation on the HIV-1 recombination rate and its contribution to genetic variation through recombination to generate progeny genomes containing portions of hypermutated genomes without lethal mutations. We found that hypermutation did not significantly affect the rate of recombination, and recombination between hypermutated and wild-type genomes only increased the viral mutation rate by 3.9 × 10−5 mutations/bp/replication cycle in heterozygous virions, which is similar to the HIV-1 mutation rate. Since copackaging of hypermutated and wild-type genomes occurs very rarely in vivo, recombination between hypermutated and wild-type genomes does not significantly contribute to the genetic variation of replicating HIV-1. We also analyzed previously reported hypermutated sequences from infected patients and determined that the frequency of sublethal mutagenesis for A3G and A3F is negligible (4 × 10−21 and 1 × 10−11, respectively) and its contribution to viral mutations is far below mutations generated during error-prone reverse transcription. Taken together, we conclude that the contribution of APOBEC3-induced hypermutation to HIV-1 genetic variation is substantially lower than that from mutations during error-prone replication. PMID:27186986
Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay
2016-04-01
Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables that represent clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validate whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model will be fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher information based D-optimality criterion is used, and an algorithm for optimizing it is developed. The authors' method is tested in a simulation comparison that is based on a sudden cardiac arrest case study with 23 041 patients' records. This DSCVR approach, using the Fisher information based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
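The abstract does not spell out the selection algorithm; the D-optimality idea can be sketched as a greedy search that adds the case whose predictor vector most increases the determinant of the logistic-regression Fisher information, computed from a pilot coefficient estimate. A minimal illustration in which all names and numbers are assumed, not the authors' code:

```python
import numpy as np

def greedy_d_optimal(X, beta, n_select):
    """Greedily pick rows of X (predictor vectors) that maximize the
    determinant of the logistic-regression Fisher information,
    sum_i w_i x_i x_i^T with w_i = p_i (1 - p_i)."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    w = p * (1.0 - p)
    info = 1e-6 * np.eye(X.shape[1])   # ridge so early determinants are finite
    chosen = []
    for _ in range(n_select):
        best, best_det = None, -np.inf
        for i in range(len(X)):
            if i in chosen:
                continue
            det = np.linalg.det(info + w[i] * np.outer(X[i], X[i]))
            if det > best_det:
                best, best_det = i, det
        chosen.append(best)
        info += w[best] * np.outer(X[best], X[best])
    return chosen

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
beta_pilot = np.array([-2.5, 0.8, 0.4])    # pilot estimate, low event rate
print(greedy_d_optimal(X, beta_pilot, n_select=10))  # indices to validate
```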
Eyre-Walker, Adam; Stoletzki, Nina
2013-10-01
The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.
Delviks-Frankenberry, Krista A; Nikolaitchik, Olga A; Burdick, Ryan C; Gorelick, Robert J; Keele, Brandon F; Hu, Wei-Shau; Pathak, Vinay K
2016-05-01
Although the predominant effect of host restriction APOBEC3 proteins on HIV-1 infection is to block viral replication, they might inadvertently increase retroviral genetic variation by inducing G-to-A hypermutation. Numerous studies have disagreed on the contribution of hypermutation to viral genetic diversity and evolution. Confounding factors contributing to the debate include the extent of lethal (stop codon) and sublethal hypermutation induced by different APOBEC3 proteins, the inability to distinguish between G-to-A mutations induced by APOBEC3 proteins and error-prone viral replication, the potential impact of hypermutation on the frequency of retroviral recombination, and the extent to which viral recombination occurs in vivo, which can reassort mutations in hypermutated genomes. Here, we determined the effects of hypermutation on the HIV-1 recombination rate and its contribution to genetic variation through recombination to generate progeny genomes containing portions of hypermutated genomes without lethal mutations. We found that hypermutation did not significantly affect the rate of recombination, and recombination between hypermutated and wild-type genomes only increased the viral mutation rate by 3.9 × 10^-5 mutations/bp/replication cycle in heterozygous virions, which is similar to the HIV-1 mutation rate. Since copackaging of hypermutated and wild-type genomes occurs very rarely in vivo, recombination between hypermutated and wild-type genomes does not significantly contribute to the genetic variation of replicating HIV-1. We also analyzed previously reported hypermutated sequences from infected patients and determined that the frequency of sublethal mutagenesis for A3G and A3F is negligible (4 × 10^-21 and 1 × 10^-11, respectively) and its contribution to viral mutations is far below mutations generated during error-prone reverse transcription. Taken together, we conclude that the contribution of APOBEC3-induced hypermutation to HIV-1 genetic variation is substantially lower than that from mutations during error-prone replication.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, A; Foster, J; Chu, W
2015-06-15
Purpose: Many cancer centers treat colorectal patients in the prone position on a belly board to minimize dose to the small bowel. This may potentially result in patient setup instability, with a corresponding impact on dose delivery accuracy for highly conformal techniques such as IMRT/VMAT. The two aims of this work are 1) to investigate setup accuracy of rectum patients treated in the prone position on a belly board using CBCT and 2) to evaluate the dosimetric impact on bladder and small bowel of treating rectum patients in supine vs. prone position. Methods: For the setup accuracy study, 10 patients were selected. Weekly CBCTs were acquired and matched to bone. The CBCT-determined shifts were recorded. For the dosimetric study, 7 prone-setup patients and 7 supine-setup patients were randomly selected from our clinical database. Various clinically relevant dose volume histogram values were recorded for the small bowel and bladder. Results: The CBCT-determined rotational shifts had a wide variation. For the dataset acquired at the time of this writing, the ranges of rotational setup errors for pitch, roll, and yaw were [−3.6°, 4.7°], [−4.3°, 3.2°], and [−1.4°, 1.4°]. For the dosimetric study: the small bowel V(45Gy) and mean dose for the prone position were 5.6±12.1% and 18.4±6.2Gy (± values indicate standard deviations); for the supine position the corresponding values were 12.9±15.8% and 24.7±8.8Gy. For the bladder, the V(30Gy) and mean dose for the prone position were 68.7±12.7% and 38.4±3.3Gy; for the supine position these values were 77.1±13.7% and 40.7±3.1Gy. Conclusion: There is evidence of significant rotational instability in the prone position. The OAR dosimetry study indicates that some patients may still benefit from the prone position, though many patients can be safely treated supine.
Commission errors of active intentions: the roles of aging, cognitive load, and practice.
Boywitt, C Dennis; Rummel, Jan; Meiser, Thorsten
2015-01-01
Performing an intended action when it needs to be withheld (for example, when a temporarily prescribed medication is incompatible with one's other medication) is referred to as a commission error of prospective memory (PM). While recent research indicates that older adults are especially prone to commission errors for finished intentions, there is a lack of research on the effects of aging on commission errors for still active intentions. The present research investigates conditions which might contribute to older adults' propensity to perform planned intentions under inappropriate conditions. Specifically, disproportionally higher rates of commission errors for still active intentions were observed in older than in younger adults with both salient (Experiment 1) and non-salient (Experiment 2) target cues. Practicing the PM task in Experiment 2, however, helped execution of the intended action in terms of higher PM performance at faster ongoing-task response times but did not increase the rate of commission errors. The results have important implications for the understanding of older adults' PM commission errors and the processes involved in these errors.
NASA Technical Reports Server (NTRS)
Long, E. R., Jr.
1986-01-01
Effects of specimen preparation on measured values of an acrylic's electromagnetic properties at X-band microwave frequencies, TE(1,0) mode, utilizing an automatic network analyzer have been studied. For 1 percent or less error, a gap between the specimen edge and the 0.901-in. wall of the specimen holder was the most significant parameter. The gap had to be less than 0.002 in. The thickness variation and alignment errors in the direction parallel to the 0.901-in. wall were equally second most significant and had to be less than 1 degree. Errors in the measurement of the thickness were third most significant. They had to be less than 3 percent. The following parameters caused errors of 1 percent or less: ratios of specimen-holder thicknesses of more than 15 percent, gaps between the specimen edge and the 0.401-in. wall less than 0.045 in., position errors less than 15 percent, surface roughness, thickness variation in the direction parallel to the 0.401-in. wall less than 35 percent, and specimen alignment in the direction parallel to the 0.401-in. wall less than 5 degrees.
Spin Contamination Error in Optimized Geometry of Singlet Carbene (1A1) by Broken-Symmetry Method
NASA Astrophysics Data System (ADS)
Kitagawa, Yasutaka; Saito, Toru; Nakanishi, Yasuyuki; Kataoka, Yusuke; Matsui, Toru; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi
2009-10-01
Spin contamination errors of a broken-symmetry (BS) method in optimized structural parameters of the singlet methylene (1A1) molecule are quantitatively estimated for the Hartree-Fock (HF) method, post-HF methods (CID, CCD, MP2, MP3, MP4(SDQ)), and a hybrid DFT (B3LYP) method. For this purpose, the optimized geometry by the BS method is compared with that of an approximate spin projection (AP) method. The difference between the BS and the AP methods is about 10-20° in the HCH angle. In order to examine the basis set dependency of the spin contamination error, calculated results by STO-3G, 6-31G*, and 6-311++G** are compared. The error depends on the basis sets, but the tendencies of each method are classified into two types. Calculated energy splitting values between the triplet and the singlet states (ST gap) indicate that contamination by the stable triplet state stabilizes the BS singlet solution, so the ST gap becomes small. The order of magnitude of the spin contamination error in the ST gap is estimated to be 10^-1 eV.
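For reference, the AP correction underlying this comparison can be written in the standard Yamaguchi form (a sketch with assumed notation; the study's exact working equations may differ):

```latex
% BS singlet energy corrected by approximate spin projection (AP):
% the BS solution is treated as a mixture of the pure singlet and the
% contaminating triplet, with the mixing weight read off <S^2>.
\begin{align*}
E^{\mathrm{AP}}_{\mathrm{S}} &= E^{\mathrm{BS}}
  + \alpha \left( E^{\mathrm{BS}} - E_{\mathrm{T}} \right), &
\alpha &= \frac{\langle S^2 \rangle_{\mathrm{BS}}}
               {\langle S^2 \rangle_{\mathrm{T}} - \langle S^2 \rangle_{\mathrm{BS}}}.
\end{align*}
```

Because the contaminating triplet lies low for methylene, the uncorrected BS energy is biased downward, which is consistent with the ~10^-1 eV ST-gap error the abstract reports.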
Biases in comparative analyses of extinction risk: mind the gap.
González-Suárez, Manuela; Lucas, Pablo M; Revilla, Eloy
2012-11-01
1. Comparative analyses are used to address the key question of what makes a species more prone to extinction by exploring the links between vulnerability and intrinsic species' traits and/or extrinsic factors. This approach requires comprehensive species data but information is rarely available for all species of interest. As a result comparative analyses often rely on subsets of relatively few species that are assumed to be representative samples of the overall studied group. 2. Our study challenges this assumption and quantifies the taxonomic, spatial, and data type biases associated with the quantity of data available for 5415 mammalian species using the freely available life-history database PanTHERIA. 3. Moreover, we explore how existing biases influence results of comparative analyses of extinction risk by using subsets of data that attempt to correct for detected biases. In particular, we focus on links between four species' traits commonly linked to vulnerability (distribution range area, adult body mass, population density and gestation length) and conduct univariate and multivariate analyses to understand how biases affect model predictions. 4. Our results show important biases in data availability with c.22% of mammals completely lacking data. Missing data, which appear to be not missing at random, occur frequently in all traits (14-99% of cases missing). Data availability is explained by intrinsic traits, with larger mammals occupying bigger range areas being the best studied. Importantly, we find that existing biases affect the results of comparative analyses by overestimating the risk of extinction and changing which traits are identified as important predictors. 5. Our results raise concerns over our ability to draw general conclusions regarding what makes a species more prone to extinction. Missing data represent a prevalent problem in comparative analyses, and unfortunately, because data are not missing at random, conventional approaches to fill data gaps are not valid or present important challenges. These results show the importance of making appropriate inferences from comparative analyses by focusing on the subset of species for which data are available. Ultimately, addressing the data bias problem requires greater investment in data collection and dissemination, as well as the development of methodological approaches to effectively correct existing biases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, C.J.; McVey, B.; Quimby, D.C.
The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
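The seed-to-seed spread the abstract highlights can be illustrated with a generic Monte Carlo scan; `relative_gain` below is a toy stand-in for one 3D simulation, not FELEX physics, and all numbers are invented.

```python
import numpy as np

def relative_gain(gap_error_rms_um, rng):
    """Toy degradation model: random per-period gap errors reduce gain."""
    kicks = rng.normal(0.0, gap_error_rms_um, size=100)   # one error realization
    return float(np.exp(-np.mean(kicks**2) / 500.0))

for level in [0, 5, 10, 25, 50]:                          # rms gap error (microns)
    gains = [relative_gain(level, np.random.default_rng(seed))
             for seed in range(10)]                       # several random seeds
    print(f"{level:3d} um: mean gain {np.mean(gains):.3f},"
          f" seed-to-seed spread {np.ptp(gains):.3f}")
# Plotting mean and spread versus error level reproduces the kind of display
# the abstract describes for separating systematic degradation from
# stochastic variation when setting an error budget.
```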
Skills, rules and knowledge in aircraft maintenance: errors in context
NASA Technical Reports Server (NTRS)
Hobbs, Alan; Williamson, Ann
2002-01-01
Automatic or skill-based behaviour is generally considered to be less prone to error than behaviour directed by conscious control. However, researchers who have applied Rasmussen's skill-rule-knowledge human error framework to accidents and incidents have sometimes found that skill-based errors appear in significant numbers. It is proposed that this is largely a reflection of the opportunities for error which workplaces present and does not indicate that skill-based behaviour is intrinsically unreliable. In the current study, 99 errors reported by 72 aircraft mechanics were examined in the light of a task analysis based on observations of the work of 25 aircraft mechanics. The task analysis identified the opportunities for error presented at various stages of maintenance work packages and by the job as a whole. Once the frequency of each error type was normalized in terms of the opportunities for error, it became apparent that skill-based performance is more reliable than rule-based performance, which is in turn more reliable than knowledge-based performance. The results reinforce the belief that industrial safety interventions designed to reduce errors would best be directed at those aspects of jobs that involve rule- and knowledge-based performance.
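The normalization step is simple enough to show directly; the counts below are invented placeholders, not the study's data.

```python
# Error rate per opportunity, the key move in the analysis: raw counts are
# divided by how often the workplace presents each kind of performance.
errors = {"skill": 30, "rule": 40, "knowledge": 29}              # hypothetical counts
opportunities = {"skill": 6000, "rule": 2500, "knowledge": 700}  # from task analysis

for level in errors:
    rate = errors[level] / opportunities[level]
    print(f"{level:9s} errors per opportunity: {rate:.4f}")
# A level with many raw errors (skill) can still be the most reliable
# once its far greater number of opportunities is taken into account.
```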
Hasse, Katelyn; Neylon, John; Sheng, Ke; Santhanam, Anand P
2016-03-01
Breast elastography is a critical tool for improving the targeted radiotherapy treatment of breast tumors. Current breast radiotherapy imaging protocols only involve prone and supine CT scans. There is a lack of knowledge on the quantitative accuracy with which breast elasticity can be systematically measured using only prone and supine CT datasets. The purpose of this paper is to describe a quantitative elasticity estimation technique for breast anatomy using only these supine/prone patient postures. Using biomechanical, high-resolution breast geometry obtained from CT scans, a systematic assessment was performed in order to determine the feasibility of this methodology for clinically relevant elasticity distributions. A model-guided inverse analysis approach is presented in this paper. A graphics processing unit (GPU)-based linear elastic biomechanical model was employed as a forward model for the inverse analysis with the breast geometry in a prone position. The elasticity estimation was performed using a gradient-based iterative optimization scheme and a fast simulated annealing (FSA) algorithm. Numerical studies were conducted to systematically analyze the feasibility of elasticity estimation. For simulating gravity-induced breast deformation, the breast geometry was anchored at its base, resembling the chest-wall/breast tissue interface. Ground-truth elasticity distributions were assigned to the model, representing tumor presence within breast tissue. Model geometry resolution was varied to estimate its influence on convergence of the system. A priori information was approximated and utilized to record the effect on time and accuracy of convergence. The role of the FSA process was also recorded. A novel error metric that combined elasticity and displacement error was used to quantify the systematic feasibility study. For the authors' purposes, convergence was set to be obtained when each voxel of tissue was within 1 mm of ground-truth deformation. The authors' analyses showed that ∼97% model convergence was systematically observed with no a priori information. Varying the model geometry resolution showed no significant accuracy improvements. The GPU-based forward model enabled the inverse analysis to be completed within 10-70 min. Using a priori information about the underlying anatomy, the computation time decreased by as much as 50%, while accuracy improved from 96.81% to 98.26%. The use of FSA was observed to allow the iterative estimation methodology to converge more precisely. By utilizing a forward iterative approach to solve the inverse elasticity problem, this work indicates the feasibility and potential of the fast reconstruction of breast tissue elasticity using supine/prone patient postures.
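A minimal sketch of the inverse-analysis loop follows, with a toy one-dimensional forward model standing in for the GPU biomechanical solver; `forward_deform` and every number here are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_deform(E):
    """Toy 1-D stand-in for the GPU biomechanical model: gravity-induced
    displacement (mm) of each voxel, inversely proportional to stiffness."""
    return 10.0 / E

E_true = np.where(rng.random(200) < 0.1, 30.0, 3.0)  # stiff 'tumor' voxels in soft tissue
u_obs = forward_deform(E_true)                       # observed prone-position deformation

E = np.full(200, 5.0)                                # initial elasticity guess
T = 1.0                                              # annealing temperature
best_E, best_err = E.copy(), np.inf
for step in range(20000):
    cand = E.copy()
    i = rng.integers(E.size)
    cand[i] = max(0.5, cand[i] + rng.normal(0.0, 1.0))   # perturb one voxel's stiffness
    err_cand = np.mean(np.abs(forward_deform(cand) - u_obs))
    err_cur = np.mean(np.abs(forward_deform(E) - u_obs))
    # accept downhill moves always, uphill moves with Boltzmann probability
    if err_cand < err_cur or rng.random() < np.exp(-(err_cand - err_cur) / T):
        E = cand
        if err_cand < best_err:
            best_E, best_err = cand.copy(), err_cand
    T *= 0.9997                                      # fast cooling schedule

# The paper's convergence criterion: every voxel within 1 mm of ground truth.
print("max displacement error (mm):", np.max(np.abs(forward_deform(best_E) - u_obs)))
```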
Foot Structure in Japanese Speech Errors: Normal vs. Pathological
ERIC Educational Resources Information Center
Miyakoda, Haruko
2008-01-01
Although many studies of speech errors have been presented in the literature, most have focused on errors occurring at either the segmental or feature level. Few, if any, studies have dealt with the prosodic structure of errors. This paper aims to fill this gap by taking up the issue of prosodic structure in Japanese speech errors, with a focus on…
Crying in Middle Childhood: A Report on Gender Differences.
Jellesma, Francine C; Vingerhoets, Ad J J M
2012-10-01
The aims of this study were (1) to confirm gender differences in crying in middle childhood and (2) to identify factors that may explain why girls cry more than boys in a Dutch sample (North Holland and Utrecht). We examined 186 children's (age: 9-13 years) self-reports on crying, catharsis, seeking support for feelings, and internalizing feelings. Girls reported a greater crying frequency and crying proneness, and more emotional and physical catharsis after crying. In addition, they more frequently sought support for feelings and more often experienced sadness and somatic complaints than boys. Seeking help for negative feelings and the experience of sadness and somatic complaints were positively associated with crying frequency and crying proneness. Emotional catharsis was positively linked to crying proneness. Support was found for the potential mediating role of sadness and somatic complaints with respect to the gender difference in crying frequency and for the potential mediating role of emotional catharsis and somatic complaints for crying proneness. This study demonstrates that gender differences in crying frequency already exist in middle childhood and the findings suggest a linkage between these gender differences in crying and psychosocial factors.
A Semantic Analysis of XML Schema Matching for B2B Systems Integration
ERIC Educational Resources Information Center
Kim, Jaewook
2011-01-01
One of the most critical steps to integrating heterogeneous e-Business applications using different XML schemas is schema matching, which is known to be costly and error-prone. Many automatic schema matching approaches have been proposed, but the challenge is still daunting because of the complexity of schemas and immaturity of technologies in…
A Logically Centralized Approach for Control and Management of Large Computer Networks
ERIC Educational Resources Information Center
Iqbal, Hammad A.
2012-01-01
Management of large enterprise and Internet service provider networks is a complex, error-prone, and costly challenge. It is widely accepted that the key contributors to this complexity are the bundling of control and data forwarding in traditional routers and the use of fully distributed protocols for network control. To address these…
ERIC Educational Resources Information Center
Dougherty, Michael R.; Sprenger, Amber
2006-01-01
This article introduces 2 new sources of bias in probability judgment, discrimination failure and inhibition failure, which are conceptualized as arising from an interaction between error prone memory processes and a support theory like comparison process. Both sources of bias stem from the influence of irrelevant information on participants'…
Pre-Modeling Ensures Accurate Solid Models
ERIC Educational Resources Information Center
Gow, George
2010-01-01
Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…
An Evaluation of a New Printing Instrument to Aid in Identifying the Failure-prone Preschool Child.
ERIC Educational Resources Information Center
Simner, Marvin L.
Involving 619 preschool children, a longitudinal investigation evaluated a new test for identifying preschool children who produce an excessive number of form errors in printing. All children participating were fluent in English and were in the appropriate grades for their ages, either pre-kindergarten or kindergarten, when they were given the…
Computer programs for optical dendrometer measurements of standing tree profiles
Jacob R. Beard; Thomas G. Matney; Emily B. Schultz
2015-01-01
Tree profile equations are effective volume predictors. Diameter data for building these equations are collected from felled trees using diameter tapes and calipers or from standing trees using optical dendrometers. Developing and implementing a profile function from the collected data is a tedious and error prone task. This study created a computer program, Profile...
Conducting Web-Based Surveys. ERIC Digest.
ERIC Educational Resources Information Center
Solomon, David J.
Web-based surveying is very attractive for many reasons, including reducing the time and cost of conducting a survey and avoiding the often error prone and tedious task of data entry. At this time, Web-based surveys should still be used with caution. The biggest concern at present is coverage bias or bias resulting from sampled people either not…
Errors in the ultrasound diagnosis of the kidneys, ureters and urinary bladder
Wieczorek, Andrzej Paweł; Tyloch, Janusz F.
2013-01-01
The article presents the most frequent errors made in the ultrasound diagnosis of the urinary system. They usually result from improper technique of ultrasound examination or its erroneous interpretation. Such errors are frequent effects of insufficient experience of the ultrasonographer, inadequate class of the scanner, insufficient knowledge of its operation as well as of wrong preparation of patients, their constitution, severe condition and the lack of cooperation during the examination. The reasons for misinterpretations of ultrasound images of the urinary system may lie in a large polymorphism of the kidney (defects and developmental variants) and may result from improper access to the organ as well as from the presence of artefacts. Errors may also result from the lack of knowledge concerning clinical and laboratory data. Moreover, mistakes in ultrasound diagnosis of the urinary system are frequently related to the lack of knowledge of the management algorithms and diagnostic possibilities of other imaging modalities. The paper lists errors in ultrasound diagnosis of the urinary system divided into: errors resulting from improper technique of examination, artefacts caused by incorrect preparation of patients for the examination or their constitution and errors resulting from misinterpretation of ultrasound images of the kidneys (such as their number, size, fluid spaces, pathological lesions and others), ureters and urinary bladder. Each physician performing kidney or bladder ultrasound examination should possess the knowledge of the most frequent errors and their causes which might help to avoid them. PMID:26674139
Beuret, Pascal; Carton, Marie-Jose; Nourdine, Karim; Kaaki, Mahmoud; Tramoni, Gerard; Ducreux, Jean-Claude
2002-05-01
Comatose patients frequently exhibit pulmonary function worsening, especially in cases of pulmonary infection. It appears to have a deleterious effect on neurologic outcome. We therefore conducted a randomized trial to determine whether daily prone positioning would prevent lung worsening in these patients. Prospective, randomized, controlled study. Sixteen-bed intensive care unit. Fifty-one patients who required invasive mechanical ventilation because of coma with Glasgow coma scores of 9 or less. In the prone position (PP) group: prone positioning for 4 h once daily until the patients could get up to sit in an armchair; in the supine position (SP) group: supine positioning. The primary end point was the incidence of lung worsening defined by an increase in the Lung Injury Score of at least 1 point since the time of randomization. The secondary end point was the incidence of ventilator-associated pneumonia (VAP). A total of 25 patients were randomly assigned to the PP group and 26 patients to the SP group. The characteristics of the patients from the two groups were similar at randomization. The incidence of lung worsening was lower in the PP group (12%) than in the SP group (50%) (p=0.003). The incidence of VAP was 20% in the PP group and 38.4% in the SP group (p=0.14). There was no serious complication attributable to prone positioning; however, there was a significant increase of intracranial pressure in the PP group. In a selected population of comatose ventilated patients, daily prone positioning reduced the incidence of lung worsening.
Long-term drought sensitivity of trees in second-growth forests in a humid region
Neil Pederson; Kacie Tackett; Ryan W. McEwan; Stacy Clark; Adrienne Cooper; Glade Brosi; Ray Eaton; R. Drew Stockwell
2012-01-01
Classical field methods of reconstructing drought using tree rings in humid, temperate regions typically target old trees from drought-prone sites. This approach limits investigators to a handful of species and excludes large amounts of data that might be useful, especially for coverage gaps in large-scale networks. By sampling in more "typical" forests, network...
Endodontic Procedural Errors: Frequency, Type of Error, and the Most Frequently Treated Tooth.
Yousuf, Waqas; Khan, Moiz; Mehdi, Hasan
2015-01-01
Introduction. The aim of this study is to determine the most common endodontically treated tooth and the most common error produced during treatment and to note the association of particular errors with particular teeth. Material and Methods. Periapical radiographs were taken of all the included teeth and were stored and assessed using DIGORA Optime. Teeth in each group were evaluated for presence or absence of procedural errors (i.e., overfill, underfill, ledge formation, perforations, apical transportation, and/or instrument separation) and the most frequent tooth to undergo endodontic treatment was also noted. Results. A total of 1748 root canal treated teeth were assessed, out of which 574 (32.8%) contained a procedural error. Out of these 397 (22.7%) were overfilled, 155 (8.9%) were underfilled, 16 (0.9%) had instrument separation, and 7 (0.4%) had apical transportation. The most frequently treated tooth was right permanent mandibular first molar (11.3%). The least commonly treated teeth were the permanent mandibular third molars (0.1%). Conclusion. Practitioners should show greater care to maintain accuracy of the working length throughout the procedure, as errors in length accounted for the vast majority of errors and special care should be taken when working on molars.
NASA Astrophysics Data System (ADS)
Su, Tengfei
2018-04-01
In this paper, an unsupervised evaluation scheme for remote sensing image segmentation is developed. The approach builds on the under- and over-segmentation aware (UOA) method and improves the part of it that estimates over-segmentation error, which is prone to failure. Two such failure cases are identified, and edge strength is employed to devise a solution. Two subsets of high-resolution remote sensing images were used to test the proposed algorithm, and the experimental results indicate its superior performance, which is attributed to its improved over-segmentation error (OSE) detection model.
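A sketch of the edge-strength idea, under the assumption that an over-segmentation error shows up as two adjacent segments separated by a weak image edge; the function and threshold are illustrative, not the paper's exact model.

```python
import numpy as np

def weak_boundary_pairs(labels, image, thresh):
    """Return adjacent segment pairs whose shared boundary has low mean
    gradient magnitude; a weak boundary suggests an over-segmentation error.
    Only horizontal adjacency is scanned, for brevity."""
    gy, gx = np.gradient(image.astype(float))
    edge = np.hypot(gx, gy)                      # per-pixel edge strength
    left, right = labels[:, :-1], labels[:, 1:]
    m = left != right                            # pixels on a segment boundary
    strength = 0.5 * (edge[:, :-1][m] + edge[:, 1:][m])
    pairs = {}
    for p, q, s in zip(left[m], right[m], strength):
        key = (min(p, q), max(p, q))
        pairs.setdefault(key, []).append(s)
    return [k for k, v in pairs.items() if np.mean(v) < thresh]
```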
Kranz, J; Sommer, K-J; Steffens, J
2014-05-01
Patient safety and risk/complication management rank among the current megatrends in modern medicine, which has undoubtedly become more complex. In time-critical, error-prone and difficult situations, which occur repeatedly in everyday clinical practice, guidelines are ill-suited to guiding rapid and intelligent action. The establishment and consistent use of standard operating procedures, as in commercial aviation, offers a possible strategic approach. These medical aids to decision-making - quick reference cards - are short, optimized instructions that enable a standardized procedure in critical situations.
Lobo, Elena; Dalling, James W
2014-03-07
Treefall gaps play an important role in tropical forest dynamics and in determining above-ground biomass (AGB). However, our understanding of gap disturbance regimes is largely based either on surveys of forest plots that are small relative to spatial variation in gap disturbance, or on satellite imagery, which cannot accurately detect small gaps. We used high-resolution light detection and ranging data from a 1500 ha forest in Panama to: (i) determine how gap disturbance parameters are influenced by study area size, and the criteria used to define gaps; and (ii) to evaluate how accurately previous ground-based canopy height sampling can determine the size and location of gaps. We found that plot-scale disturbance parameters frequently differed significantly from those measured at the landscape-level, and that canopy height thresholds used to define gaps strongly influenced the gap-size distribution, an important metric influencing AGB. Furthermore, simulated ground surveys of canopy height frequently misrepresented the true location of gaps, which may affect conclusions about how relatively small canopy gaps affect successional processes and contribute to the maintenance of diversity. Across site comparisons need to consider how gap definition, scale and spatial resolution affect characterizations of gap disturbance, and its inferred importance for carbon storage and community composition.
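A sketch of the gap-delineation step, assuming a rasterized canopy height model (CHM); the height threshold and minimum area are illustrative, and the paper's point is precisely that measured gap-size distributions are sensitive to such choices.

```python
import numpy as np
from scipy import ndimage

def delineate_gaps(chm, height_thresh=2.0, min_area_m2=25.0, pixel_area_m2=1.0):
    """Label canopy gaps as connected regions of the CHM below a height
    threshold, keeping only regions above a minimum area."""
    low = chm < height_thresh                          # candidate gap pixels
    labels, n = ndimage.label(low)                     # 4-connected components
    areas = ndimage.sum(low, labels, index=np.arange(1, n + 1)) * pixel_area_m2
    keep = np.flatnonzero(areas >= min_area_m2) + 1    # labels passing the area filter
    return np.isin(labels, keep), areas[areas >= min_area_m2]
```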
Skull registration for prone patient position using tracked ultrasound
NASA Astrophysics Data System (ADS)
Underwood, Grace; Ungi, Tamas; Baum, Zachary; Lasso, Andras; Kronreif, Gernot; Fichtinger, Gabor
2017-03-01
PURPOSE: Tracked navigation has become prevalent in neurosurgery. Problems with registration of a patient and a preoperative image arise when the patient is in a prone position. Surfaces accessible to optical tracking on the back of the head are unreliable for registration. We investigated the accuracy of surface-based registration using points accessible through tracked ultrasound. Using ultrasound allows access to bone surfaces that are not available through optical tracking. Tracked ultrasound could eliminate the need to (i) work under the table for registration and (ii) adjust the tracker between surgery and registration. In addition, tracked ultrasound could provide a non-invasive method in comparison to an alternative method of registration involving screw implantation. METHODS: A phantom study was performed to test the feasibility of tracked ultrasound for registration. An initial registration was performed to partially align the pre-operative computed tomography data and the skull phantom. The initial registration was performed by an anatomical landmark registration. Surface points accessible by tracked ultrasound were collected and used to perform an iterative closest point algorithm. RESULTS: When the surface registration was compared to a ground truth landmark registration, the average TRE was found to be 1.6 ± 0.1 mm and the average distance of points off the skull surface was 0.6 ± 0.1 mm. CONCLUSION: The use of tracked ultrasound is feasible for registration of patients in the prone position and eliminates the need to perform registration under the table. The translational component of error found was minimal; the TRE in registration is therefore due mostly to a rotational component of error.
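A generic sketch of the surface-registration core, assuming a landmark pre-alignment as in the paper; this is textbook point-to-point ICP, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(source, target, iters=50):
    """Point-to-point ICP: match each (tracked-ultrasound) surface point to
    its nearest CT skull-surface point, then solve the best rigid transform
    with the SVD (Kabsch) method, and repeat."""
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                   # closest-point correspondences
        matched = target[idx]
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T                    # guard against reflections
        t_step = mu_m - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step     # accumulate the transform
    return R, t                                    # maps source into target frame
```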
Does a better model yield a better argument? An info-gap analysis
NASA Astrophysics Data System (ADS)
Ben-Haim, Yakov
2017-04-01
Theories, models and computations underlie reasoned argumentation in many areas. The possibility of error in these arguments, though of low probability, may be highly significant when the argument is used in predicting the probability of rare high-consequence events. This implies that the choice of a theory, model or computational method for predicting rare high-consequence events must account for the probability of error in these components. However, error may result from lack of knowledge or surprises of various sorts, and predicting the probability of error is highly uncertain. We show that the putatively best, most innovative and sophisticated argument may not actually have the lowest probability of error. Innovative arguments may entail greater uncertainty than more standard but less sophisticated methods, creating an innovation dilemma in formulating the argument. We employ info-gap decision theory to characterize and support the resolution of this problem and present several examples.
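The info-gap robustness function at the heart of this reasoning can be sketched as follows (generic form with assumed notation, not quoted from the paper): the robustness of a model or argument q is the greatest horizon of uncertainty h at which even the worst-case performance still meets the critical requirement r_c.

```latex
\[
  \hat{h}(q, r_c) \;=\; \max \Bigl\{\, h \ge 0 :
     \max_{u \in \mathcal{U}(h)} R(q, u) \le r_c \Bigr\},
\]
% where U(h) is the set of realizations u consistent with an uncertainty
% horizon h, and R(q, u) is the performance (e.g., probability of error).
```

An innovation dilemma then appears when a more sophisticated argument promises better nominal performance but has lower robustness than a simpler one.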
Zakrzewski, Maciej; Wojtak, Jerzy; Mazurkiewicz, Hanna; Grygalewicz, Jacek
2005-01-01
To establish the occurrence of SIDS risk factors (including 'removable' ones) and the incidence of the ECG long QT interval (accepted as a risk factor) and their influence upon infants' development and morbidity. A group of 98 infants from normal birth at term to the end of the first year of life was observed. The data sources were as follows: 1) a questionnaire filled in by mothers before discharge from the maternity ward, 2) records of four consecutive medical examinations (including ECG records) performed on the 3rd day and in the 3rd, 6th and 12th month of life. The Chi-square test and Fisher test were used. The most often identified risk factors were: prone sleeping position of the infant (60.2%), environmental and maternal tobacco smoking (40.8%) and bed sharing practices (32.6%). Significant but transient signs of delay in psychomotor development (in the motor zone) as well as more frequent respiratory tract infections were noted in infants sleeping prone. There were no deaths in the observed group nor cases of long QT interval. 1) The most frequently occurring SIDS risk factors are: environmental tobacco smoking, infant prone sleeping and bed sharing; 2) these inappropriate nursing practices and improper habits of adult family members, known as 'removable' SIDS risk factors, have an adverse effect on infant health and development; 3) identification of SIDS risk factors in an infant does not predict crib death.
Gap filling strategies and error in estimating annual soil respiration
USDA-ARS?s Scientific Manuscript database
Soil respiration (Rsoil) is one of the largest CO2 fluxes in the global carbon (C) cycle. Estimation of annual Rsoil requires extrapolation of survey measurements or gap-filling of automated records to produce a complete time series. While many gap-filling methodologies have been employed, there is ...
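One widely used gap-filling strategy fits a Q10 temperature-response model to the measured records and predicts the gaps from soil temperature; a hedged sketch follows (the manuscript compares several methodologies, and this is only one of them).

```python
import numpy as np

def fill_q10(rsoil, tsoil):
    """Fill np.nan gaps in a soil respiration series using a Q10 model,
    R = R10 * Q10**((T - 10) / 10), fit on the measured records.
    rsoil: flux series with gaps; tsoil: complete soil temperature series."""
    ok = ~np.isnan(rsoil)
    # linearize: log R = log R10 + log(Q10) * (T - 10)/10
    slope, intercept = np.polyfit((tsoil[ok] - 10.0) / 10.0, np.log(rsoil[ok]), 1)
    r10, q10 = np.exp(intercept), np.exp(slope)
    filled = rsoil.copy()
    gaps = ~ok
    filled[gaps] = r10 * q10 ** ((tsoil[gaps] - 10.0) / 10.0)
    return filled, r10, q10

# Annual Rsoil is then integrated from the gap-filled series; the choice of
# gap-filling model is itself a source of error in that annual estimate.
```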
A description of medication errors reported by pharmacists in a neonatal intensive care unit.
Pawluk, Shane; Jaam, Myriam; Hazi, Fatima; Al Hail, Moza Sulaiman; El Kassem, Wessam; Khalifa, Hanan; Thomas, Binny; Abdul Rouf, Pallivalappila
2017-02-01
Background Patients in the Neonatal Intensive Care Unit (NICU) are at an increased risk for medication errors. Objective The objective of this study is to describe the nature and setting of medication errors occurring in patients admitted to an NICU in Qatar based on a standard electronic system reported by pharmacists. Setting Neonatal intensive care unit, Doha, Qatar. Method This was a retrospective cross-sectional study on medication errors reported electronically by pharmacists in the NICU between January 1, 2014 and April 30, 2015. Main outcome measure Data collected included patient information, and incident details including error category, medications involved, and follow-up completed. Results A total of 201 NICU pharmacists-reported medication errors were submitted during the study period. All reported errors did not reach the patient and did not cause harm. Of the errors reported, 98.5% occurred in the prescribing phase of the medication process with 58.7% being due to calculation errors. Overall, 53 different medications were documented in error reports with the anti-infective agents being the most frequently cited. The majority of incidents indicated that the primary prescriber was contacted and the error was resolved before reaching the next phase of the medication process. Conclusion Medication errors reported by pharmacists occur most frequently in the prescribing phase of the medication process. Our data suggest that error reporting systems need to be specific to the population involved. Special attention should be paid to frequently used medications in the NICU as these were responsible for the greatest numbers of medication errors.
LightAssembler: fast and memory-efficient assembly algorithm for high-throughput sequencing reads.
El-Metwally, Sara; Zakaria, Magdi; Hamza, Taher
2016-11-01
The deluge of current sequenced data has exceeded Moore's Law, more than doubling every 2 years since the next-generation sequencing (NGS) technologies were invented. Accordingly, we will be able to generate more and more data with high speed at fixed cost, but lack the computational resources to store, process and analyze it. With error-prone high-throughput NGS reads and genomic repeats, the assembly graph contains a massive amount of redundant nodes and branching edges. Most assembly pipelines require this large graph to reside in memory to start their workflows, which is intractable for mammalian genomes. Resource-efficient genome assemblers combine both the power of advanced computing techniques and innovative data structures to encode the assembly graph efficiently in computer memory. LightAssembler is a lightweight assembly algorithm designed to be executed on a desktop machine. It uses a pair of cache oblivious Bloom filters, one holding a uniform sample of [Formula: see text]-spaced sequenced [Formula: see text]-mers and the other holding [Formula: see text]-mers classified as likely correct, using a simple statistical test. LightAssembler contains a light implementation of the graph traversal and simplification modules that achieves comparable assembly accuracy and contiguity to other competing tools. Our method reduces the memory usage by [Formula: see text] compared to the resource-efficient assemblers using benchmark datasets from GAGE and Assemblathon projects. While LightAssembler can be considered a gap-based sequence assembler, different gap sizes result in an almost constant assembly size and genome coverage. Availability: https://github.com/SaraEl-Metwally/LightAssembler Contact: sarah_almetwally4@mans.edu.eg Supplementary information: Supplementary data are available at Bioinformatics online.
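The two-filter idea can be sketched in a few lines of Python; the Bloom filter below is a plain textbook version, not LightAssembler's cache-oblivious implementation, and the promotion rule stands in for the paper's statistical test.

```python
import hashlib

class Bloom:
    """Tiny Bloom filter (illustrative only)."""
    def __init__(self, m=1 << 20, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)
    def _hashes(self, item):
        for i in range(self.k):
            h = hashlib.blake2b(item.encode(), salt=bytes([i]) * 8).digest()
            yield int.from_bytes(h[:8], "little") % self.m
    def add(self, item):
        for h in self._hashes(item):
            self.bits[h // 8] |= 1 << (h % 8)
    def __contains__(self, item):
        return all(self.bits[h // 8] & (1 << (h % 8)) for h in self._hashes(item))

def classify_kmers(reads, k=31):
    """B1 holds k-mers seen once; a k-mer seen again is promoted to B2 as
    'likely correct', since sequencing errors rarely repeat. Spacing and
    sampling details of the real tool are omitted here."""
    b1, b2 = Bloom(), Bloom()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            if kmer in b1:
                b2.add(kmer)
            else:
                b1.add(kmer)
    return b2
```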
13Check_RNA: A tool to evaluate 13C chemical shifts assignments of RNA.
Icazatti, A A; Martin, O A; Villegas, M; Szleifer, I; Vila, J A
2018-06-19
Chemical shifts (CS) are an important source of structural information for macromolecules such as RNA. In addition to the scarce availability of CS for RNA, the observed values are prone to errors due to incorrect re-calibration or mis-assignments. Different groups have dedicated their efforts to correcting systematic errors in RNA CS. Despite this, there are no automated, freely available algorithms to correct RNA 13C CS assignments before their deposition to the BMRB or to re-reference already deposited CS with systematic errors. Based on an existing method, we have implemented an open-source Python module to correct systematic errors in experimental 13C CS (from here on 13Cexp) of RNAs and return the results in 3 formats, including the NMR-STAR one. This software is available on GitHub at https://github.com/BIOS-IMASL/13Check_RNA under an MIT license. Supplementary data are available at Bioinformatics online.
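As a generic illustration of how a systematic re-referencing error can be detected (this is not the 13Check_RNA API; `expected` stands for a hypothetical lookup of typical RNA 13C shifts per atom type):

```python
import numpy as np

def systematic_offset(observed, expected, tol_ppm=1.0):
    """Estimate a constant offset between observed and expected 13C shifts.
    The median is used so that a few genuine outliers or mis-assignments
    do not masquerade as a calibration error. tol_ppm is an assumption."""
    residuals = np.asarray(observed, float) - np.asarray(expected, float)
    offset = float(np.median(residuals))
    return offset if abs(offset) > tol_ppm else 0.0  # subtract to re-reference
```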
Learning Disabilities and Conductive Hearing Loss Involving Otitis Media.
ERIC Educational Resources Information Center
Reichman, Julie; Healey, William C.
1983-01-01
A review of research on the relationship of otitis media (ear infection) and learning/language/hearing disorders revealed that incidence of otitis media was twice as common in learning disabled as nonLD students; and that, in general, otitis-prone children scored below controls with frequent evidence of performance deficits. (CL)
Spokane, WA is prone to frequent particulate pollution episodes due to dust storms, biomass burning, and periods of stagnant meteorological conditions. Spokane is the location of a long-term study examining the association between health effects and chemical or physical consti...
Automatic Generation of Data Types for Classification of Deep Web Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ngu, A H; Buttler, D J; Critchlow, T J
2005-02-14
A Service Class Description (SCD) is an effective meta-data based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning data types of a class of Web sources. The Brute-Force Learner is able to generate data types that can achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and present a quantitative evaluation of these two solutions.
A Comparative Study of Anomaly Detection Techniques for Smart City Wireless Sensor Networks.
Garcia-Font, Victor; Garrigues, Carles; Rifà-Pous, Helena
2016-06-13
In many countries around the world, smart cities are becoming a reality. These cities contribute to improving citizens' quality of life by providing services that are normally based on data extracted from wireless sensor networks (WSN) and other elements of the Internet of Things. Additionally, public administration uses these smart city data to increase its efficiency, to reduce costs and to provide additional services. However, the information received at smart city data centers is not always accurate, because WSNs are sometimes prone to error and are exposed to physical and computer attacks. In this article, we use real data from the smart city of Barcelona to simulate WSNs and implement typical attacks. Then, we compare frequently used anomaly detection techniques to disclose these attacks. We evaluate the algorithms under different requirements on the available network status information. As a result of this study, we conclude that one-class Support Vector Machines is the most appropriate technique. We achieve a true positive rate at least 56% higher than the rates achieved with the other compared techniques in a scenario with a maximum false positive rate of 5%, and 26% higher in a scenario with a false positive rate of 15%.
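A minimal sketch of the winning technique with scikit-learn; the features, nu, and gamma values are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Train a one-class SVM on normal WSN readings only, then flag outliers
# as possible attacks. Data below are synthetic placeholders.
rng = np.random.default_rng(1)
normal = rng.normal(loc=[20.0, 45.0], scale=[2.0, 5.0], size=(500, 2))  # temp, humidity
attack = rng.normal(loc=[35.0, 10.0], scale=[1.0, 2.0], size=(25, 2))   # tampered sensor

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal)
pred = clf.predict(np.vstack([normal[:5], attack[:5]]))  # +1 = normal, -1 = anomaly
print(pred)
```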
Corrected score estimation in the proportional hazards model with misclassified discrete covariates
Zucker, David M.; Spiegelman, Donna
2013-01-01
We consider Cox proportional hazards regression when the covariate vector includes error-prone discrete covariates along with error-free covariates, which may be discrete or continuous. The misclassification in the discrete error-prone covariates is allowed to be of any specified form. Building on the work of Nakamura and his colleagues, we present a corrected score method for this setting. The method can handle all three major study designs (internal validation design, external validation design, and replicate measures design), both functional and structural error models, and time-dependent covariates satisfying a certain ‘localized error’ condition. We derive the asymptotic properties of the method and indicate how to adjust the covariance matrix of the regression coefficient estimates to account for estimation of the misclassification matrix. We present the results of a finite-sample simulation study under Weibull survival with a single binary covariate having known misclassification rates. The performance of the method described here was similar to that of related methods we have examined in previous works. Specifically, our new estimator performed as well as or, in a few cases, better than the full Weibull maximum likelihood estimator. We also present simulation results for our method for the case where the misclassification probabilities are estimated from an external replicate measures study. Our method generally performed well in these simulations. The new estimator has a broader range of applicability than many other estimators proposed in the literature, including those described in our own earlier work, in that it can handle time-dependent covariates with an arbitrary misclassification structure. We illustrate the method on data from a study of the relationship between dietary calcium intake and distal colon cancer. PMID:18219700
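A building block of the corrected-score idea can be sketched as follows (assumed notation; the paper's full construction also covers time-dependent covariates and estimated misclassification rates):

```latex
% Let X be the true category and W the observed, misclassified category,
% with known misclassification matrix  \Pi_{jk} = P(W = j \mid X = k).
% Writing e_W for the indicator vector of the observed category,
%   E[e_W \mid X = k] = \Pi e_k ,
% so the transformed observation \Pi^{-1} e_W is unbiased for e_X:
\[
  E\!\left[\, \Pi^{-1} e_W \,\middle|\, X = k \,\right] \;=\; e_k .
\]
% Substituting such unbiased constructs into the partial-likelihood score
% yields a corrected score whose root consistently estimates \beta.
```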
Intransparent German number words complicate transcoding - a translingual comparison with Japanese.
Moeller, Korbinian; Zuber, Julia; Olsen, Naoko; Nuerk, Hans-Christoph; Willmes, Klaus
2015-01-01
Superior early numerical competencies of children in several Asian countries have (amongst others) been attributed to the higher transparency of their number word systems. Here, we directly investigated this claim by evaluating whether Japanese children's transcoding performance when writing numbers to dictation (e.g., "twenty five" → 25) was less error prone than that of German-speaking children - both in general as well as when considering language-specific attributes of the German number word system, in particular its inversion property (e.g., 25 is "fünfundzwanzig", literally "five-and-twenty", so children may erroneously write 52). In line with this hypothesis we observed that German-speaking children committed more transcoding errors in general than their Japanese peers. Moreover, their error pattern reflected the specific inversion intransparency of the German number-word system. Inversion errors in transcoding represented the most prominent error category in German-speaking children, but were almost absent in Japanese-speaking children. We conclude that the less transparent German number-word system complicates the acquisition of the correspondence between symbolic Arabic numbers and their respective verbal number words.
ERIC Educational Resources Information Center
Rast, Philippe; Zimprich, Daniel; Van Boxtel, Martin; Jolles, Jellemer
2009-01-01
The Cognitive Failures Questionnaire (CFQ) is designed to assess a person's proneness to committing cognitive slips and errors in the completion of everyday tasks. Although the CFQ is a widely used instrument, its factor structure remains an issue of scientific debate. The present study used data of a representative sample (N = 1,303, 24-83 years…
Ground-based digital imagery for tree stem analysis
Neil Clark; Daniel L. Schmoldt; Randolph H. Wynne; Matthew F. Winn; Philip A. Araman
2000-01-01
In the USA, a subset of permanent forest sample plots within each geographic region are intensively measured to obtain estimates of tree volume and products. The detailed field measurements required for this type of sampling are both time consuming and error prone. We are attempting to reduce both of these factors with the aid of a commercially-available solid-state...
USDA-ARS?s Scientific Manuscript database
We investigated measurement error in the self-reported diets of US Hispanics/Latinos, who are prone to obesity and related comorbidities, by background (Central American, Cuban, Dominican, Mexican, Puerto Rican, and South American) in 2010–2012. In 477 participants aged 18–74 years, doubly labeled w...
ERIC Educational Resources Information Center
Yamamoto, Kentaro; He, Qiwei; Shin, Hyo Jeong; von Davier, Mattias
2017-01-01
Approximately a third of the Programme for International Student Assessment (PISA) items in the core domains (math, reading, and science) are constructed-response items and require human coding (scoring). This process is time-consuming, expensive, and prone to error as often (a) humans code inconsistently, and (b) coding reliability in…
Structure-Function Analysis of Chloroplast Proteins via Random Mutagenesis Using Error-Prone PCR.
Dumas, Louis; Zito, Francesca; Auroy, Pascaline; Johnson, Xenie; Peltier, Gilles; Alric, Jean
2018-06-01
Site-directed mutagenesis of chloroplast genes was developed three decades ago and has greatly advanced the field of photosynthesis research. Here, we describe a new approach for generating random chloroplast gene mutants that combines error-prone polymerase chain reaction of a gene of interest with chloroplast complementation of the knockout Chlamydomonas reinhardtii mutant. As a proof of concept, we targeted a 300-bp sequence of the petD gene that encodes subunit IV of the thylakoid membrane-bound cytochrome b6f complex. By sequencing chloroplast transformants, we revealed 149 mutations in the 300-bp target petD sequence that resulted in 92 amino acid substitutions in the 100-residue target subunit IV sequence. Our results show that this method is suited to the study of highly hydrophobic, multisubunit, and chloroplast-encoded proteins containing cofactors such as hemes, iron-sulfur clusters, and chlorophyll pigments. Moreover, we show that mutant screening and sequencing can be used to study photosynthetic mechanisms or to probe the mutational robustness of chloroplast-encoded proteins, and we propose that this method is a valuable tool for the directed evolution of enzymes in the chloroplast.
Mao, Chanjuan; Xie, Hongjie; Chen, Shiguo; Valverde, Bernal E; Qiang, Sheng
2017-09-01
Liriope spicata (Thunb.) Lour has a unique LsEPSPS structure contributing to the highest-ever-recognized natural glyphosate tolerance. The transformed LsEPSPS confers increased glyphosate resistance to E. coli and A. thaliana. However, the increased glyphosate-resistance level is not high enough to be of commercial value. Therefore, LsEPSPS was subjected to error-prone PCR to screen for mutant EPSPS genes capable of endowing higher resistance levels. A mutant designated ELs-EPSPS, having five mutated amino acids (37Val, 67Asn, 277Ser, 351Gly and 422Gly), was selected for its ability to confer improved resistance to glyphosate. Expression of ELs-EPSPS in recombinant E. coli BL21 (DE3) strains enhanced resistance to glyphosate in comparison to both the LsEPSPS-transformed and untransformed controls. Furthermore, transgenic ELs-EPSPS A. thaliana was about 5.4-fold and 2-fold more resistant to glyphosate than the wild-type and the Ls-EPSPS-transgenic plants, respectively. Therefore, the mutated ELs-EPSPS gene has potential value for the development of glyphosate-resistant crops.
Aboussekhra, A; Chanet, R; Zgaga, Z; Cassier-Chauvat, C; Heude, M; Fabre, F
1989-09-25
A new type of radiation-sensitive mutant of S. cerevisiae is described. The recessive radH mutation sensitizes haploids in the G1, but not in the G2, mitotic phase to the lethal effect of UV radiation. Homozygous diploids are as sensitive as G1 haploids. UV-induced mutagenesis is depressed, while the induction of gene conversion is increased. The mutation is believed to channel the repair of lesions engaged in the mutagenic pathway into a recombination process, successful if the events involve sister chromatids but lethal if they involve homologous chromosomes. The sequence of the RADH gene reveals that it may code for a DNA helicase with an Mr of 134 kDa. All the consensus domains of known DNA helicases are present. Besides these consensus regions, strong homologies with the Rep and UvrD helicases of E. coli were found. The RadH putative helicase appears to belong to the set of proteins involved in the error-prone repair mechanism, at least for UV-induced lesions, and could act in coordination with the Rev3 error-prone DNA polymerase.
Kim, Jinsook; Song, Insil; Jo, Ara; Shin, Joo-Ho; Cho, Hana; Eoff, Robert L; Guengerich, F Peter; Choi, Jeong-Yun
2014-10-20
DNA polymerase (pol) ι is the most error-prone among the Y-family polymerases that participate in translesion synthesis (TLS). Pol ι can bypass various DNA lesions, e.g., N(2)-ethyl(Et)G, O(6)-methyl(Me)G, 8-oxo-7,8-dihydroguanine (8-oxoG), and an abasic site, though frequently with low fidelity. We assessed the biochemical effects of six reported genetic variations of human pol ι on its TLS properties, using the recombinant pol ι (residues 1-445) proteins and DNA templates containing a G, N(2)-EtG, O(6)-MeG, 8-oxoG, or abasic site. The Δ1-25 variant, which is the N-terminal truncation of 25 residues resulting from an initiation codon variant (c.3G > A) and also is the formerly misassigned wild-type, exhibited considerably higher polymerase activity than wild-type with Mg(2+) (but not with Mn(2+)), coinciding with its steady-state kinetic data showing a ∼10-fold increase in kcat/Km for nucleotide incorporation opposite templates (only with Mg(2+)). The R96G variant, which lacks a R96 residue known to interact with the incoming nucleotide, lost much of its polymerase activity, consistent with the kinetic data displaying 5- to 72-fold decreases in kcat/Km for nucleotide incorporation opposite templates either with Mg(2+) or Mn(2+), except for that opposite N(2)-EtG with Mn(2+) (showing a 9-fold increase for dCTP incorporation). The Δ1-25 variant bound DNA 20- to 29-fold more tightly than wild-type (with Mg(2+)), but the R96G variant bound DNA 2-fold less tightly than wild-type. The DNA-binding affinity of wild-type, but not of the Δ1-25 variant, was ∼7-fold stronger with 0.15 mM Mn(2+) than with Mg(2+). The results indicate that the R96G variation severely impairs most of the Mg(2+)- and Mn(2+)-dependent TLS abilities of pol ι, whereas the Δ1-25 variation selectively and substantially enhances the Mg(2+)-dependent TLS capability of pol ι, emphasizing the potential translational importance of these pol ι genetic variations, e.g., individual differences in TLS, mutation, and cancer susceptibility to genotoxic carcinogens.
Sale, Julian E.; Batters, Christopher; Edmunds, Charlotte E.; Phillips, Lara G.; Simpson, Laura J.; Szüts, Dávid
2008-01-01
By temporarily deferring the repair of DNA lesions encountered during replication, the bypass of DNA damage is critical to the ability of cells to withstand genomic insults. Damage bypass can be achieved either by recombinational mechanisms that are generally accurate or by a process called translesion synthesis. Translesion synthesis involves replacing the stalled replicative polymerase with one of a number of specialized DNA polymerases whose active sites are able to tolerate a distorted or damaged DNA template. While this property allows the translesion polymerases to synthesize across damaged bases, it does so with the trade-off of an increased mutation rate. The deployment of these enzymes must therefore be carefully regulated. In addition to their important role in general DNA damage tolerance and mutagenesis, the translesion polymerases play a crucial role in converting the products of activation-induced deaminase-catalysed cytidine deamination to mutations during immunoglobulin gene somatic hypermutation. In this paper, we specifically consider the control of translesion synthesis in the context of the timing of lesion bypass relative to replication fork progression and arrest at sites of DNA damage. We then examine how recent observations concerning the control of translesion synthesis might help refine our view of the mechanisms of immunoglobulin gene somatic hypermutation. PMID:19008194
Evaluation of Reconstructed Remote Sensing Time Series Data
NASA Astrophysics Data System (ADS)
Rivera-Camacho, J.; Didan, K.; Barreto-munoz, A.; Yitayew, M.
2011-12-01
Vegetation phenology is the study of vegetation state, function, and change over time; it is directly linked to the carbon cycle and serves as an integrative measure of climate change impacts. Field observations of phenology can address some questions associated with phenology and climate change, but they are not effective at estimating and understanding large-scale change in biome seasonality. Synoptic remote sensing has emerged as a practical tool for studying land surface vegetation over large spatial and temporal scales. However, the presence of clouds, noise, and inadequate processing algorithms results in poor-quality data that must be discarded. Discarded data is sometimes so prevalent that up to 80% of the spatial and temporal coverage is missing, which inhibits the proper study of vegetation phenology. To improve these data records, gap-filling techniques are employed. The purpose is to accurately reconstruct the VI time series profile while preserving as much of the original data as possible to support accurate land surface vegetation characterization. Some methods use complex Fourier Transform (FT) functions, Gaussian fitting models, or piecewise techniques, while others are based on simpler linear interpolation. The impact of these gap-filling methods on the resulting record is yet to be fully explored and characterized. In this project, we devised a new hybrid gap-filling technique based on finding the seasonally variable per-pixel optimum composite period and then filling the remaining gaps with a simple local interpolation using the Inverse Distance Weighting (IDW) approach. The method is further constrained by a moving-window long-term average to minimize the biases that may result from over- or under-fitting. This method was applied to a 30-year sensor-independent Vegetation Index ESDR from AVHRR and MODIS records. To understand the impact of this gap-filling technique, we performed statistical analyses to determine the error and uncertainty associated with estimating the start of season, length of season, and integrated VI signal over the growing season (a proxy of Gross Primary Productivity, or carbon). Our preliminary results indicate that the time series is sensitive to the gap-filling technique, particularly over areas prone to residual cloud noise and/or areas subject to long periods of snow cover. This has a direct impact on the growing season characterization by making the season shorter (up to 4 weeks) and the start of the season later (up to 2 weeks). The seasonal summation of VI then becomes smaller (13%), with a direct impact on the carbon budget estimation. Another important finding is that special attention must be paid to data filtering, since this will impact the residual noise/signal in the input data and will subsequently impact the gap-filling outcome.
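To make the local interpolation step concrete, here is a minimal Python sketch of inverse-distance-weighted gap filling along the time axis. The function name, window size, and power parameter are illustrative assumptions, not the authors' implementation, which also uses per-pixel optimum compositing and a long-term-average constraint.

```python
import numpy as np

def idw_fill(series, power=2.0, max_window=5):
    """Fill NaN gaps in a 1-D vegetation-index series by inverse-distance
    weighting of nearby valid samples along the time axis. Illustrative
    only; window and power are arbitrary choices."""
    filled = series.copy()
    valid = np.where(~np.isnan(series))[0]
    for i in np.where(np.isnan(series))[0]:
        # neighbours within +/- max_window composite periods
        near = valid[np.abs(valid - i) <= max_window]
        if near.size == 0:
            continue  # gap too wide; leave unfilled
        w = 1.0 / np.abs(near - i) ** power
        filled[i] = np.sum(w * series[near]) / np.sum(w)
    return filled

vi = np.array([0.31, 0.35, np.nan, np.nan, 0.48, 0.52, np.nan, 0.49])
print(idw_fill(vi))
```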
Macke, Jeremy J; Woo, Raymund; Varich, Laura
2016-06-01
This is a retrospective review of pedicle screw placement in adolescent idiopathic scoliosis (AIS) patients under 18 years of age who underwent robot-assisted corrective surgery. Our primary objective was to characterize the accuracy of pedicle screw placement with evaluation by computed tomography (CT) after robot-assisted surgery in AIS patients. Screw malposition is the most frequent complication of pedicle screw placement and is more frequent in AIS. Given the potential for serious complications, the need for improved accuracy of screw placement has spurred multiple innovations including robot-assisted guidance devices. No studies to date have evaluated this robot-assisted technique using CT exclusively within the AIS population. Fifty patients were included in the study. All operative procedures were performed at a single institution by a single pediatric orthopedic surgeon. We evaluated the grade of screw breach, the direction of screw breach, and the positioning of the patient for preoperative scan (supine versus prone). Of 662 screws evaluated, 48 screws (7.2 %) demonstrated a breach of greater than 2 mm. With preoperative prone position CT scanning, only 2.4 % of screws were found to have this degree of breach. Medial malposition was found in 3 % of screws, a rate which decreased to 0 % with preoperative prone position scanning. Based on our results, we conclude that the proper use of image-guided robot-assisted surgery can improve the accuracy and safety of thoracic pedicle screw placement in patients with adolescent idiopathic scoliosis. This is the first study to evaluate the accuracy of pedicle screw placement using CT assessment in robot-assisted surgical correction of patients with AIS. In our study, the robot-assisted screw misplacement rate was lower than similarly constructed studies evaluating conventional (non-robot-assisted) procedures. If patients are preoperatively scanned in the prone position, the misplacement rate is further decreased.
Lobo, Elena; Dalling, James W.
2014-01-01
Treefall gaps play an important role in tropical forest dynamics and in determining above-ground biomass (AGB). However, our understanding of gap disturbance regimes is largely based either on surveys of forest plots that are small relative to spatial variation in gap disturbance, or on satellite imagery, which cannot accurately detect small gaps. We used high-resolution light detection and ranging data from a 1500 ha forest in Panama to: (i) determine how gap disturbance parameters are influenced by study area size, and the criteria used to define gaps; and (ii) to evaluate how accurately previous ground-based canopy height sampling can determine the size and location of gaps. We found that plot-scale disturbance parameters frequently differed significantly from those measured at the landscape-level, and that canopy height thresholds used to define gaps strongly influenced the gap-size distribution, an important metric influencing AGB. Furthermore, simulated ground surveys of canopy height frequently misrepresented the true location of gaps, which may affect conclusions about how relatively small canopy gaps affect successional processes and contribute to the maintenance of diversity. Across site comparisons need to consider how gap definition, scale and spatial resolution affect characterizations of gap disturbance, and its inferred importance for carbon storage and community composition. PMID:24452032
Magellan spacecraft and memory state tracking: Lessons learned, future thoughts
NASA Technical Reports Server (NTRS)
Bucher, Allen W.
1993-01-01
Numerous studies have been dedicated to improving the two main elements of Spacecraft Mission Operations: Command and Telemetry. As a result, not much attention has been given to other tasks that can become tedious, repetitive, and error prone. One such task is Spacecraft and Memory State Tracking, the process by which the status of critical spacecraft components, parameters, and the contents of on-board memory are managed on the ground to maintain knowledge of spacecraft and memory states for future testing, anomaly investigation, and on-board memory reconstruction. The task of Spacecraft and Memory State Tracking has traditionally been a manual task allocated to Mission Operations Procedures. During nominal Mission Operations this job is tedious and error prone. Because the task is not complex and can be accomplished manually, the worth of a sophisticated software tool is often questioned. However, in the event of an anomaly which alters spacecraft components autonomously or a memory anomaly such as a corrupt memory or flight software error, an accurate ground image that can be reconstructed quickly is a priceless commodity. This study explores the process of Spacecraft and Memory State Tracking used by the Magellan Spacecraft Team highlighting its strengths as well as identifying lessons learned during the primary and extended missions, two memory anomalies, and other hardships encountered due to incomplete knowledge of spacecraft states. Ideas for future state tracking tools that require minimal user interaction and are integrated into the Ground Data System will also be discussed.
Huang, Chengqiang; Yang, Youchang; Wu, Bo; Yu, Weize
2018-06-01
The sub-pixel arrangement of the RGBG panel and that of an image in RGB format are different, so an algorithm that converts RGB to RGBG is urgently needed to display an RGB image on an RGBG panel. However, in published studies of this conversion, the information loss remains large even though color fringing artifacts are weakened. In this paper, an RGB-to-RGBG conversion algorithm with adaptive weighting factors based on edge detection and minimal square error (EDMSE) is proposed. The main points of innovation include the following: (1) edge detection is first applied to distinguish image details with serious color fringing artifacts from image details that are prone to be lost in the process of RGB-RGBG conversion; (2) for image details with serious color fringing artifacts, a weighting factor of 0.5 is applied to weaken the color fringing artifacts; and (3) for image details that are prone to be lost in the process of RGB-RGBG conversion, a special mechanism to minimize square error is proposed. The experiments show that color fringing artifacts are slightly reduced by EDMSE, and the MSE values of the processed image are 19.6% and 7% smaller than those of images processed by the direct assignment and weighting factor algorithms, respectively. The proposed algorithm is implemented on a field programmable gate array to enable image display on the RGBG panel.
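As a rough illustration of the edge-adaptive weighting idea, the Python sketch below maps an RGB image onto an alternating R/B chroma layout. The edge threshold, the non-edge weight, and the layout itself are invented stand-ins for the paper's EDMSE machinery, not the published algorithm.

```python
import numpy as np

def rgb_to_rgbg(img, edge_thresh=30):
    """Toy RGB->RGBG mapping: each output pixel keeps G and alternates
    R/B, borrowing the missing chroma from the horizontal neighbour with
    a weighting factor. A strong horizontal gradient (a crude edge test)
    triggers w = 0.5 to suppress colour fringing; the non-edge weight
    0.75 is a placeholder for the per-pixel MSE-minimizing choice."""
    h, w_, _ = img.shape
    out = np.zeros((h, w_, 2), dtype=np.float32)  # [chroma, G] per pixel
    img = img.astype(np.float32)
    for y in range(h):
        for x in range(w_):
            c = 0 if x % 2 == 0 else 2            # keep R on even, B on odd
            xn = min(x + 1, w_ - 1)               # horizontal neighbour
            grad = abs(img[y, x, c] - img[y, xn, c])
            w = 0.5 if grad > edge_thresh else 0.75
            out[y, x, 0] = w * img[y, x, c] + (1 - w) * img[y, xn, c]
            out[y, x, 1] = img[y, x, 1]           # G passes through
    return out

img = (np.random.rand(4, 6, 3) * 255).astype(np.uint8)
print(rgb_to_rgbg(img).shape)   # (4, 6, 2): one chroma + one G per pixel
```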
Chancey, Eric T; Bliss, James P; Yamani, Yusuke; Handley, Holly A H
2017-05-01
This study provides a theoretical link between trust and the compliance-reliance paradigm. We propose that for trust mediation to occur, the operator must be presented with a salient choice, and there must be an element of risk for dependence. Research suggests that false alarms and misses affect dependence via two independent processes, hypothesized as trust in signals and trust in nonsignals. These two trust types manifest in categorically different behaviors: compliance and reliance. Eighty-eight participants completed a primary flight task and a secondary signaling system task. Participants evaluated their trust according to the informational bases of trust: performance, process, and purpose. Participants were in a high- or low-risk group. Signaling systems varied by reliability (90%, 60%) within subjects and error bias (false alarm prone, miss prone) between subjects. False-alarm rate affected compliance but not reliance. Miss rate affected reliance but not compliance. Mediation analyses indicated that trust mediated the relationship between false-alarm rate and compliance. Bayesian mediation analyses favored evidence indicating trust did not mediate miss rate and reliance. Conditional indirect effects indicated that factors of trust mediated the relationship between false-alarm rate and compliance (i.e., purpose) and reliance (i.e., process) but only in the high-risk group. The compliance-reliance paradigm is not the reflection of two types of trust. This research could be used to update training and design recommendations that are based upon the assumption that trust causes operator responses regardless of error bias.
Using Block-local Atomicity to Detect Stale-value Concurrency Errors
NASA Technical Reports Server (NTRS)
Artho, Cyrille; Havelund, Klaus; Biere, Armin
2004-01-01
Data races do not cover all kinds of concurrency errors. This paper presents a data-flow-based technique to find stale-value errors, which are not found by low-level and high-level data race algorithms. Stale values denote copies of shared data where the copy is no longer synchronized. The algorithm to detect such values works as a consistency check that does not require any assumptions or annotations of the program. It has been implemented as a static analysis in JNuke. The analysis is sound and requires only a single execution trace if implemented as a run-time checking algorithm. Being based on an analysis of Java bytecode, it encompasses the full program semantics, including arbitrarily complex expressions. Related techniques are more complex and more prone to over-reporting.
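A hedged Python analogue of the stale-value pattern this analysis targets: every access to the shared variable is individually lock-protected, so no low-level data race exists, yet a copy of the shared data escapes its critical section and updates can be lost.

```python
import threading

balance = 100
lock = threading.Lock()

def withdraw(amount):
    global balance
    # Each access is lock-protected, so data-race detectors stay silent.
    # But the local copy escapes the critical section: by the time it is
    # written back it may be stale, and concurrent withdrawals are lost.
    with lock:
        local = balance          # block-local copy of shared data
    new = local - amount         # computed outside any critical section
    with lock:
        balance = new            # write-back of a possibly stale value

threads = [threading.Thread(target=withdraw, args=(10,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # may print a value above 50: some withdrawals were lost
```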
Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials
Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.
2013-01-01
Background Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data is generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates, and demonstrated that audit data could be used to eliminate bias. Purpose In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest. Methods We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of estimators is studied in simulations. Results When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors and in settings where treatment is not randomized, standard treatment effect estimates will be biased. And in all settings, parameter estimates for the original, error-prone covariates will be biased. Treatment and covariate effect estimates can be corrected by incorporating audit data using either the multiple imputation or moment-based approaches. Bias, precision, and coverage of confidence intervals improve as the audit size increases. Limitations The extent of bias and the performance of methods depend on the extent and nature of the error as well as the size of the audit. This work only considers methods for the linear model. Settings much different than those considered here need further study. Conclusions In randomized trials with continuous outcomes and treatment assignment independent of data errors, standard analyses of treatment effects will be unbiased and are recommended. However, if treatment assignment is correlated with data errors or other covariates, naive analyses may be biased. In these settings, and when covariate effects are of interest, approaches for incorporating audit results should be considered. PMID:22848072
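The central claim here — randomized treatment effects stay unbiased while error-prone covariate effects are biased — can be checked with a small simulation. The coefficients and noise levels below are arbitrary choices for illustration, not taken from the paper; the attenuation factor follows the classical measurement-error result var(x)/(var(x)+var(u)).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
treat = rng.integers(0, 2, n)             # randomized treatment assignment
x_true = rng.normal(0, 1, n)              # true covariate
y = 2.0 * treat + 1.5 * x_true + rng.normal(0, 1, n)

x_err = x_true + rng.normal(0, 1, n)      # error-prone recorded covariate

# OLS with an intercept, treatment, and the error-prone covariate
X = np.column_stack([np.ones(n), treat, x_err])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta)  # treatment ~2.0 (unbiased); covariate ~0.75 (attenuated from 1.5)
```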
Closing the Achievement Gap as Addressed in Student Support Programs
ERIC Educational Resources Information Center
Gordon, Vincent Hoover Adams, Jr.
2012-01-01
This research will focus on three components: (1) factors contributing to the achievement gap, (2) common errors made by policy makers with regard to school reform, and (3) recommendations to educators, policy makers, and parents on closing the achievement gap through results-based student support programs. Examples of each of the three components…
An Ongoing Program of Radial Velocities of Nearby Stars
NASA Astrophysics Data System (ADS)
Sperauskas, J.; Boyle, R. P.; Harlow, J.; Jahreiss, H.; Upgren, A. R.
2003-12-01
The lists of stars found by Vyssotsky at the McCormick Observatory and the Fourth Edition of the Catalog of Nearby Stars (CNS4) complement each other. Each was limited in a different way, but together they can be used to evaluate sources of systematic error in either of them. The lists of Vyssotsky comprise almost 900 stars brighter than a limiting visual magnitude of about 11.5, and thus form a magnitude-limited sample. The CNS4 includes all stars believed to be within 25 parsecs of the Sun, and thus forms a distance-limited group. Magnitude-limited samples are prone to the Malmquist bias, by which stars in a given magnitude range may average spuriously brighter than stars within the distance range appropriate for the mean distance modulus. The CNS4 stars may be subject to a slight Lutz-Kelker effect. This also requires a correction that depends mainly on the ratios of the standard errors in the distances to the stars to the distances themselves. This is a status report on a survey seeking completeness in the six dynamical properties (positions along the three orthogonal axes, and their first time-derivatives). Parallax, proper motion, and radial velocity are the stellar properties required for this information and, as is frequently the case among sets of faint stars, the radial velocities are not always available. We seek to obtain radial velocities for a full dynamical picture of more than one thousand nearby stars, of which some two-thirds have been observed. It would be most desirable to follow up with age-related measures for all stars.
Dell, Gary S.; Martin, Nadine; Schwartz, Myrna F.
2010-01-01
Lexical access in language production, and particularly pathologies of lexical access, are often investigated by examining errors in picture naming and word repetition. In this article, we test a computational approach to lexical access, the two-step interactive model, by examining whether the model can quantitatively predict the repetition-error patterns of 65 aphasic subjects from their naming errors. The model's characterizations of the subjects' naming errors were taken from the companion paper to this one (Schwartz, Dell, N. Martin, Gahl & Sobel, 2006), and their repetition was predicted from the model on the assumption that naming involves two error-prone steps, word and phonological retrieval, whereas repetition only creates errors in the second of these steps. A version of the model in which lexical-semantic and lexical-phonological connections could be independently lesioned was generally successful in predicting repetition for the aphasics. An analysis of the few cases in which model predictions were inaccurate revealed the role of input phonology in the repetition task. PMID:21085621
Portable and Error-Free DNA-Based Data Storage.
Yazdi, S M Hossein Tabatabaei; Gabrys, Ryan; Milenkovic, Olgica
2017-07-10
DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.
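For orientation, here is a toy Python encoder/decoder showing two generic ingredients named above: 2 bits per nucleotide and an address prefix for random access. The format is invented for illustration; the real pipeline's constrained codes (e.g., avoiding homopolymers that nanopore sequencers read unreliably) and deletion-correcting codes are omitted.

```python
# Toy DNA-storage codec: address prefix + 2-bit-per-base payload.
BASES = "ACGT"

def encode(payload: bytes, address: int, addr_len: int = 8) -> str:
    # addr_len bases of address (2 bits each), then the payload bytes
    bits = f"{address:0{2 * addr_len}b}" + "".join(f"{b:08b}" for b in payload)
    return "".join(BASES[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2))

def decode(strand: str, addr_len: int = 8):
    bits = "".join(f"{BASES.index(c):02b}" for c in strand)
    address = int(bits[:2 * addr_len], 2)
    body = bits[2 * addr_len:]
    payload = bytes(int(body[i:i + 8], 2) for i in range(0, len(body), 8))
    return address, payload

s = encode(b"hi", address=3)
print(s, decode(s))  # address 3 and b'hi' recovered
```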
Toyoshima, Mitsuo; Maeoka, Yukinori; Kawahara, Hitoshi; Maegaki, Yoshihiro; Ohno, Kousaku
2006-11-01
We report 10 cases of pulmonary atelectasis diagnosed by chest computed tomography in patients with neurological or muscular disease. Atelectasis was frequently seen in hypotonic patients who could not roll over on their own. The atelectases were located mostly in the dorsal bronchopulmonary segments, adjacent to the heart or diaphragm. Atelectasis diminished in two patients after they became able to roll themselves over. Gravity-related lung compression by the heart and intra-abdominal organs in the persistent supine position can cause pulmonary atelectasis in patients with neurological or muscular disease who cannot roll over under their own power. To confirm that the prone position reduces compression of the lungs, chest computed tomography was performed in both the supine and prone positions in three patients. Sagittal images with three-dimensional computed tomographic reconstruction revealed significant sternad displacements of the heart and caudal displacements of the dorsal portion of the diaphragm in the prone position compared with the supine position. The prone position, motor exercises for rolling over, and biphasic cuirass ventilation are effective in reducing gravity-related lung compression. Some patients with intellectual disabilities were also able to cooperate in chest physiotherapy. Chest physiotherapy is useful in preventing atelectasis in patients with neurological or muscular disease.
Spatial calibration of an optical see-through head mounted display
Gilson, Stuart J.; Fitzgibbon, Andrew W.; Glennerster, Andrew
2010-01-01
We present here a method for calibrating an optical see-through Head Mounted Display (HMD) using techniques usually applied to camera calibration (photogrammetry). Using a camera placed inside the HMD to take pictures simultaneously of a tracked object and features in the HMD display, we could exploit established camera calibration techniques to recover both the intrinsic and extrinsic properties of the HMD (width, height, focal length, optic centre and principal ray of the display). Our method gives low re-projection errors and, unlike existing methods, involves no time-consuming and error-prone human measurements, nor any prior estimates about the HMD geometry. PMID:18599125
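A minimal sketch of the photogrammetric core using OpenCV's standard checkerboard calibration. In the paper, the imaged features come from a tracked object and points drawn on the HMD display rather than a checkerboard, and the file names below are placeholders.

```python
import cv2
import numpy as np

pattern = (9, 6)                      # inner corners of the calibration target
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in ["view0.png", "view1.png", "view2.png"]:   # assumed image files
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        obj_points.append(objp)
        img_points.append(corners)

# Recovers the intrinsics (focal length, optic centre) and per-view
# extrinsics; rms is the re-projection error the paper reports as low.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("re-projection error:", rms)
print("intrinsic matrix:\n", K)
```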
Samsiah, A; Othman, Noordin; Jamshed, Shazia; Hassali, Mohamed Azmi; Wan-Mohaina, W M
2016-12-01
Reporting and analysing data on medication errors (MEs) is important and contributes to a better understanding of the error-prone environment. This study aims to examine the characteristics of errors submitted to the National Medication Error Reporting System (MERS) in Malaysia. A retrospective review of reports received from 1 January 2009 to 31 December 2012 was undertaken, using descriptive statistics. A total of 17,357 reported MEs were reviewed. The majority of errors came from public-funded hospitals. Near misses accounted for 86.3% of the errors, and the majority of errors (98.1%) had no harmful effects on the patients. Prescribing contributed more than three-quarters of the overall errors (76.1%). Pharmacists detected and reported the majority of errors (92.1%). Erroneous dosage or strength of medicine (30.75%) was the leading type of error, whilst cardiovascular drugs (25.4%) were the most common category involved. MERS provides rich information on the characteristics of reported MEs. The low contribution to reporting from healthcare facilities other than government hospitals, and from non-pharmacists, requires further investigation. Thus, a feasible approach to promote MERS among healthcare providers in both the public and private sectors needs to be formulated and strengthened. Preventive measures to minimise MEs should be directed at improving prescribing competency among the fallible prescribers identified.
Recognizing and managing errors of cognitive underspecification.
Duthie, Elizabeth A
2014-03-01
James Reason describes cognitive underspecification as incomplete communication that creates a knowledge gap. Errors occur when an information mismatch occurs in bridging that gap with a resulting lack of shared mental models during the communication process. There is a paucity of studies in health care examining this cognitive error and the role it plays in patient harm. The goal of the following case analyses is to facilitate accurate recognition, identify how it contributes to patient harm, and suggest appropriate management strategies. Reason's human error theory is applied in case analyses of errors of cognitive underspecification. Sidney Dekker's theory of human incident investigation is applied to event investigation to facilitate identification of this little recognized error. Contributory factors leading to errors of cognitive underspecification include workload demands, interruptions, inexperienced practitioners, and lack of a shared mental model. Detecting errors of cognitive underspecification relies on blame-free listening and timely incident investigation. Strategies for interception include two-way interactive communication, standardization of communication processes, and technological support to ensure timely access to documented clinical information. Although errors of cognitive underspecification arise at the sharp end with the care provider, effective management is dependent upon system redesign that mitigates the latent contributory factors. Cognitive underspecification is ubiquitous whenever communication occurs. Accurate identification is essential if effective system redesign is to occur.
A study on suppressing transmittance fluctuations for air-gapped Glan-type polarizing prisms
NASA Astrophysics Data System (ADS)
Zhang, Chuanfa; Li, Dailin; Zhu, Huafeng; Li, Chuanzhi; Jiao, Zhiyong; Wang, Ning; Xu, Zhaopeng; Wang, Xiumin; Song, Lianke
2018-05-01
Light intensity transmittance is a key parameter in the design of polarizing prisms, but its experimental curves as a function of spatial incident angle sometimes present periodic fluctuations. Here, we propose a novel method for completely suppressing these fluctuations by setting a glued error angle in the air gap of Glan-Taylor prisms. The proposal consists of an accurate formula for the intensity transmittance of Glan-Taylor prisms, a numerical simulation and a contrast experiment on Glan-Taylor prisms analyzing the causes of the fluctuations, and a simple method for accurately measuring the glued error angle. The results indicate that when the glued error angle is set larger than the critical angle for a given polarizing prism, the fluctuations can be completely suppressed and a smooth intensity transmittance curve obtained. In addition, the critical angle in the air gap for suppressing the fluctuations decreases as the beam spot size increases. This method has the advantage of placing less demand on the prism position in optical systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, X; Yang, F
Purpose: Knowing MLC leaf positioning error over the course of treatment would be valuable for treatment planning, QA design, and patient safety. The objective of the current study was to quantify the MLC positioning accuracy for VMAT delivery of head and neck treatment plans. Methods: A total of 837 MLC log files were collected from 14 head and neck cancer patients undergoing full-arc VMAT treatment on one Varian Trilogy machine. The actual and planned leaf gaps were extracted from the retrieved MLC log files. For a given patient, the leaf gap error percentage (LGEP), defined as the ratio of the actual leaf gap over the planned, was evaluated for each leaf pair at all the gantry angles recorded over the course of the treatment. Statistics describing the distribution of the largest LGEP (LLGEP) of the 60 leaf pairs, including the maximum, minimum, mean, kurtosis, and skewness, were evaluated. Results: For the 14 studied patients, the PTVs were located in the tonsil, base of tongue, larynx, supraglottis, nasal cavity, and thyroid gland, with volumes ranging from 72.0 cm³ to 602.0 cm³. The identified LLGEP differed between patients, ranging from 183.9% to 457.7% with a mean of 368.6%. For the majority of the patients, the LLGEP distributions peaked at non-zero positions and showed no obvious dependence on gantry rotation. Kurtosis and skewness, with minimum/maximum of 66.6/217.9 and 6.5/12.6, respectively, suggested a relatively peaked yet right-skewed leaf error distribution pattern. Conclusion: The results indicate that the pattern of MLC leaf gap error differs between patients with lesions located at similar anatomic sites. Understanding the systemic mechanisms underlying these observed error patterns necessitates examining more patient-specific plan parameters in a large patient cohort setting.
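Assuming the planned and actual leaf gaps have already been parsed out of the log files, the reported statistics reduce to a few array operations. The following Python sketch uses synthetic stand-in data, since the vendor log format is not described here.

```python
import numpy as np
from scipy.stats import kurtosis, skew

# Stand-in arrays of shape (n_control_points, 60): one column per leaf
# pair, one row per recorded gantry angle. Real values would come from
# the vendor-specific MLC log parser, which is not shown.
rng = np.random.default_rng(0)
planned = rng.uniform(5, 50, (500, 60))
actual = planned * rng.normal(1.0, 0.05, planned.shape)

lgep = 100.0 * actual / planned          # leaf gap error percentage
llgep = lgep.max(axis=0)                 # largest LGEP per leaf pair

print("max/min LLGEP:", llgep.max(), llgep.min())
print("mean:", llgep.mean())
print("kurtosis:", kurtosis(llgep), "skewness:", skew(llgep))
```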
Sarah M. Pinto; Yvette K. Ortega
2016-01-01
Many systems are prone to both exotic plant invasion and frequent natural disturbances. Native species richness can buffer the effects of invasion or disturbance when imposed in isolation, but it is largely unknown whether richness provides substantial resistance against invader impact in the face of disturbance. We experimentally examined how disturbance (...
Can butterflies evade fire? Pupa location and heat tolerance in fire prone habitats of Florida
USDA-ARS?s Scientific Manuscript database
The imperiled frosted elfin butterfly, Callophrys irus Godart, is restricted to frequently disturbed habitats where its larval host plants, Lupinus perennis L. and Baptisia tinctoria (L.) R. Br. occur. C. irus pupae are noted to reside in both leaf litter and soil, which may allow them to escape dir...
Frequent methodological errors in clinical research.
Silva Aycaguer, L C
2018-03-07
Several errors that are frequently present in clinical research are listed, discussed, and illustrated. A distinction is made between an "error" arising from ignorance or neglect and one stemming from a lack of integrity on the part of researchers, although it is recognized and documented that it is not easy to establish which is which in a given case. The work does not intend to make an exhaustive inventory of such problems, but focuses on those that, while frequent, are usually less evident or less prominent in the various lists of this type of problem that have been published. We have chosen to develop in detail the examples that illustrate the problems identified, instead of offering a list of errors accompanied by a superficial description of their characteristics. Copyright © 2018 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.
Hochman, Eldad Yitzhak; Orr, Joseph M; Gehring, William J
2014-02-01
Cognitive control in the posterior medial frontal cortex (pMFC) is formulated in models that emphasize adaptive behavior driven by a computation evaluating the degree of difference between 2 conflicting responses. These functions are manifested by an event-related brain potential component coined the error-related negativity (ERN). We hypothesized that the ERN represents a regulative rather than evaluative pMFC process, exerted over the error motor representation, expediting the execution of a corrective response. We manipulated the motor representations of the error and the correct response to varying degrees. The ERN was greater when 1) the error response was more potent than when the correct response was more potent, 2) more errors were committed, 3) fewer and slower corrections were observed, and 4) the error response shared fewer motor features with the correct response. In their current forms, several prominent models of the pMFC cannot be reconciled with these findings. We suggest that a prepotent, unintended error is prone to reach the manual motor processor responsible for response execution before a nonpotent, intended correct response. In this case, the correct response is a correction and its execution must wait until the error is aborted. The ERN may reflect pMFC activity that aimed to suppress the error.
Comprehensive analysis of a medication dosing error related to CPOE.
Horsky, Jan; Kuperman, Gilad J; Patel, Vimla L
2005-01-01
This case study of a serious medication error demonstrates the necessity of a comprehensive methodology for the analysis of failures in interaction between humans and information systems. The authors used a novel approach to analyze a dosing error related to computer-based ordering of potassium chloride (KCl). The method included a chronological reconstruction of events and their interdependencies from provider order entry usage logs, semistructured interviews with involved clinicians, and interface usability inspection of the ordering system. Information collected from all sources was compared and evaluated to understand how the error evolved and propagated through the system. In this case, the error was the product of faults in interaction among human and system agents that methods limited in scope to their distinct analytical domains would not identify. The authors characterized errors in several converging aspects of the drug ordering process: confusing on-screen laboratory results review, system usability difficulties, user training problems, and suboptimal clinical system safeguards that all contributed to a serious dosing error. The results of the authors' analysis were used to formulate specific recommendations for interface layout and functionality modifications, suggest new user alerts, propose changes to user training, and address error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.
Pesko, Kendra N; Fitzpatrick, Kelly A; Ryan, Elizabeth M; Shi, Pei-Yong; Zhang, Bo; Lennon, Niall J; Newman, Ruchi M; Henn, Matthew R; Ebel, Gregory D
2012-05-25
Most RNA viruses exist in their hosts as a heterogeneous population of related variants. Due to error-prone replication, mutants are constantly generated that may differ in individual fitness from the population as a whole. Here we characterize three WNV isolates that contain, along with full-length genomes, mutants with large internal deletions in structural and nonstructural protein-coding regions. The isolates were all obtained from lorikeets that died from WNV at the Rio Grande Zoo in Albuquerque, NM, between 2005 and 2007. The deletions are approximately 2 kb, in frame, and result in the elimination of the complete envelope and portions of the prM and NS-1 proteins. In Vero cell culture, these internally deleted WNV genomes function as defective interfering particles, reducing the production of full-length virus when introduced at high multiplicities of infection. In mosquitoes, the shortened WNV genomes reduced infection and dissemination rates, and virus titers overall, and were not detected in legs or salivary secretions at 14 or 21 days post-infection. In mice, inoculation with internally deleted genomes did not attenuate pathogenesis relative to full-length or infectious clone-derived virus, and shortened genomes were not detected in mice at the time of death. These observations provide evidence that large deletions may occur within flavivirus populations more frequently than has generally been appreciated and suggest that they impact population phenotype minimally. Additionally, our findings suggest that highly similar mutants may frequently occur in particular vertebrate hosts. Copyright © 2012 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Murphy, Jeremy W.; Foxe, John J.; Molholm, Sophie
2016-01-01
The ability to attend to one among multiple sources of information is central to everyday functioning. Just as central is the ability to switch attention among competing inputs as the task at hand changes. Such processes develop surprisingly slowly, such that even into adolescence, we remain slower and more error prone at switching among tasks…
Real-time monitoring of clinical processes using complex event processing and transition systems.
Meinecke, Sebastian
2014-01-01
Dependencies between tasks in clinical processes are often complex and error-prone. Our aim is to describe a new approach for the automatic derivation of clinical events identified via the behaviour of IT systems using Complex Event Processing. Furthermore we map these events on transition systems to monitor crucial clinical processes in real-time for preventing and detecting erroneous situations.
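A minimal sketch of the transition-system side of this approach: derived events are replayed against an allowed-transition relation, and disallowed or incomplete runs are flagged. The event names and process model are invented examples, not taken from the paper.

```python
# Allowed transitions of a toy drug-administration process model:
# (current state, incoming event) pairs that are permitted.
ALLOWED = {
    ("ordered", "prepared"),
    ("prepared", "administered"),
    ("administered", "documented"),
}

def monitor(events, start="ordered", final="documented"):
    """Replay a derived event stream against the transition system,
    yielding alerts for disallowed transitions and incomplete runs."""
    state = start
    for ev in events:
        if (state, ev) in ALLOWED:
            state = ev
        else:
            yield f"violation: '{ev}' not allowed after '{state}'"
    if state != final:
        yield f"process incomplete: stopped in state '{state}'"

# e.g. documentation logged before the administration event arrived
for alert in monitor(["prepared", "documented", "administered"]):
    print(alert)
```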
"Truth be told" - Semantic memory as the scaffold for veridical communication.
Hayes, Brett K; Ramanan, Siddharth; Irish, Muireann
2018-01-01
Theoretical accounts placing episodic memory as central to constructive and communicative functions neglect the role of semantic memory. We argue that the decontextualized nature of semantic schemas largely supersedes the computational bottleneck and error-prone nature of episodic memory. Rather, neuroimaging and neuropsychological evidence of episodic-semantic interactions suggest that an integrative framework more accurately captures the mechanisms underpinning social communication.
Automated lattice data generation
NASA Astrophysics Data System (ADS)
Ayyar, Venkitesh; Hackett, Daniel C.; Jay, William I.; Neil, Ethan T.
2018-03-01
The process of generating ensembles of gauge configurations (and measuring various observables over them) can be tedious and error-prone when done "by hand". In practice, most of this procedure can be automated with the use of a workflow manager. We discuss how this automation can be accomplished using Taxi, a minimal Python-based workflow manager built for generating lattice data. We present a case study demonstrating this technology.
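As a generic illustration of what such a workflow manager automates, here is a toy dependency-resolving task runner in Python. It sketches the general idea — configuration-generation steps chained in sequence, with measurement tasks depending on the configurations they analyse — and is not Taxi's actual API.

```python
from collections import deque

class Task:
    def __init__(self, name, deps=(), action=lambda: None):
        self.name, self.deps, self.action = name, list(deps), action

def run(tasks):
    """Run tasks in any order consistent with their dependencies by
    requeueing tasks whose dependencies are not yet satisfied."""
    done, queue = set(), deque(tasks)
    while queue:
        t = queue.popleft()
        if all(d in done for d in t.deps):
            t.action()
            done.add(t.name)
        else:
            queue.append(t)      # requeue until dependencies finish

# Each gauge configuration depends on the previous one in the Markov
# chain; each measurement depends on the configuration it analyses.
gen = [Task(f"cfg{i}", deps=[f"cfg{i-1}"] if i else [],
            action=lambda i=i: print(f"generate cfg{i}")) for i in range(3)]
meas = [Task(f"meas{i}", deps=[f"cfg{i}"],
             action=lambda i=i: print(f"measure cfg{i}")) for i in range(3)]
run(meas + gen)   # submission order does not matter
```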
ERIC Educational Resources Information Center
Frings, Christian; Spence, Charles
2011-01-01
Negative priming (NP) refers to the finding that people's responses to probe targets previously presented as prime distractors are usually slower and more error prone than to unrepeated stimuli. In a typical NP experiment, each probe target is accompanied by a distractor. It is an accepted, albeit puzzling, finding that the NP effect depends on…
An abstract specification language for Markov reliability models
NASA Technical Reports Server (NTRS)
Butler, R. W.
1985-01-01
Markov models can be used to compute the reliability of virtually any fault tolerant system. However, the process of delineating all of the states and transitions in a model of complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high level language is described in a nonformal manner and illustrated by example.
An abstract language for specifying Markov reliability models
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
1986-01-01
Markov models can be used to compute the reliability of virtually any fault tolerant system. However, the process of delineating all of the states and transitions in a model of complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high level language is described in a nonformal manner and illustrated by example.
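To illustrate what an abstract model definition ultimately compiles down to, the following Python sketch solves a hand-delineated model of a tiny two-unit redundant system. The states and rates are invented for illustration; the point is that once states and transitions are specified, reliability follows mechanically from the generator matrix.

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-3, 1e-1      # per-hour failure (lam) and repair (mu) rates
# Generator matrix Q over states {0: both up, 1: one failed, 2: down};
# each row sums to zero, and system failure (state 2) is absorbing.
Q = np.array([[-2 * lam,      2 * lam,  0.0],
              [      mu, -(mu + lam),   lam],
              [     0.0,          0.0,  0.0]])

p0 = np.array([1.0, 0.0, 0.0])        # start with both units up
t = 1000.0                            # mission time in hours
p_t = p0 @ expm(Q * t)                # state probabilities at time t
print("reliability at t:", 1.0 - p_t[2])
```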
ERIC Educational Resources Information Center
Foreman, David; Morton, Stephanie; Ford, Tamsin
2009-01-01
Background: The clinical diagnosis of ADHD is time-consuming and error-prone. Secondary care referral results in long waiting times, but primary care staff may not provide reliable diagnoses. The Development And Well-Being Assessment (DAWBA) is a standardised assessment for common child mental health problems, including attention…
Measuring Diameters Of Large Vessels
NASA Technical Reports Server (NTRS)
Currie, James R.; Kissel, Ralph R.; Oliver, Charles E.; Smith, Earnest C.; Redmon, John W., Sr.; Wallace, Charles C.; Swanson, Charles P.
1990-01-01
Computerized apparatus produces accurate results quickly. Apparatus measures diameter of tank or other large cylindrical vessel, without prior knowledge of exact location of cylindrical axis. Produces plot of inner circumference, estimate of true center of vessel, data on radius, diameter of best-fit circle, and negative and positive deviations of radius from circle at closely spaced points on circumference. Eliminates need for time-consuming and error-prone manual measurements.
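The best-fit-circle step can be reproduced with an algebraic least-squares (Kasa) fit. The following Python sketch is a generic illustration of that computation, not the apparatus' actual software.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit: writes the circle equation as
    x^2 + y^2 = p*x + q*y + c and solves for (p, q, c) linearly, giving
    centre (p/2, q/2) and radius sqrt(c + cx^2 + cy^2)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    (p, q, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = p / 2, q / 2
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# Simulated circumference measurements of a vessel of radius 10,
# centred off-origin, with small measurement noise.
theta = np.linspace(0, 2 * np.pi, 200)
x = 4.0 + 10.0 * np.cos(theta) + np.random.normal(0, 0.01, 200)
y = -1.0 + 10.0 * np.sin(theta) + np.random.normal(0, 0.01, 200)
cx, cy, r = fit_circle(x, y)
print(f"centre ({cx:.3f}, {cy:.3f}), diameter {2 * r:.3f}")
dev = np.hypot(x - cx, y - cy) - r      # radial deviations from the circle
print("deviation range:", dev.min(), dev.max())
```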
Experimental investigation of observation error in anuran call surveys
McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.
2010-01-01
Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys. © 2010 The Wildlife Society.
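The positive bias warned of here is easy to reproduce in a few lines: with even a modest false-positive rate, the naive "occupied if ever detected" estimator overshoots true occupancy. The rates below are illustrative choices, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
sites, visits = 1000, 5
psi, p_det, p_fp = 0.4, 0.5, 0.05   # occupancy, detection, false-positive

occupied = rng.random(sites) < psi
# Per-visit detections: true-positive rate at occupied sites,
# false-positive rate at unoccupied ones.
det = rng.random((sites, visits)) < np.where(occupied[:, None], p_det, p_fp)

naive = det.any(axis=1).mean()      # site counted occupied if ever detected
print(f"true occupancy {psi:.2f}, naive estimate {naive:.2f}")
# expected naive estimate ~0.52: a substantial positive bias
```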
Mental representation of symbols as revealed by vocabulary errors in two bonobos (Pan paniscus).
Lyn, Heidi
2007-10-01
Error analysis has been used in humans to detect implicit representations and categories in language use. The present study utilizes the same technique to report on mental representations and categories in symbol use from two bonobos (Pan paniscus). These bonobos have been shown in published reports to comprehend English at the level of a two-and-a-half year old child and to use a keyboard with over 200 visuographic symbols (lexigrams). In this study, vocabulary test errors from over 10 years of data revealed auditory, visual, and spatio-temporal generalizations (errors were more likely items that looked like sounded like, or were frequently associated with the sample item in space or in time), as well as hierarchical and conceptual categorizations. These error data, like those of humans, are a result of spontaneous responding rather than specific training and do not solely depend upon the sample mode (e.g. auditory similarity errors are not universally more frequent with an English sample, nor were visual similarity errors universally more frequent with a photograph sample). However, unlike humans, these bonobos do not make errors based on syntactical confusions (e.g. confusing semantically unrelated nouns), suggesting that they may not separate syntactical and semantic information. These data suggest that apes spontaneously create a complex, hierarchical, web of representations when exposed to a symbol system.
Low-energy fracture of posterolateral tibial plateau: treatment by a posterolateral prone approach.
Yu, Guang-Rong; Xia, Jiang; Zhou, Jia-Qian; Yang, Yun-Feng
2012-05-01
Most posterolateral tibial plateau fractures are caused by low-energy injury. The posterior fracture fragment cannot be exposed and reduced well through traditional approaches. The aim of this study was to review the results of surgical treatment of this kind of fracture using a posterolateral approach with the patient in the prone position. The low-energy posterolateral fracture is defined as one in which the main part of the articular depression or split fragment is limited to the posterior half of the lateral column. Direct reduction and buttress plate fixation through the posterolateral prone approach was applied in all patients. In our series, 15 of 132 (11.4%) patients with tibial plateau fractures were identified as having low-energy posterolateral fractures. Clinical outcomes were available for 14 of the 15 patients through phone interviews and chart reviews. Mean follow-up was 35.1 months (range: 24-48 months). All patients had anatomic or good reductions (≤ 2 mm step/gap). Average range of motion was 0.7 degrees to 123.2 degrees (range: 5-110 degrees to 0-140 degrees). The complications were limited to one superficial wound infection, two slight flexion contractures, and five implant removals. The average modified Hospital for Special Surgery knee score was 93.4 (range: 86-100). The posterolateral prone approach provides excellent visualization, which can facilitate reduction and posterior buttress plate fixation for low-energy posterolateral tibial plateau fractures, and shows encouraging results. Level of evidence: V, therapeutic study.
Andrew D. Richardson; David Y. Hollinger
2007-01-01
Missing values in any data set create problems for researchers. The process by which missing values are replaced, and the data set is made complete, is generally referred to as imputation. Within the eddy flux community, the term "gap filling" is more commonly applied. A major challenge is that random errors in measured data result in uncertainty in the gap-...
Zarian, D. A.; Peter, S. A.; Lee, S.; Kleinfeld, M.
1989-01-01
A retrospective study of 81 patients with dementia in a long-term care facility was conducted to determine the causes and frequency of acute hospitalization and the cause of death in the patients who succumbed during the acute hospital admission. Pneumonia and urinary tract infections were the most frequent causes of acute hospitalization; septicemia and respiratory failure were the most frequent causes of death. These results suggest that patients with dementia are prone to acquire life-threatening infections. Preventive measures to decrease the incidence of these complications are discussed. PMID:2500533
Environmental characteristics of annual pico/nanophytoplankton blooms along the Qinhuangdao Coast
NASA Astrophysics Data System (ADS)
Cao, Xihua; Yu, Zhiming; Wu, Zaixing; Cheng, Fangjin; He, Liyan; Yuan, Yongquan; Song, Xiuxian; Zhang, Jianle; Zhang, Yongfeng; Zhang, Wanlei
2018-03-01
Blooms of some pico/nanophytoplankton have occurred frequently along the Qinhuangdao coast since 2009, and it is necessary to identify the critical environmental factors inducing them. In this study, variations in the physical and nutrient characteristics of the seawater were analyzed following the development of local blooms in 2013. The local environmental characteristics were also compared with those of the Changjiang River estuary, China, and the Long Island estuaries in the USA, which are also prone to blooms of particular algal species. In Qinhuangdao, the local water temperature varied seasonally and rose above 15°C in early summer 2013, coincident with the water discoloration. The salinity was above 28, with a variation range of <3 throughout the year. Our results suggest that the physical conditions of the Qinhuangdao coastal area were suitable for the explosive proliferation of certain pico/nanophytoplankton, e.g., Aureococcus anophagefferens. The water supporting the bloom was not seriously eutrophic, but there were relatively high concentrations of reduced nitrogen (especially ammonium), which acted as an important nitrogen source for the pico/nanophytoplankton bloom. There was also a large gap between total nitrogen (TN) and dissolved inorganic nitrogen (DIN). Although the phosphate concentration was relatively low, there was no evidence of phosphorus limitation of pico/nanophytoplankton growth during bloom events.
Sea-level Rise Increases the Frequency of Nuisance Flooding in Coastal Regions
NASA Astrophysics Data System (ADS)
Moftakhari Rostamkhani, H.; Aghakouchak, A.; Sanders, B. F.; Feldman, D.; Sweet, W.; Matthew, R.; Luke, A.
2015-12-01
Global warming-driven sea-level rise (SLR) poses a serious threat to population and assets in flood-prone coastal zones over the next century. The rate of SLR has accelerated in recent decades and is expected to increase further based on current trajectories of anthropogenic activities and greenhouse gas emissions. Over the 20th century, an increase in the frequency of nuisance (minor) flooding has been reported due to the reduced gap between tidal datum and flood stage. Nuisance flooding (NF), however non-destructive, causes public inconvenience, business interruption, and substantial economic losses due to impacts such as road closures and degradation of infrastructure. It also portends an increased risk of severe floods. Here we report substantial increases in NF along the coasts of the United States due to SLR over the past decades. We then take the projected SLR under the least and most extreme representative concentration pathways (RCP 2.6 and RCP 8.5) to estimate the increase in NF in the near-term (2030) and mid-term (2050) future. The results suggest that projected SLR will cause up to two-fold more frequent NF by 2050, compared with the 20th century. The projected increase in NF will have significant socio-economic impacts and pose public health risks, especially in rapidly urbanized coastal regions.
NASA Astrophysics Data System (ADS)
Favillier, Adrien; Lopez-Saez, Jérôme; Corona, Christophe; Trappmann, Daniel; Toe, David; Stoffel, Markus; Rovéra, Georges; Berger, Frédéric
2015-10-01
Long-term records of rockfalls have proven to be scarce and typically incomplete, especially in increasingly urbanized areas where inventories are largely absent and the risk associated with rockfall events rises proportionally with urbanization. On forested slopes, tree-ring analyses may help to fill this gap, as they have been demonstrated to provide annually resolved data on past rockfall activity over long periods. Yet the reconstruction of rockfall chronologies has been hampered in the past by the paucity of studies that include broadleaved tree species, which are, in fact, quite common in various rockfall-prone environments. In this study, we test the sensitivity of two common, yet unstudied, broadleaved species - Quercus pubescens Willd. (Qp) and Acer opalus Mill. (Ao) - to record rockfall impacts. The approach is based on a systematic mapping of trees and the counting of visible scars on the stem surface of both species. Data are presented from a site in the Vercors massif (French Alps) where rocks are frequently detached from Valanginian limestone and marl cliffs. We compare recurrence interval maps obtained from both species and from two different sets of tree structures (i.e., single trees vs. coppice stands) based on Cohen's κ coefficient and the mean absolute error. A total of 1230 scars were observed on the stem surface of 847 A. opalus and Q. pubescens trees. Both methods yield comparable results on the spatial distribution of relative rockfall activity, with similarly decreasing recurrence intervals downslope. Yet recurrence intervals vary significantly according to tree species and tree structure: the recurrence interval observed on the stem surface of Q. pubescens exceeds that of A. opalus by > 20 years in the lower part of the studied plot. Similarly, the recurrence interval map derived from A. opalus coppice stands, dominant at the stand scale, does not exhibit a clear spatial pattern. Differences between species may be explained by the bark thickness of Q. pubescens, which has been demonstrated to grow at twice the rate of that of A. opalus, thus constituting a mechanical barrier able to buffer low-energy rockfalls and prevent damage to the underlying tissues. The differences between tree structures are related to the clustered, coppice-specific spatial distribution of stems in clumps, which could, on the one hand, produce bigger gaps between clumps and in turn decrease the probability of impacts on trees from traveling blocks. On the other hand, the data also indicate that several scars on the bark of coppice stands may stem from the same impact and thus lead to an overestimation of rockfall activity.
Evolution of gossip-based indirect reciprocity on a bipartite network
Giardini, Francesca; Vilone, Daniele
2016-01-01
Cooperation can be supported by indirect reciprocity via reputation. Thanks to gossip, reputations are built and circulated and humans can identify defectors and ostracise them. However, the evolutionary stability of gossip is allegedly undermined by the fact that it is more error-prone than direct observation, whereas ostracism could be ineffective if the partner selection mechanism is not robust. The aim of this work is to investigate the conditions under which the combination of gossip and ostracism might support cooperation in groups of different sizes. We are also interested in exploring the extent to which errors in transmission might undermine the reliability of gossip as a mechanism for identifying defectors. Our results show that a large quantity of gossip is necessary to support cooperation, and that group structure can mitigate the effects of errors in transmission. PMID:27885256
Personality and the Gender Gap in Self-Employment: A Multi-Nation Study
Obschonka, Martin; Schmitt-Rodermund, Eva; Terracciano, Antonio
2014-01-01
What role does personality play in the pervasive gender gap in entrepreneurship across the globe? This two-study analysis focuses on self-employment in the working population and underlying gender differences in personality characteristics, thereby considering both single trait dimensions as well as a holistic, configural personality approach. Applying the five-factor model of personality, Study 1, our main study, investigates mediation models in the prediction of self-employment status utilizing self-reported personality data from large-scale longitudinal datasets collected in the U.S., Germany, the U.K., and Australia (total N = 28,762). Study 2 analyzes (observer-rated) Big Five data collected in 51 cultures (total N = 12,156) to take a more global perspective and to explore the pancultural universality of gender differences in entrepreneurial personality characteristics. Across the four countries investigated in Study 1, none of the five major dimensions of personality turned out to be a consistent and robust mediator. In contrast, the holistic, configural approach yielded consistent and robust mediation results. Across the four countries, males scored higher on an entrepreneurship-prone personality profile, which in turn predicted self-employment status. These results suggest that gender differences in the intra-individual configuration of personality traits contribute to the gender gap in entrepreneurship across the globe. With the restriction of limited representativeness, the data from Study 2 suggest that the gender difference in the entrepreneurship-prone personality profile (males score higher) is widespread across many cultures, but may not exist in all. The results are discussed with an emphasis on implications for research and practice, with a particular focus on the need for more complex models that incorporate the role of personality. PMID:25089706
Pelham, Sabra D
2011-03-01
English-acquiring children frequently make pronoun case errors, while German-acquiring children rarely do. Nonetheless, German-acquiring children frequently make article case errors. It is proposed that when child-directed speech contains a high percentage of case-ambiguous forms, case errors are common in child language; when percentages are low, case errors are rare. Input to English and German children was analyzed for the percentage of case-ambiguous personal pronouns on adult tiers of corpora from 24 English-acquiring and 24 German-acquiring children. Also analyzed for German was the percentage of case-ambiguous articles. Case-ambiguous pronouns averaged 63.3% in English, compared with 7.6% in German. The percentage of case-ambiguous articles in German was 77.0%. These percentages align with the children's errors reported in the literature. It appears children may be sensitive to levels of ambiguity such that low ambiguity may aid error-free acquisition, while high ambiguity may blind children to case distinctions, resulting in errors.
ADEPT, a dynamic next generation sequencing data error-detection program with trimming
Feng, Shihai; Lo, Chien-Chi; Li, Po-E; ...
2016-02-29
Illumina is the most widely used next generation sequencing technology and produces millions of short reads that contain errors. These sequencing errors constitute a major problem in applications such as de novo genome assembly, metagenomics analysis and single nucleotide polymorphism discovery. In this study, we present ADEPT, a dynamic error-detection method based on the quality scores of each nucleotide and its neighboring nucleotides, together with their positions within the read, which it compares to the position-specific quality score distribution of all bases within the sequencing run. This method greatly improves upon other available methods in terms of the true positive rate of error discovery without affecting the false positive rate, particularly within the middle of reads. We conclude that ADEPT is the only tool to date that dynamically assesses errors within reads by comparing position-specific and neighboring base quality scores with the distribution of quality scores for the dataset being analyzed. The result is a method that is less prone to position-dependent under-prediction, which is one of the most prominent issues in error prediction. The outcome is that ADEPT improves upon prior efforts in identifying true errors, primarily within the middle of reads, while reducing the false positive rate.
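A minimal sketch of the underlying idea, assuming a matrix of Phred quality scores, is shown below; this illustrates position-specific, neighborhood-aware error flagging, not the actual ADEPT implementation.

```python
# Hedged sketch: flag a base as a likely error when its quality, averaged
# with that of its immediate neighbours, falls well below the run's
# position-specific quality distribution. Threshold z_cut is an assumption.
import numpy as np

def flag_errors(quals, z_cut=-2.0):
    """quals: (n_reads, read_len) array of Phred scores for one run.
    Returns a boolean mask marking suspect bases."""
    mu = quals.mean(axis=0)                  # position-specific mean quality
    sd = quals.std(axis=0) + 1e-9            # position-specific spread
    z = (quals - mu) / sd                    # deviation from the run profile
    zpad = np.pad(z, ((0, 0), (1, 1)), mode='edge')
    znbr = (zpad[:, :-2] + zpad[:, 1:-1] + zpad[:, 2:]) / 3.0  # neighbourhood
    return znbr < z_cut                      # True where a base looks wrong

# usage: mask = flag_errors(np.asarray(phred_matrix, float)); flagged tails
# of reads can then be trimmed before assembly or SNP calling.
```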
Mackenzie, Colin F; Hu, Peter; Sen, Ayan; Dutton, Rick; Seebode, Steve; Floccare, Doug; Scalea, Tom
2008-01-01
Trauma triage errors are frequent and costly. What happens in pre-hospital care remains anecdotal because of the dual responsibility of treatment (resuscitation and stabilization) and documentation in a time-critical environment. Continuous pre-hospital vital signs waveforms and numerical trends were automatically collected in our study. Abnormalities of pulse oximeter oxygen saturation (< 95%) and validated heart rate (> 100/min) showed better prediction of injury severity, need for immediate blood transfusion, intra-abdominal surgery, tracheal intubation and chest tube insertion than Trauma Registry data or pre-hospital provider estimations. Automated means of data collection introduced the potential for more accurate and objective reporting of patient vital signs, helping to evaluate quality of care and to establish performance indicators and benchmarks. Addition of novel and existing non-invasive monitors and waveform analyses could make the pulse oximeter the decision aid of choice to improve trauma patient triage. PMID:18999022
When do people rely on affective and cognitive feelings in judgment? A review.
Greifeneder, Rainer; Bless, Herbert; Pham, Michel Tuan
2011-05-01
Although people have been shown to rely on feelings to make judgments, the conditions that moderate this reliance have not been systematically reviewed and conceptually integrated. This article addresses this gap by jointly reviewing moderators of the reliance on both subtle affective feelings and cognitive feelings of ease-of-retrieval. The review revealed that moderators of the reliance on affective and cognitive feelings are remarkably similar and can be grouped into five major categories: (a) the salience of the feelings, (b) the representativeness of the feelings for the target, (c) the relevance of the feelings to the judgment, (d) the evaluative malleability of the judgment, and (e) the level of processing intensity. Based on the reviewed evidence, it is concluded that the use of feelings as information is a frequent event and a generally sensible judgmental strategy rather than a constant source of error. Avenues for future research are discussed.
Modeling direct interband tunneling. I. Bulk semiconductors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Andrew, E-mail: pandrew@ucla.edu; Chui, Chi On; California NanoSystems Institute, University of California, Los Angeles, Los Angeles, California 90095
Interband tunneling is frequently studied using the semiclassical Kane model, despite uncertainty about its validity. Revisiting the physical basis of this formula, we find that it neglects coupling to other bands and underestimates transverse tunneling. As a result, significant errors can arise at low and high fields for small and large gap materials, respectively. We derive a simple multiband tunneling model to correct these defects analytically without arbitrary parameters. Through extensive comparison with band structure and quantum transport calculations for bulk InGaAs, InAs, and InSb, we probe the accuracy of the Kane and multiband formulas and establish the superiority of the latter. We also show that the nonlocal average electric field should be used when applying either of these models to nonuniform potentials. Our findings are important for efficient analysis and simulation of bulk semiconductor devices involving tunneling.
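For reference, the semiclassical Kane rate the abstract refers to is commonly written in the field-dependent form below; prefactor conventions vary between references, so the constants are indicative only:

$$ G_{\mathrm{Kane}}(F) = A\,F^{2}\exp\!\left(-\frac{B}{F}\right), \qquad A \propto \frac{q^{2}\sqrt{m_r}}{\hbar^{2}\sqrt{E_g}}, \qquad B \propto \frac{\sqrt{m_r}\,E_g^{3/2}}{q\hbar}, $$

where $F$ is the local electric field, $E_g$ the band gap and $m_r$ the reduced tunneling mass. The $E_g^{3/2}$ dependence inside the exponential helps explain why the formula's errors concentrate in different field regimes for small- and large-gap materials.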
A key for predicting postfire successional trajectories in black spruce stands of interior Alaska.
Jill F. Johnstone; Teresa N. Hollingsworth; F. Stuart Chapin
2008-01-01
Black spruce (Picea mariana (Mill.) B.S.P.) is the dominant forest cover type in interior Alaska and is prone to frequent, stand-replacing wildfires. Through impacts on tree recruitment, the degree of fire consumption of soil organic layers can act as an important determinant of whether black spruce forests regenerate to a forest composition similar...
Soil properties in fire-consumed log burnout openings in a Missouri oak savanna
Charles C. Rhoades; A. J. Meier; A. J. Rebertus
2004-01-01
Downed logs are known to increase species diversity in many forest ecosystems by increasing resource and structural complexity and by altering fire behavior in fire-prone ecosystems. In a frequently burned oak savanna in central Missouri, combustion of downed logs formed patches that have remained free of herbaceous vegetation for more than 3 years. To assess the...
R. Sollmann; Angela White; Gina Tarbill; Patricia Manley; Eric E. Knapp
2016-01-01
In the dry forests of the western United States frequent fires historically maintained a diversity of habitats in multiple seral stages. Over the past century, fire suppression and preferential harvest of large trees has led to a densification and homogenization of forests, making them more prone to larger and more severe wildfires. In response, fuel reduction...
Dietary Assessment in Food Environment Research
Kirkpatrick, Sharon I.; Reedy, Jill; Butler, Eboneé N.; Dodd, Kevin W.; Subar, Amy F.; Thompson, Frances E.; McKinnon, Robin A.
2015-01-01
Context: The existing evidence on food environments and diet is inconsistent, potentially due in part to heterogeneity in measures used to assess diet. The objective of this review, conducted in 2012–2013, was to examine measures of dietary intake utilized in food environment research. Evidence acquisition: Included studies were published from January 2007 through June 2012 and assessed relationships between at least one food environment exposure and at least one dietary outcome. Fifty-one articles were identified using PubMed, Scopus, Web of Knowledge, and PsycINFO; references listed in the papers reviewed and relevant review articles; and the National Cancer Institute's Measures of the Food Environment website. The frequency of the use of dietary intake measures and assessment of specific dietary outcomes was examined, as were patterns of results among studies using different dietary measures. Evidence synthesis: The majority of studies used brief instruments, such as screeners or one or two questions, to assess intake. Food frequency questionnaires were used in about a third of studies, one in ten used 24-hour recalls, and fewer than one in twenty used diaries. Little consideration of dietary measurement error was evident. Associations between the food environment and diet were more consistently in the expected direction in studies using less error-prone measures. Conclusions: There is a tendency toward the use of brief dietary assessment instruments with low cost and burden rather than more detailed instruments that capture intake with less bias. Use of error-prone dietary measures may lead to spurious findings and reduced power to detect associations. PMID:24355678
Guan, Yongtao; Li, Yehua; Sinha, Rajita
2011-01-01
In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
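For intuition, the classical method-of-moments correction for attenuation in simple linear regression is sketched below; the paper's estimators address the harder setting (heteroscedastic error, unknown distribution, no replicates), which this toy version does not handle.

```python
# Toy method-of-moments correction for attenuation when the covariate
# W = X + U is measured with error of *known* variance var_u. Illustrative
# only; it is not the paper's estimator.
import numpy as np

def mom_corrected_slope(w, y, var_u):
    w, y = np.asarray(w, float), np.asarray(y, float)
    beta_naive = np.cov(w, y, bias=True)[0, 1] / w.var()
    reliability = (w.var() - var_u) / w.var()   # estimated attenuation factor
    return beta_naive / reliability             # undo the attenuation

# e.g. with var_u = 0.5, the naive slope (biased toward zero) is scaled
# back up toward the true coefficient.
```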
Variations of Human DNA Polymerase Genes as Biomarkers of Prostate Cancer Progression
2013-07-01
[Fragment of a research progress report; report-documentation-page boilerplate (security classification fields; responsible person: USAMRMC) removed. Recoverable content:] keywords "… discovery, cancer genetics"; variations identified (including all single and double mutant combinations of the Triple mutant), and some POLK mutants; discovery of a novel …; Athens, Greece, 07/10; Makridakis N., "Error-prone polymerase mutations and prostate cancer progression," COBRE/Cancer Genetics group seminar, Tulane.
The expanding polymerase universe.
Goodman, M F; Tippin, B
2000-11-01
Over the past year, the number of known prokaryotic and eukaryotic DNA polymerases has exploded. Many of these newly discovered enzymes copy aberrant bases in the DNA template over which 'respectable' polymerases fear to tread. The next step is to unravel their functions, which are thought to range from error-prone copying of DNA lesions, somatic hypermutation and avoidance of skin cancer, to restarting stalled replication forks and repairing double-stranded DNA breaks.
Error rates and resource overheads of encoded three-qubit gates
NASA Astrophysics Data System (ADS)
Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.
2017-10-01
A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (ccz), are equivalent options that are also more suited for implementing some quantum algorithms, for instance, those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical ccz with pieceable fault tolerance, a nontransversal method for implementing logical gates. We provide a comparison with a nonlocal magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and in certain regimes over magic states on the surface code. Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.
NASA Technical Reports Server (NTRS)
Lucas, S. H.; Scotti, S. J.
1989-01-01
The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final stage of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
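A minimal sketch of the two-bar truss problem posed as a nonlinear program is shown below, using scipy.optimize in place of SOL/CONMIN; all loads, dimensions, and material constants are illustrative assumptions.

```python
# Two-bar truss sized for minimum weight: design variables are tube diameter
# d and truss height H; constraints are yield stress and Euler buckling.
# All numbers are hypothetical, not from the report.
import numpy as np
from scipy.optimize import minimize

P = 150e3        # apex load [N] (assumed)
B = 0.75         # half-span [m] (assumed)
t = 0.003        # fixed tube wall thickness [m] (assumed)
E = 210e9        # Young's modulus [Pa]
rho = 7850.0     # density [kg/m^3]
sigma_y = 250e6  # allowable stress [Pa]

def members(x):
    d, H = x                       # design variables
    L = np.hypot(B, H)             # member length
    A = np.pi * d * t              # thin-walled tube cross-section
    sigma = P * L / (2.0 * H * A)  # axial stress from static equilibrium
    sigma_cr = np.pi**2 * E * (d**2 + t**2) / (8.0 * L**2)  # Euler buckling
    return L, A, sigma, sigma_cr

weight = lambda x: 2.0 * rho * members(x)[1] * members(x)[0]  # objective

cons = [
    {"type": "ineq", "fun": lambda x: sigma_y - members(x)[2]},        # yield
    {"type": "ineq", "fun": lambda x: members(x)[3] - members(x)[2]},  # buckling
]

res = minimize(weight, x0=[0.05, 1.0], bounds=[(0.01, 0.5), (0.1, 3.0)],
               constraints=cons, method="SLSQP")
print(res.x, res.fun)   # optimal (d, H) and minimum mass [kg]
```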
CTF Preprocessor User's Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avramova, Maria; Salko, Robert K.
2016-05-26
This document describes how a user should go about using the CTF pre-processor tool to create an input deck for modeling rod-bundle geometry in CTF. The tool was designed to generate input decks in a quick and less error-prone manner for CTF. The pre-processor is a completely independent utility, written in Fortran, that takes a reduced amount of input from the user. The information that the user must supply is basic information on bundle geometry, such as rod pitch, clad thickness, and axial location of spacer grids--the pre-processor takes this basic information and determines channel placement and connection information to be written to the input deck, which is the most time-consuming and error-prone segment of creating a deck. Creation of the model is also more intuitive, as the user can specify assembly and water-tube placement using visual maps instead of having to place them by determining channel/channel and rod/channel connections. As an example of the benefit of the pre-processor, a quarter-core model that contains 500,000 scalar-mesh cells was read into CTF from an input deck containing 200,000 lines of data. This 200,000 line input deck was produced automatically from a set of pre-processor decks that contained only 300 lines of data.
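The following is a hypothetical illustration of the kind of bookkeeping the pre-processor automates — enumerating subchannels on a square-pitch lattice and the lateral gap connections between them — not the actual Fortran tool:

```python
# For an n x n square-pitch rod lattice, coolant subchannels sit on the
# (n+1) x (n+1) grid of lattice corners; each channel connects laterally to
# its east and south neighbours through rod-to-rod gaps.
def build_channels(n_rods: int):
    n = n_rods + 1
    chan_id = {(i, j): i * n + j + 1 for i in range(n) for j in range(n)}
    gaps = []
    for (i, j), cid in chan_id.items():
        if j + 1 < n:                      # east neighbour
            gaps.append((cid, chan_id[(i, j + 1)]))
        if i + 1 < n:                      # south neighbour
            gaps.append((cid, chan_id[(i + 1, j)]))
    return chan_id, gaps

channels, gap_connections = build_channels(17)  # a 17x17 PWR-style assembly
print(len(channels), "channels,", len(gap_connections), "gap connections")
```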
Overconfidence across the psychosis continuum: a calibration approach.
Balzan, Ryan P; Woodward, Todd S; Delfabbro, Paul; Moritz, Steffen
2016-11-01
An 'overconfidence in errors' bias has been consistently observed in people with schizophrenia relative to healthy controls, however, the bias is seldom found to be associated with delusional ideation. Using a more precise confidence-accuracy calibration measure of overconfidence, the present study aimed to explore whether the overconfidence bias is greater in people with higher delusional ideation. A sample of 25 participants with schizophrenia and 50 non-clinical controls (25 high- and 25 low-delusion-prone) completed 30 difficult trivia questions (accuracy <75%); 15 'half-scale' items required participants to indicate their level of confidence for accuracy, and the remaining 'confidence-range' items asked participants to provide lower/upper bounds in which they were 80% confident the true answer lay within. There was a trend towards higher overconfidence for half-scale items in the schizophrenia and high-delusion-prone groups, which reached statistical significance for confidence-range items. However, accuracy was particularly low in the two delusional groups and a significant negative correlation between clinical delusional scores and overconfidence was observed for half-scale items within the schizophrenia group. Evidence in support of an association between overconfidence and delusional ideation was therefore mixed. Inflated confidence-accuracy miscalibration for the two delusional groups may be better explained by their greater unawareness of their underperformance, rather than representing genuinely inflated overconfidence in errors.
A Novel Way to Relate Ontology Classes
Choksi, Ami T.; Jinwala, Devesh C.
2015-01-01
The existing ontologies in the semantic web typically have anonymous union and intersection classes. The anonymous classes are limited in scope and may not be part of the whole inference process. The tools, namely Pellet, Jena, and Protégé, interpret collection classes as (a) equivalent/subclasses of a union class and (b) superclasses of an intersection class. As a result, there is a possibility that the tools will produce error-prone inference results for relations, namely, sub-, union, intersection, and equivalent relations, and those dependent on these relations, namely, complement. Verifying whether a class is the complement of another involves utilization of sub- and equivalent relations. Motivated by the same, we (i) refine the test data set of the conference ontology by adding named union and intersection classes and (ii) propose a match algorithm to (a) calculate a corrected subclasses list, (b) correctly relate intersection and union classes with their collection classes, and (c) match union, intersection, sub-, complement, and equivalent classes in a proper sequence, to avoid error-prone match results. We compare the results of our algorithms with those of a candidate reasoner, namely, the Pellet reasoner. To the best of our knowledge, ours is a unique attempt at establishing a novel way to relate ontology classes. PMID:25984560
2014-06-01
incremental increase in contamination and pollution, construction of unsafe structures in flood-prone areas, adverse effects of income gap and poverty... them claim that climate change, contamination, ozone depletion, and biodiversity loss are major factors, others believe that the real causes for the... the international arena, as well as the host country, while the second one has very little physical protection and is often faced with food, water
ERIC Educational Resources Information Center
Bernstein, Stuart E.
2009-01-01
A descriptive study of vowel spelling errors made by children first diagnosed with dyslexia (n = 79) revealed that phonological errors, such as "bet" for "bat", outnumbered orthographic errors, such as "bate" for "bait". These errors were more frequent in nonwords than words, suggesting that lexical context helps with vowel spelling. In a second…
The Sources of Error in Spanish Writing.
ERIC Educational Resources Information Center
Justicia, Fernando; Defior, Sylvia; Pelegrina, Santiago; Martos, Francisco J.
1999-01-01
Determines the pattern of errors in Spanish spelling. Analyzes and proposes a classification system for the errors made by children in the initial stages of the acquisition of spelling skills. Finds that the diverse forms of only 20 Spanish words produce 36% of the spelling errors in Spanish, and that substitution is the most frequent type of error. (RS)
Method and apparatus for controlling electrode gap during vacuum consumable arc remelting
Fisher, R.W.; Maroone, J.P.; Tipping, D.W.; Zanner, F.J.
During vacuum consumable arc remelting the electrode gap between a consumable electrode and a pool of molten metal is difficult to control. The present invention monitors drop shorts by detecting a decrease in the voltage between the consumable electrode and molten pool. The drop shorts and their associated voltage reductions occur as repetitive pulses which are closely correlated to the electrode gap. Thus, the method and apparatus of the present invention controls electrode gap based upon drop shorts detected from the monitored anode-cathode voltage. The number of drop shorts is accumulated, and each time the number of drop shorts reaches a predetermined number, the average period between drop shorts is calculated from this predetermined number and the time in which this number is accumulated. This average drop short period is used in a drop short period electrode gap model which determines the actual electrode gap from the drop short period. The actual electrode gap is then compared with a desired electrode gap which is selected to produce optimum operating conditions, and the velocity of the consumable electrode is varied based upon the gap error. The consumable electrode is driven according to any prior art system at this velocity. In the preferred embodiment, a microprocessor system is utilized to perform the necessary calculations and further to monitor the duration of each drop short. If any drop short exceeds a preset duration period, the consumable electrode is rapidly retracted a predetermined distance to prevent bonding of the consumable electrode to the molten remelt.
Drop short control of electrode gap
Fisher, Robert W.; Maroone, James P.; Tipping, Donald W.; Zanner, Frank J.
1986-01-01
During vacuum consumable arc remelting the electrode gap between a consumable electrode and a pool of molten metal is difficult to control. The present invention monitors drop shorts by detecting a decrease in the voltage between the consumable electrode and molten pool. The drop shorts and their associated voltage reductions occur as repetitive pulses which are closely correlated to the electrode gap. Thus, the method and apparatus of the present invention controls electrode gap based upon drop shorts detected from the monitored anode-cathode voltage. The number of drop shorts is accumulated, and each time the number of drop shorts reaches a predetermined number, the average period between drop shorts is calculated from this predetermined number and the time in which this number is accumulated. This average drop short period is used in a drop short period electrode gap model which determines the actual electrode gap from the drop short period. The actual electrode gap is then compared with a desired electrode gap which is selected to produce optimum operating conditions, and the velocity of the consumable electrode is varied based upon the gap error. The consumable electrode is driven according to any prior art system at this velocity. In the preferred embodiment, a microprocessor system is utilized to perform the necessary calculations and further to monitor the duration of each drop short. If any drop short exceeds a preset duration period, the consumable electrode is rapidly retracted a predetermined distance to prevent bonding of the consumable electrode to the molten remelt.
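A hypothetical sketch of the control loop both patents describe follows; the period-to-gap model coefficients, thresholds, and gain are illustrative assumptions, not values from the patents.

```python
# Sketch of drop-short-based electrode gap control: accumulate shorts, turn
# the average short period into a gap estimate, and command electrode
# velocity from the gap error; retract fast if a short lasts too long.
class DropShortGapController:
    def __init__(self, n_shorts=20, gap_setpoint_mm=8.0, kp=0.5,
                 max_short_s=0.5, retract_mm=5.0):
        self.n_shorts = n_shorts          # shorts per gap estimate
        self.setpoint = gap_setpoint_mm
        self.kp = kp                      # proportional gain [mm/s per mm]
        self.max_short_s = max_short_s    # longest tolerable short
        self.retract_mm = retract_mm
        self.count, self.t_first = 0, None

    def gap_from_period(self, period_s):
        # assumed linear period -> gap model (placeholder constants)
        return 2.0 + 12.0 * period_s

    def on_drop_short(self, t_s, duration_s):
        """Returns (velocity_mm_s, retract_mm); velocity None = hold."""
        if duration_s > self.max_short_s:   # short too long: avoid bonding
            return 0.0, self.retract_mm     # rapid fixed retraction
        if self.t_first is None:
            self.t_first = t_s
        self.count += 1
        if self.count < self.n_shorts:
            return None, 0.0                # keep accumulating
        period = (t_s - self.t_first) / self.n_shorts   # average period
        self.count, self.t_first = 0, None
        gap_error = self.gap_from_period(period) - self.setpoint
        return self.kp * gap_error, 0.0     # drive electrode per gap error
```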
NASA Astrophysics Data System (ADS)
Bezan, Scott; Shirani, Shahram
2006-12-01
To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.
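To illustrate the flavor of such RD-optimized assignment, here is a hedged greedy sketch that upgrades whichever coding unit buys the largest expected-distortion reduction per extra parity bit; the option tables are hypothetical, and the actual paper optimizes over RCPC code rates per tile/packet.

```python
# Greedy unequal-error-protection allocation under a total bit budget.
# units: per-unit option lists [(bits, expected_distortion), ...] sorted by
# increasing bits; assumes diminishing returns (convex RD trade-off).
import heapq

def allocate_protection(units, budget_bits):
    choice = [0] * len(units)                  # start at weakest protection
    spent = sum(u[0][0] for u in units)
    heap = []
    for i, opts in enumerate(units):
        if len(opts) > 1:
            gain = (opts[0][1] - opts[1][1]) / (opts[1][0] - opts[0][0])
            heapq.heappush(heap, (-gain, i))   # best distortion-per-bit first
    while heap:
        _, i = heapq.heappop(heap)
        opts, j = units[i], choice[i]
        extra = opts[j + 1][0] - opts[j][0]
        if spent + extra > budget_bits:
            continue                           # cannot afford this upgrade
        spent += extra
        choice[i] = j + 1
        if choice[i] + 1 < len(opts):          # queue the next upgrade step
            g = ((opts[choice[i]][1] - opts[choice[i] + 1][1])
                 / (opts[choice[i] + 1][0] - opts[choice[i]][0]))
            heapq.heappush(heap, (-g, i))
    return choice                              # chosen option index per unit
```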
Types of diagnostic errors in neurological emergencies in the emergency department.
Dubosh, Nicole M; Edlow, Jonathan A; Lefton, Micah; Pope, Jennifer V
2015-02-01
Neurological emergencies often pose diagnostic challenges for emergency physicians because these patients often present with atypical symptoms and standard imaging tests are imperfect. Misdiagnosis occurs due to a variety of errors. These can be classified as knowledge gaps, cognitive errors, and systems-based errors. The goal of this study was to describe these errors through review of quality assurance (QA) records. This was a retrospective pilot study of patients with neurological emergency diagnoses that were missed or delayed at one urban, tertiary academic emergency department. Cases meeting inclusion criteria were identified through review of QA records. Three emergency physicians independently reviewed each case and determined the type of error that led to the misdiagnosis. Proportions, confidence intervals, and a reliability coefficient were calculated. During the study period, 1168 cases were reviewed. Forty-two cases were found to include a neurological misdiagnosis and twenty-nine were determined to be the result of an error. The distribution of error types was as follows: knowledge gap 45.2% (95% CI 29.2, 62.2), cognitive error 29.0% (95% CI 15.9, 46.8), and systems-based error 25.8% (95% CI 13.5, 43.5). Cerebellar strokes were the most common type of stroke misdiagnosed, accounting for 27.3% of missed strokes. All three error types contributed to the misdiagnosis of neurological emergencies. Misdiagnosis of cerebellar lesions and erroneous radiology resident interpretations of neuroimaging were the most common mistakes. Understanding the types of errors may enable emergency physicians to develop possible solutions and avoid them in the future.
Prescribed fire and its impacts on ecosystem services in the UK.
Harper, Ashleigh R; Doerr, Stefan H; Santin, Cristina; Froyd, Cynthia A; Sinnadurai, Paul
2018-05-15
The impacts of vegetation fires on ecosystems are complex and varied, affecting a range of important ecosystem services. Fire has the potential to affect the physicochemical and ecological status of water systems, alter several aspects of the carbon cycle (e.g. above- and below-ground carbon storage) and trigger changes in vegetation type and structure. Globally, fire is an essential part of land management in fire-prone regions in, e.g., Australia, the USA and some Mediterranean countries to mitigate the likelihood of catastrophic wildfires and sustain healthy ecosystems. In the less fire-prone UK, fire has a long history of usage in management for enhancing the productivity of heather, red grouse and sheep. This distinctly different socioeconomic tradition of burning underlies some of the controversy in recent decades in the UK around the use of fire. Negative public opinion and opposition from popular media have highlighted concerns around the detrimental impacts burning can have on the health and diversity of upland habitats. It is evident that there are many gaps in the current knowledge around the environmental impacts of prescribed burning in less fire-prone regions (e.g. the UK). Land owners and managers require a greater level of certainty about the advantages and disadvantages of prescribed burning in comparison to other techniques to better inform management practices. This paper addresses this gap by providing a critical review of published work and future research directions related to the impacts of prescribed fire on three key aspects of ecosystem services: (i) water quality, (ii) carbon dynamics and (iii) habitat composition and structure (biodiversity). Its overall aims are to provide guidance based on the current state-of-the-art for researchers, land owners, managers and policy makers on the potential effects of the use of burning and to inform the wider debate about the place of fire in modern conservation and land management in humid temperate ecosystems. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Advanced Outage and Control Center: Strategies for Nuclear Plant Outage Work Status Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregory Weatherby
The research effort is a part of the Light Water Reactor Sustainability (LWRS) Program. LWRS is a research and development program sponsored by the Department of Energy, performed in close collaboration with industry to provide the technical foundations for licensing and managing the long-term, safe and economical operation of current nuclear power plants. The LWRS Program serves to help the US nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. The Outage Control Center (OCC) Pilot Project was directed at carrying out the applied research for development and pilot of technology designed to enhance safe outage and maintenance operations, improve human performance and reliability, increase overall operational efficiency, and improve plant status control. Plant outage management is a high priority concern for the nuclear industry from cost and safety perspectives. Unfortunately, many of the underlying technologies supporting outage control are the same as those used in the 1980's. They depend heavily upon large teams of staff, multiple work and coordination locations, and manual administrative actions that require large amounts of paper. Previous work in human reliability analysis suggests that many repetitive tasks, including paper work tasks, may have a failure rate of 1.0E-3 or higher (Gertman, 1996). With between 10,000 and 45,000 subtasks being performed during an outage (Gomes, 1996), the opportunity for human error of some consequence is a realistic concern. Although a number of factors exist that can make these errors recoverable, reducing and effectively coordinating the sheer number of tasks to be performed, particularly those that are error prone, has the potential to enhance outage efficiency and safety. Additionally, outage management requires precise coordination of work groups that do not always share similar objectives. Outage managers are concerned with schedule and cost, union workers are concerned with performing work that is commensurate with their trade, and support functions (safety, quality assurance, radiological controls, etc.) are concerned with performing the work within the plant's controls and procedures. Approaches to outage management should be designed to increase the active participation of work groups and managers in making decisions that close the gap between competing objectives and reduce the potential for error and process inefficiency.
NASA Astrophysics Data System (ADS)
Minnett, P. J.; Liu, Y.; Kilpatrick, K. A.
2016-12-01
Sea-surface temperature (SST) measurements by satellites in the northern hemisphere high latitudes confront several difficulties. Year-round prevalent clouds, effects near ice edges, and the relatively small difference between SST and low-level cloud temperatures lead to a significant loss of infrared observations despite the more frequent polar satellite overpasses. Recent research (Liu and Minnett, 2016) identified sampling issues in the Level 3 NASA MODIS SST products when 4 km observations are aggregated into global grids at different time and space scales, particularly in the Arctic, where a binary decision cloud mask designed for global data is often overly conservative at high latitudes and results in many gaps and missing data. This undersampling of some Arctic regions results in a warm bias in Level 3 products, likely because warmer surface temperatures, more distant from the ice edge, are identified more frequently as cloud free. Here we present an improved method for cloud detection in the Arctic using a majority vote from an ensemble of four classifiers trained with an alternating decision tree (ADT) algorithm (Freund and Mason 1999; Pfahringer et al. 2001). This new cloud classifier increases sampling of clear pixels by 50% in several regions and generally produces cooler monthly average SST fields in the ice-free Arctic, while still retaining the same error characteristics at 1 km resolution relative to in situ observations. SST time series of 12 years of MODIS (Aqua and Terra) and the more recent VIIRS sensors are compared, and the improvements in errors and uncertainties resulting from better cloud screening for Level 3 gridded products are assessed and summarized.
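A minimal sketch of the majority-vote ensemble idea is below; alternating decision trees are not available in scikit-learn, so ordinary decision trees stand in for the four classifiers, and the clear-sky label convention (1 = clear) is an assumption.

```python
# Bootstrap-trained ensemble with a strict majority vote for "clear sky".
from sklearn.tree import DecisionTreeClassifier
import numpy as np

def train_ensemble(X, y, n_members=4, seed=0):
    """X: (n_samples, n_features) array; y: 0/1 labels (1 = clear)."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        idx = rng.choice(len(X), size=len(X), replace=True)  # bootstrap
        members.append(DecisionTreeClassifier(max_depth=6).fit(X[idx], y[idx]))
    return members

def predict_clear(members, X):
    votes = np.stack([m.predict(X) for m in members])
    return votes.sum(axis=0) >= (len(members) // 2 + 1)  # strict majority
```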
Nascimento, Eduarda Helena Leandro; Gaêta-Araujo, Hugo; Andrade, Maria Fernanda Silva; Freitas, Deborah Queiroz
2018-01-21
The aims of this study are to identify the most frequent technical errors in endodontically treated teeth and to determine which root canals were most often associated with those errors, as well as to relate endodontic technical errors and the presence of coronal restorations with periapical status by means of cone-beam computed tomography images. Six hundred eighteen endodontically treated teeth (1146 root canals) were evaluated for the quality of their endodontic treatment and for the presence of coronal restorations and periapical lesions. Each root canal was classified according to dental groups, and the endodontic technical errors were recorded. Chi-squared tests and descriptive analyses were performed. Six hundred eighty root canals (59.3%) had periapical lesions. Maxillary molars and anterior teeth showed a higher prevalence of periapical lesions (p < 0.05). Endodontic treatment quality and coronal restoration were associated with periapical status (p < 0.05). Underfilling was the most frequent technical error in all root canals, except for the second mesiobuccal root canal of maxillary molars and the distobuccal root canal of mandibular molars, which were non-filled in 78.4 and 30% of the cases, respectively. There is a high prevalence of apical radiolucencies, which increased in the presence of poor coronal restorations, endodontic technical errors, and when both conditions were concomitant. Underfilling was the most frequent technical error, followed by non-homogeneous and non-filled canals. Evaluating endodontic treatment quality at the level of every single root canal serves to warn dental practitioners of the prevalence of technical errors that could be avoided with careful treatment planning and execution.
Identifying medication error chains from critical incident reports: a new analytic approach.
Huckels-Baumgart, Saskia; Manser, Tanja
2014-10-01
Research into the distribution of medication errors usually focuses on isolated stages within the medication use process. Our study aimed to provide a novel process-oriented approach to medication incident analysis focusing on medication error chains. Our study was conducted at a 900-bed teaching hospital in Switzerland. All 1,591 medication errors reported from 2009 to 2012 were categorized using the Medication Error Index NCC MERP and the WHO Classification for Patient Safety Methodology. In order to identify medication error chains, each reported medication incident was allocated to the relevant stage of the hospital medication use process. Only 25.8% of the reported medication errors were detected before they propagated through the medication use process. The majority of medication errors (74.2%) formed an error chain encompassing two or more stages. The most frequent error chain comprised preparation up to and including medication administration (45.2%). "Non-consideration of documentation/prescribing" during drug preparation was the most frequent contributor to "wrong dose" during the administration of medication. Medication error chains provide important insights for detecting and stopping medication errors before they reach the patient. Existing and new safety barriers need to be extended to interrupt error chains and to improve patient safety. © 2014, The American College of Clinical Pharmacology.
Stereotype susceptibility narrows the gender gap in imagined self-rotation performance.
Wraga, Maryjane; Duncan, Lauren; Jacobs, Emily C; Helt, Molly; Church, Jessica
2006-10-01
Three studies examined the impact of stereotype messages on men's and women's performance of a mental rotation task involving imagined self-rotations. Experiment 1 established baseline differences between men and women; women made 12% more errors than did men. Experiment 2 found that exposure to a positive stereotype message enhanced women's performance in comparison with that of another group of women who received neutral information. In Experiment 3, men who were exposed to the same stereotype message emphasizing a female advantage made more errors than did male controls, and the magnitude of error was similar to that for women from Experiment 1. The results suggest that the gender gap in mental rotation performance is partially caused by experiential factors, particularly those induced by sociocultural stereotypes.
Abou-Elsaad, Tamer; Baz, Hemmat; Afsah, Omayma; Mansy, Alzahraa
2015-09-01
Even with early surgical repair, the majority of cleft palate children demonstrate articulation errors and have typical cleft palate speech. The aim was to determine the nature of articulation errors of Arabic consonants in Egyptian Arabic-speaking children with velopharyngeal insufficiency (VPI). Thirty Egyptian Arabic-speaking children with VPI due to cleft palate (whether primarily or secondarily repaired) were studied. Auditory perceptual assessment (APA) of the children's speech was conducted. Nasopharyngoscopy was done to assess the velopharyngeal port (VPP) movements while the child was repeating speech tasks. The Mansoura Arabic Articulation Test (MAAT) was performed to analyze the consonant articulation of these children. The most frequent type of articulatory error observed was substitution, more specifically, backing. Pharyngealization of anterior fricatives was the most frequent substitution, especially for the /s/ sound. The most frequent substituting sounds for other sounds were /ʔ/, followed by /k/ and /n/. Significant correlations were found between the degree of open nasality, VPP closure, and the articulation errors. On the other hand, the sounds (/ʔ/, /ħ/, /ʕ/, /n/, /w/, /j/) were normally articulated in the whole studied group. The determination of articulation errors in VPI children could guide therapists in designing appropriate speech therapy programs for these cases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Jiménez, Felipe; Monzón, Sergio; Naranjo, Jose Eugenio
2016-01-01
Vehicle positioning is a key factor for numerous information and assistance applications that are included in vehicles and for which satellite positioning is mainly used. However, this positioning process can result in errors and lead to measurement uncertainties. These errors come mainly from two sources: errors and simplifications of digital maps and errors in locating the vehicle. From that inaccurate data, the task of assigning the vehicle’s location to a link on the digital map at every instant is carried out by map-matching algorithms. These algorithms have been developed to fulfil that need and attempt to amend these errors to offer the user a suitable positioning. In this research; an algorithm is developed that attempts to solve the errors in positioning when the Global Navigation Satellite System (GNSS) signal reception is frequently lost. The algorithm has been tested with satisfactory results in a complex urban environment of narrow streets and tall buildings where errors and signal reception losses of the GPS receiver are frequent. PMID:26861320
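As a toy illustration of the basic geometric step in map matching — snapping a position fix to the closest point on candidate road links — consider the sketch below (planar coordinates assumed; the authors' algorithm adds logic for GNSS outages that this omits):

```python
# Point-to-segment map matching: project the fix onto every candidate link
# and keep the closest one. Inputs are 2D numpy arrays in planar coordinates.
import numpy as np

def project_to_segment(p, a, b):
    """Closest point to p on segment ab, and the squared distance to it."""
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab + 1e-12), 0.0, 1.0)
    q = a + t * ab
    return q, float((p - q) @ (p - q))

def map_match(fix, links):
    """links: list of (a, b) segment endpoint pairs. Returns (link_id, point)."""
    best = min((project_to_segment(fix, a, b) + (i,)
                for i, (a, b) in enumerate(links)),
               key=lambda r: r[1])
    matched_point, _, link_id = best
    return link_id, matched_point
```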
NASA Technical Reports Server (NTRS)
Borgia, Andrea; Spera, Frank J.
1990-01-01
This work discusses the propagation of errors for the recovery of the shear rate from wide-gap concentric cylinder viscometric measurements of non-Newtonian fluids. A least-square regression of stress on angular velocity data to a system of arbitrary functions is used to propagate the errors for the series solution to the viscometric flow developed by Krieger and Elrod (1953) and Pawlowski (1953) ('power-law' approximation) and for the first term of the series developed by Krieger (1968). A numerical experiment shows that, for measurements affected by significant errors, the first term of the Krieger-Elrod-Pawlowski series ('infinite radius' approximation) and the power-law approximation may recover the shear rate with equal accuracy as the full Krieger-Elrod-Pawlowski solution. An experiment on a clay slurry indicates that the clay has a larger yield stress at rest than during shearing, and that, for the range of shear rates investigated, a four-parameter constitutive equation approximates reasonably well its rheology. The error analysis presented is useful for studying the rheology of fluids such as particle suspensions, slurries, foams, and magma.
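For context, the inverse problem these series address is the classical wide-gap Couette relation (standard form; notation assumed here):

$$ \Omega = \frac{1}{2}\int_{\tau_c}^{\tau_b}\frac{\dot{\gamma}(\tau)}{\tau}\,d\tau, \qquad \tau_b=\frac{M}{2\pi R_b^{2}L}, $$

and for a power-law fluid ($\tau = K\dot{\gamma}^{n}$) the shear rate at the bob reduces to

$$ \dot{\gamma}_b = \frac{2\Omega/n}{1-(R_b/R_c)^{2/n}}, $$

the "power-law approximation" referred to above ($R_b$, $R_c$: bob and cup radii; $M$: torque; $L$: bob length; $\Omega$: rotation rate).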
Family Medicine in Ethiopia: Lessons from a Global Collaboration.
Evensen, Ann; Wondimagegn, Dawit; Zemenfes Ashebir, Daniel; Rouleau, Katherine; Haq, Cynthia; Ghavam-Rassoul, Abbas; Janakiram, Praseedha; Kvach, Elizabeth; Busse, Heidi; Conniff, James; Cornelson, Brian
2017-01-01
Building the capacity of local health systems to provide high-quality, self-sustaining medical education and health care is the central purpose for many global health partnerships (GHPs). Since 2001, our global partner consortium collaborated to establish Family Medicine in Ethiopia; the first Ethiopian family physicians graduated in February 2016. The authors, representing the primary Ethiopian, Canadian, and American partners in the GHP, identified obstacles, accomplishments, opportunities, errors, and observations from the years preceding residency launch and the first 3 years of the residency. Common themes were identified through personal reflection and presented as lessons to guide future GHPs. LESSON 1: Promote Family Medicine as a distinct specialty. LESSON 2: Avoid gaps, conflict, and redundancy in partner priorities and activities. LESSON 3: Building relationships takes time and shared experiences. LESSON 4: Communicate frequently to create opportunities for success. LESSON 5: Engage local leaders to build sustainable, long-lasting programs from the beginning of the partnership. GHPs can benefit individual participants, their organizations, and their communities served. Engaging with numerous partners may also result in challenges-conflicting expectations, misinterpretations, and duplication or gaps in efforts. The lessons discussed in this article may be used to inform GHP planning and interactions to maximize benefits and minimize mishaps. © Copyright 2017 by the American Board of Family Medicine.
Bernat, Edward M; Nelson, Lindsay D; Steele, Vaughn R; Gehring, William J; Patrick, Christopher J
2011-05-01
Externalizing is a broad construct that reflects propensity toward a variety of impulse control problems, including antisocial personality disorder and substance use disorders. Two event-related potential responses known to be reduced among individuals high in externalizing proneness are the P300, which reflects postperceptual processing of a stimulus, and the error-related negativity (ERN), which indexes performance monitoring based on endogenous representations. In the current study, the authors used a simulated gambling task to examine the relation between externalizing proneness and the feedback-related negativity (FRN), a brain response that indexes performance monitoring related to exogenous cues, which is thought to be highly related to the ERN. Time-frequency (TF) analysis was used to disentangle the FRN from the accompanying P300 response to feedback cues by parsing the overall feedback-locked potential into distinctive theta (4-7 Hz) and delta (<3 Hz) TF components. Whereas delta-P300 amplitude was reduced among individuals high in externalizing proneness, theta-FRN response was unrelated to externalizing. These findings suggest that in contrast with previously reported deficits in endogenously based performance monitoring (as indexed by the ERN), individuals prone to externalizing problems show intact monitoring of exogenous cues (as indexed by the FRN). The results also contribute to a growing body of evidence indicating that the P300 is attenuated across a broad range of task conditions in high-externalizing individuals.
A plasmid-based lacZα gene assay for DNA polymerase fidelity measurement
Keith, Brian J.; Jozwiakowski, Stanislaw K.; Connolly, Bernard A.
2013-01-01
A significantly improved DNA polymerase fidelity assay, based on a gapped plasmid containing the lacZα reporter gene in a single-stranded region, is described. Nicking at two sites flanking lacZα, and removing the excised strand by thermocycling in the presence of complementary competitor DNA, is used to generate the gap. Simple methods are presented for preparing the single-stranded competitor. The gapped plasmid can be purified, in high amounts and in a very pure state, using benzoylated-naphthoylated DEAE-cellulose, resulting in a low background mutation frequency (∼1 × 10⁻⁴). Two key parameters, the number of detectable sites and the expression frequency, necessary for measuring polymerase error rates have been determined. DNA polymerase fidelity is measured by gap filling in vitro, followed by transformation into Escherichia coli and scoring of blue/white colonies and converting the ratio to error rate. Several DNA polymerases have been used to fully validate this straightforward and highly sensitive system. PMID:23098700
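For reference, the conversion from colony counts to a polymerase error rate in lacZα-based assays is conventionally written as follows (symbols assumed here):

$$ \mathrm{ER} = \frac{f_{\mathrm{mut}} - f_{\mathrm{bg}}}{N_{\mathrm{det}} \times f_{\mathrm{expr}}}, $$

where $f_{\mathrm{mut}}$ is the observed mutant-colony fraction, $f_{\mathrm{bg}}$ the background mutation frequency, $N_{\mathrm{det}}$ the number of detectable sites, and $f_{\mathrm{expr}}$ the expression frequency — the latter two being the key parameters determined in this work.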
Voltage and pace-capture mapping of linear ablation lesions overestimates chronic ablation gap size.
O'Neill, Louisa; Harrison, James; Chubb, Henry; Whitaker, John; Mukherjee, Rahul K; Bloch, Lars Ølgaard; Andersen, Niels Peter; Dam, Høgni; Jensen, Henrik K; Niederer, Steven; Wright, Matthew; O'Neill, Mark; Williams, Steven E
2018-04-26
Conducting gaps in lesion sets are a major reason for failure of ablation procedures. Voltage mapping and pace-capture have been proposed for intra-procedural identification of gaps. We aimed to compare gap size measured acutely and chronically post-ablation to macroscopic gap size in a porcine model. Intercaval linear ablation was performed in eight Göttingen minipigs with a deliberate gap of ∼5 mm left in the ablation line. Gap size was measured by interpolating ablation contact force values between ablation tags and thresholding at a low force cut-off of 5 g. Bipolar voltage mapping and pace-capture mapping along the length of the line were performed immediately, and at 2 months, post-ablation. Animals were euthanized and gap sizes were measured macroscopically. Voltage thresholds to define scar were determined by receiver operating characteristic analysis as <0.56 mV (acutely) and <0.62 mV (chronically). Taking the macroscopic gap size as the gold standard, errors in gap measurements were determined for voltage, pace-capture, and ablation contact force maps. All modalities overestimated chronic gap size, by 1.4 ± 2.0 mm (ablation contact force map), 5.1 ± 3.4 mm (pace-capture), and 9.5 ± 3.8 mm (voltage mapping). Error in ablation contact force map gap measurements was significantly smaller than for voltage mapping (P = 0.003, Tukey's multiple comparisons test). Chronically, voltage mapping and pace-capture mapping overestimated macroscopic gap size by 11.9 ± 3.7 and 9.8 ± 3.5 mm, respectively. Bipolar voltage and pace-capture mapping overestimate the size of chronic gaps in linear ablation lesions. The most accurate estimation of chronic gap size was achieved by analysis of catheter-myocardium contact force during ablation.
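The contact-force gap measurement lends itself to a simple one-dimensional sketch: interpolate force between ablation tags and take the contiguous stretch below the 5 g cut-off as the gap. Tag positions and force values below are invented for illustration; the study interpolated over the mapped ablation line.

```python
# Hedged 1-D sketch of the contact-force gap measurement: interpolate
# per-lesion contact force along the line and take the longest contiguous
# run below a 5 g cut-off as the gap. Tag data are invented.
import numpy as np
from scipy.interpolate import interp1d

tag_pos_mm = np.array([0, 4, 8, 12, 16, 20, 24, 28])   # position along line
tag_force_g = np.array([12, 10, 9, 1, 2, 11, 13, 12])  # low force ~ gap

line = np.linspace(tag_pos_mm.min(), tag_pos_mm.max(), 1000)
force = interp1d(tag_pos_mm, tag_force_g, kind="linear")(line)

below = force < 5.0                                    # 5 g cut-off
edges = np.flatnonzero(np.diff(np.r_[0, below.astype(int), 0]))
runs = edges.reshape(-1, 2)                            # [start, end) of runs
lengths = (runs[:, 1] - runs[:, 0]) * (line[1] - line[0])
gap_mm = lengths.max() if lengths.size else 0.0
print(f"estimated gap: {gap_mm:.1f} mm")
```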
Adaptive constructive processes and the future of memory.
Schacter, Daniel L
2012-11-01
Memory serves critical functions in everyday life but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, and illusions. The article describes several types of memory errors that are produced by adaptive constructive processes and focuses in particular on the process of imagining or simulating events that might occur in one's personal future. Simulating future events relies on many of the same cognitive and neural processes as remembering past events, which may help to explain why imagination and memory can be easily confused. The article considers both pitfalls and adaptive aspects of future event simulation in the context of research on planning, prediction, problem solving, mind-wandering, prospective and retrospective memory, coping and positivity bias, and the interconnected set of brain regions known as the default network. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Gerlach, Kathy D.; Dornblaser, David W.; Schacter, Daniel L.
2013-01-01
People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterized as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b, young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test, participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2, younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterization as an adaptive constructive process. PMID:23560477
Reporting suicide attempts: consistency and its determinants in a large mental health study.
Eikelenboom, Merijn; Smit, Johannes H; Beekman, Aartjan T F; Kerkhof, Ad J F M; Penninx, Brenda W J H
2014-06-01
A lifetime history (LTH) of suicide attempts (SAs) is frequently assessed in mental health surveys. However, little is known about the reliability of assessing a LTH of SA. This study examined the consistency of reporting a LTH of SA, and its determinants, in a large cohort of persons with a history of depression and/or anxiety. Data are from the baseline and two-year assessments of the Netherlands Study of Depression and Anxiety. Persons with a Composite International Diagnostic Interview (CIDI)-based lifetime depressive and/or anxiety disorder (N = 1973) constitute the study sample. A LTH of SAs was assessed at baseline and at two-year follow-up. Of the persons who reported a LTH of SAs at either interview, more than one-third did not report this consistently at both interviews. Moreover, indications were found for more consistent reporting among persons with a higher number of SAs and among persons with current (severe) psychopathology as compared to those with remitted or less severe current psychopathology. Our results showed that even a topic as salient as a history of SAs is prone to reporting errors, and that current psychological state influences reporting of a LTH of SAs. Copyright © 2014 John Wiley & Sons, Ltd.
Quantifying sub-pixel urban impervious surface through fusion of optical and inSAR imagery
Yang, L.; Jiang, L.; Lin, H.; Liao, M.
2009-01-01
In this study, we explored the potential to improve urban impervious surface modeling and mapping with the synergistic use of optical and Interferometric Synthetic Aperture Radar (InSAR) imagery. We used a Classification and Regression Tree (CART)-based approach to test the feasibility and accuracy of quantifying Impervious Surface Percentage (ISP) using four spectral bands of SPOT 5 high-resolution geometric (HRG) imagery and three parameters derived from the European Remote Sensing (ERS)-2 Single Look Complex (SLC) SAR image pair. Validated by an independent ISP reference dataset derived from the 33 cm-resolution digital aerial photographs, results show that the addition of InSAR data reduced the ISP modeling error rate from 15.5% to 12.9% and increased the correlation coefficient from 0.71 to 0.77. Spatially, the improvement is especially noted in areas of vacant land and bare ground, which were incorrectly mapped as urban impervious surfaces when using the optical remote sensing data. In addition, the accuracy of ISP prediction using InSAR images alone is only marginally less than that obtained by using SPOT imagery. The finding indicates the potential of using InSAR data for frequent monitoring of urban settings located in cloud-prone areas.
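A hedged sketch of a CART-style ISP regression of the kind described, using scikit-learn's DecisionTreeRegressor on a synthetic stand-in for the stacked SPOT-plus-InSAR per-pixel feature matrix; the feature construction and response below are invented:

```python
# Hedged sketch of a CART-style impervious-surface-percentage model:
# regress ISP on per-pixel features (4 SPOT 5 bands + 3 InSAR-derived
# parameters). Arrays are synthetic stand-ins for the real imagery.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels = 5000
X = rng.random((n_pixels, 7))      # cols 0-3: SPOT bands, 4-6: InSAR params
isp = np.clip(100 * (0.5 * X[:, 2] + 0.3 * X[:, 5] +
                     0.1 * rng.standard_normal(n_pixels)), 0, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, isp, random_state=0)
cart = DecisionTreeRegressor(min_samples_leaf=50).fit(X_tr, y_tr)
pred = cart.predict(X_te)
print("correlation with reference ISP:", np.corrcoef(pred, y_te)[0, 1].round(2))
```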
Application of long-term microdialysis in circadian rhythm research
Borjigin, Jimo; Liu, Tiecheng
2008-01-01
Our laboratory has pioneered long-term microdialysis to monitor pineal melatonin secretion in living animals across multiple circadian cycles. There are numerous advantages of this approach for rhythm analysis: (1) we can precisely define melatonin onset and offset phases; (2) melatonin is a reliable and stable neuroendocrine output of the circadian clock (versus behavioral output which is sensitive to stress or other factors); (3) melatonin measurements can be performed extremely frequently, permitting high temporal resolution (10 min sampling intervals), which allows detection of slight changes in phase; (4) the measurements can be performed for more than four weeks, allowing perturbations of the circadian clock to be followed long-term in the same animals; (5) this is an automated process (microdialysis coupled with on-line HPLC analysis), which increases accuracy and bypasses the labor-intensive and error-prone manual handling of dialysis samples; and (6) our approach allows real-time investigation of circadian rhythm function and permits appropriate timely adjustments of experimental conditions. The longevity of microdialysis probes, the key to the success of this approach, depends at least in part on the methods of the construction and implantation of dialysis probes. In this article, we have detailed the procedures of construction and surgical implantation of microdialysis probes used currently in our laboratory, which are significantly improved from our previous methods. PMID:18045670
Gerlach, Kathy D; Dornblaser, David W; Schacter, Daniel L
2014-01-01
People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterised as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2 younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterisation as an adaptive constructive process.
Design and optimization of a portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Calore, Enrico; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele
The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPU processors, supporting a wide class of applications but delivering moderate computing performance, to many-core Graphics Processing Units (GPUs), exploiting aggressive data-parallelism and delivering higher performance for streaming computing applications. In this scenario, code portability (and performance portability) becomes necessary for easy maintainability of applications; this is very relevant in scientific computing, where code changes are very frequent and keeping different code versions aligned is tedious and error-prone. In this work, we present the design and optimization of a state-of-the-art production-level LQCD Monte Carlo application, using the directive-based OpenACC programming model. OpenACC abstracts parallel programming to a descriptive level, relieving programmers from specifying how codes should be mapped onto the target architecture. We describe the implementation of a code fully written in OpenACC, and show that we are able to target several different architectures, including state-of-the-art traditional CPUs and GPUs, with the same code. We also measure performance, evaluating the computing efficiency of our OpenACC code on several architectures, comparing with GPU-specific implementations and showing that a good level of performance portability can be reached.
Chang, C H; Hwang, C S; Fan, T C; Chen, K H; Pan, K T; Lin, F Y; Wang, C; Chang, L H; Chen, H H; Lin, M C; Yeh, S
1998-05-01
In this work, a 1 m long Sasaki-type elliptically polarizing undulator (EPU) prototype with 5.6 cm period length is used to examine the mechanical design feasibility as well as magnetic field performance. The magnetic field characteristics of the EPU5.6 prototype at various phase shifts and gap motion are described. The field errors from mechanical tolerances, magnet block errors, end field effects and phase/gap motion effects are analysed. The procedures related to correcting the field with the block position tuning, iron shimming and the trim blocks at both ends are outlined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, L; Yan, H; Jia, X
2014-06-01
Purpose: A moving blocker based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different parameters of the system design affect its performance in scatter estimation and image reconstruction accuracy. The goal of this work is to optimize the geometric design of the moving blocker system. Methods: In the moving blocker system, a blocker consisting of lead strips is inserted between the x-ray source and the imaging object and moves back and forth along the rotation axis during CBCT acquisition. A CT image of an anthropomorphic pelvic phantom was used in the simulation study. Scatter signal was simulated by Monte Carlo calculation with various combinations of the lead strip width and the gap between neighboring lead strips, ranging from 4 mm to 80 mm (projected at the detector plane). Scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. Scatter estimation accuracy was quantified as relative root mean squared error by comparing the interpolated scatter to the Monte Carlo simulated scatter. CBCT was reconstructed by total variation minimization from the unblocked region, under various combinations of the lead strip width and gap. Reconstruction accuracy in each condition was quantified by CT number error relative to a CBCT reconstructed from unblocked full projection data. Results: Scatter estimation error varied from 0.5% to 2.6% as the lead strip width and the gap varied from 4 mm to 80 mm. CT number error in the reconstructed CBCT images varied from 12 to 44. The highest reconstruction accuracy was achieved when the blocker lead strip width was 8 mm and the gap was 48 mm. Conclusions: Accurate scatter estimation can be achieved over a large range of combinations of lead strip width and gap. However, image reconstruction accuracy is greatly affected by the geometric design of the blocker.
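The interpolation step can be sketched in one dimension: sample a smooth scatter profile under the lead strips and cubic-spline interpolate across the unblocked gaps, scoring the result by relative RMSE. The profile below is synthetic; the strip and gap widths follow the near-optimal values reported above.

```python
# Hedged 1-D sketch of the blocked-region interpolation: scatter sampled
# under the lead strips is cubic-spline interpolated across the unblocked
# gaps and scored by relative RMSE against the "true" scatter.
import numpy as np
from scipy.interpolate import CubicSpline

u = np.linspace(0, 400, 401)                    # detector coordinate (mm)
true_scatter = 100 + 30 * np.sin(u / 60.0)      # slowly varying, synthetic

strip, gap = 8, 48                              # near-optimal widths (mm)
pitch = strip + gap
blocked = (u % pitch) < strip                   # samples under lead strips

est = CubicSpline(u[blocked], true_scatter[blocked])(u)
rel_rmse = np.sqrt(np.mean((est - true_scatter) ** 2)) / true_scatter.mean()
print(f"relative RMSE: {100 * rel_rmse:.2f}%")
```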
Reduction of earthquake risk in the united states: Bridging the gap between research and practice
Hays, W.W.
1998-01-01
Continuing efforts under the auspices of the National Earthquake Hazards Reduction Program are under way to improve earthquake risk assessment and risk management in earthquake-prone regions of Alaska, California, Nevada, Washington, Oregon, Arizona, Utah, Wyoming, and Idaho, the New Madrid and Wabash Valley seismic zones in the central United States, the southeastern and northeastern United States, Puerto Rico, Virgin Islands, Guam, and Hawaii. Geologists, geophysicists, seismologists, architects, engineers, urban planners, emergency managers, health care specialists, and policymakers are having to work at the margins of their disciplines to bridge the gap between research and practice and to provide a social, technical, administrative, political, legal, and economic basis for changing public policies and professional practices in communities where the earthquake risk is unacceptable. © 1998 IEEE.
Automatic generation of computable implementation guides from clinical information models.
Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat
2015-06-01
Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides are typically oriented toward human readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or Object Constraint Language (OCL). This task can be difficult and error-prone due to the large gap between the two representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand them easily and, at the same time, computers can process them. In this paper, we propose and describe a novel methodology that uses archetypes as the basis for the generation of implementation guides. We use archetypes to generate formal rules expressed in Natural Rule Language (NRL) and other reference materials usually included in implementation guides, such as sample XML instances. We also generate Schematron rules from NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes. Copyright © 2015 Elsevier Inc. All rights reserved.
Lexical Errors and Accuracy in Foreign Language Writing. Second Language Acquisition
ERIC Educational Resources Information Center
del Pilar Agustin Llach, Maria
2011-01-01
Lexical errors are a determinant in gaining insight into vocabulary acquisition, vocabulary use and writing quality assessment. Lexical errors are very frequent in the written production of young EFL learners, but they decrease as learners gain proficiency. Misspellings are the most common category, but formal errors give way to semantic-based…
Analyzing the transmission of wildfire exposure on a fire-prone landscape in Oregon, USA
Alan A. Ager; Michelle A. Day; Mark A. Finney; Ken Vance-Borland; Nicole M. Vaillant
2014-01-01
We develop the idea of risk transmission from large wildfires and apply network analyses to understand its importance on a 0.75 million ha US national forest. Wildfires in the western US frequently burn over long distances (e.g., 20–50 km) through highly fragmented landscapes with respect to ownership, fuels, management intensity, population density, and ecological...
Ashley E. Van Beusekom; William A. Gould; A. Carolina Monmany; Azad Henareh Khalyani; Maya Quiñones; Stephen J. Fain; Maria José Andrade-Núñez; Grizelle González
2018-01-01
Abstract Assessing the relationships between weather patterns and the likelihood of fire occurrence in the Caribbean has not been as central to climate change research as in temperate regions, due in part to the smaller extent of individual fires. However, the cumulative effect of small frequent fires can shape large landscapes, and fire-prone ecosystems are abundant...
Error-free versus mutagenic processing of genomic uracil--relevance to cancer.
Krokan, Hans E; Sætrom, Pål; Aas, Per Arne; Pettersen, Henrik Sahlin; Kavli, Bodil; Slupphaug, Geir
2014-07-01
Genomic uracil is normally processed essentially error-free by base excision repair (BER), with mismatch repair (MMR) as an apparent backup for U:G mismatches. Nuclear uracil-DNA glycosylase UNG2 is the major enzyme initiating BER of uracil of U:A pairs as well as U:G mismatches. Deficiency in UNG2 results in several-fold increases in genomic uracil in mammalian cells. Thus, the alternative uracil-removing glycosylases, SMUG1, TDG and MBD4 cannot efficiently complement UNG2-deficiency. A major function of SMUG1 is probably to remove 5-hydroxymethyluracil from DNA with general back-up for UNG2 as a minor function. TDG and MBD4 remove deamination products U or T mismatched to G in CpG/mCpG contexts, but may have equally or more important functions in development, epigenetics and gene regulation. Genomic uracil was previously thought to arise only from spontaneous cytosine deamination and incorporation of dUMP, generating U:G mismatches and U:A pairs, respectively. However, the identification of activation-induced cytidine deaminase (AID) and other APOBEC family members as DNA-cytosine deaminases has spurred renewed interest in the processing of genomic uracil. Importantly, AID triggers the adaptive immune response involving error-prone processing of U:G mismatches, but also contributes to B-cell lymphomagenesis. Furthermore, mutational signatures in a substantial fraction of other human cancers are consistent with APOBEC-induced mutagenesis, with U:G mismatches as prime suspects. Mutations can be caused by replicative polymerases copying uracil in U:G mismatches, or by translesion polymerases that insert incorrect bases opposite abasic sites after uracil-removal. In addition, kataegis, localized hypermutations in one strand in the vicinity of genomic rearrangements, requires APOBEC protein, UNG2 and translesion polymerase REV1. What mechanisms govern error-free versus error-prone processing of uracil in DNA remains unclear. In conclusion, genomic uracil is an essential intermediate in adaptive immunity and innate antiviral responses, but may also be a fundamental cause of a wide range of malignancies. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
The High Altitude Pollution Program (1976-1982).
1984-01-01
ground, where air pollution problems arise due to ground level emissions from, for example, automobiles and power plants) to about 25 km above the...downward and poleward. Near the ground, in areas such as cities prone to air pollution, ozone is produced by nitrogen dioxide photolysis and reaction...Spectrophotometer Total Ozone Measurement Errors caused by Interfering Absorbing Species Such as SO2, NO2 and Photochemically Produced O3 in Polluted Air," NOAA
Vap, Linda; Bohn, Andrea A
2015-01-01
Interpretation of camelid hematology results is similar to that of other mammals. Obtaining accurate results and using appropriate reference intervals can be a bit problematic, particularly when evaluating the erythron. Camelid erythrocytes vary from other mammals in that they are small, flat, and elliptical. This variation makes data obtained from samples collected from these species prone to error when using some automated instruments. Normal and abnormal findings in camelid blood are reviewed as well as how to ensure accurate results.
Coordinating Robot Teams for Disaster Relief
2015-05-01
eventually guide vehicles in cooperation with its Operator(s), but in this paper we assume static mission goals, a fixed number of vehicles, and a...is tedious and error-prone. Kress-Gazit et al. (2009) instead synthesize an FSA from an LTL specification using a game theory approach (Bloem et al...helping an Operator coordinate a team of vehicles in Disaster Relief. Acknowledgements Thanks to OSD ASD (R&E) for sponsoring this research. The
Vienna Fortran - A Language Specification. Version 1.1
1992-03-01
other computer architectures is the fact that the memory is physically distributed among the processors; the time required to access a non-local...datum may be an order of magnitude higher than the time taken to access locally stored data. This has important consequences for program efficiency. In...machine in many aspects. It is tedious, time-consuming and error-prone. It has led to particularly slow software development cycles and, in consequence
Toward an Operational Definition of Workload: A Workload Assessment of Aviation Maneuvers
2010-08-01
and evaluated by the learner. With practice, the learner moves into the second phase, where optimal strategies are strengthened. The final stage of...The first phase demands a great amount of resources as performance is slow and prone to errors. During this phase, strategies are being formulated...asked to assess mental, physical, visual, aural, and verbal demands of each task. The new assessment is a cost-effective method of assessing workload
Tropical forest landscape dynamics: Population consequences for neotropical lianas, genus Passiflora
NASA Astrophysics Data System (ADS)
Plowes, Robert Merrick
Treefall gaps in rainforest landscapes play a crucial role in providing opportunities for establishment and growth of rare, light-demanding plants such as Passiflora vines in Corcovado rainforests, Costa Rica. This study considers the interplay of landscape dynamics with plant life history traits and strategies in an ephemeral patch network. In Chapter One, I show how patch quality dynamics and propagule dispersal affect colonization of treefall gaps by Passiflora vitifolia. Recruitment required high patch quality, exceeding 3 hours of sunlight, and patches closed after about 6 years. Colonization by seed dispersal (80%) was constrained by patch quality and isolation, while clonal growth from dormant plants (20%) was limited to rare adjacent patches. Since patch turnover is critical in these systems, Chapter Two is focused on factors affecting canopy structure. I showed that prior land use altered the dynamics of frequent, small-scale disturbances during succession following a single, large deforestation event. Here, I used Landsat subpixel analysis, aerial photographs and field surveys to demonstrate major changes in dynamics of regenerating canopies following release from agricultural activity in 1975. Little work has considered the role of life history traits in the persistence of patchy populations, and so in Chapter Three I asked what life history strategies are used by 9 Passiflora species that occur in these transient forest gaps. Although Passiflora species exhibited differences in dormancy or dispersal strategies, abundance was not associated with any one strategy. Elasticities of vital rates (stasis, growth and fecundity) of P. vitifolia differed empirically in old growth and regenerating forests. To explore population responses to changes in landscape parameters or life history strategies, I created a spatially-explicit individual-based model. Simulations indicate that plant types with a dormancy phase have a greater suite of responses since they persist after patch extinction with the potential to contribute later through both sexual and asexual dispersal. Plants that rely only on high dispersal were extinction-prone in low-connectivity landscapes. This novel approach of jointly analyzing spatially-explicit patch parameters and life history traits offers a comprehensive framework for further understanding the effects of patch dynamics on populations.
Identification of Lightning Gaps in Mangrove Forests Using Airborne LIDAR Measurements
NASA Astrophysics Data System (ADS)
Zhang, K.
2006-12-01
Mangrove forests are highly dynamic ecosystems and change frequently due to tropical storms, frost, and lightning. These factors can cause gaps in mangrove forests by damaging trees. Compared to gaps generated by storms and frost, gaps caused by lightning strikes are small, ranging from 50 to 300 m2. However, these small gaps may play a critical role in mangrove forest dynamics because of the frequent occurrence of lightning in tropical areas. It has been hypothesized that the turnover of mangrove forests is mainly due to the death and regeneration of trees in lightning gaps. However, there is a lack of data for gap occurrence in mangrove forests to verify this hypothesis. It is impractical to measure gaps through a field survey on a large scale because of the logistic difficulties of muddy mangrove forests. Airborne light detection and ranging (LIDAR) technology is an effective alternative because it provides direct measurements of ground and canopy elevations remotely. This study developed a method to identify lightning gaps in mangrove forests in terms of LIDAR measurements. First, LIDAR points are classified into vegetation and ground measurements using the progressive morphological filter. Second, a digital canopy model (DCM) is generated by subtracting a digital terrain model (DTM) from a digital surface model (DSM). The DSM is generated by interpolating raw LIDAR measurements, and DTM is produced by interpolating ground measurements. Third, a black top-hat mathematical morphological transformation is used to identify canopy gaps. Comparison of identified gap polygons with raw LIDAR measurements and field surveys shows that the proposed method identifies lightning gaps in mangrove forests successfully. The area of lightning gaps in mangrove forests in Everglades National Park is about 3% of total forest area, which verifies that lightning gaps play a critical role in mangrove forest turnover.
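A hedged sketch of the gap-detection chain (DCM = DSM - DTM, black top-hat transform, depth threshold, connected-component areas) using SciPy's morphology routines; the rasters, structuring-element size, and depth threshold below are all assumptions for illustration.

```python
# Hedged sketch of lightning-gap detection: build a digital canopy model
# as DSM - DTM, apply a black top-hat transform to highlight local
# depressions, threshold, and keep components sized like small
# (50-300 m^2) lightning gaps. The rasters here are synthetic; real
# DSM/DTM grids would come from filtered LIDAR returns.
import numpy as np
from scipy import ndimage

cell = 1.0                                   # 1 m grid spacing
dsm = np.full((200, 200), 8.0)               # canopy surface height (m)
dsm += np.random.default_rng(1).normal(0, 0.3, dsm.shape)
dsm[90:102, 90:105] = 0.5                    # a ~180 m^2 synthetic "gap"
dtm = np.zeros_like(dsm)                     # flat ground for simplicity

dcm = dsm - dtm                              # digital canopy model
tophat = ndimage.black_tophat(dcm, size=25)  # highlights local depressions
gaps = tophat > 3.0                          # depth threshold (m), assumed

labels, n = ndimage.label(gaps)
areas = ndimage.sum(gaps, labels, index=np.arange(1, n + 1)) * cell**2
print("gap areas (m^2):", [a for a in areas if 50 <= a <= 300])
```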
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
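The adapted Sobol' analysis can be sketched with the SALib package (assuming its saltelli/sobol interface), using a toy function as a stand-in for the Utah Energy Balance model and invented bounds on three coexisting forcing-error parameters:

```python
# Hedged sketch of a Sobol' sensitivity analysis over coexisting forcing
# errors. The "model" is a toy stand-in, NOT the real snow model: it maps
# (precip bias, temperature bias, radiation random-error scale) to a
# fake SWE output.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["precip_bias", "temp_bias", "rad_noise"],
    "bounds": [[-0.5, 0.5], [-3.0, 3.0], [0.0, 50.0]],
}

X = saltelli.sample(problem, 1024)           # N * (2D + 2) samples

def toy_swe(x):                              # invented response surface
    p_bias, t_bias, rad_sd = x
    return 300 * (1 + p_bias) - 15 * t_bias - 0.2 * rad_sd

Y = np.apply_along_axis(toy_swe, 1, X)
Si = sobol.analyze(problem, Y)
for name, s1 in zip(problem["names"], Si["S1"]):
    print(f"{name}: first-order index = {s1:.2f}")
```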
NASA Astrophysics Data System (ADS)
Baker, S.; Berryman, E.; Hawbaker, T. J.; Ewers, B. E.
2015-12-01
While much attention has been focused on large-scale forest disturbances such as fire, harvesting, drought and insect attacks, small-scale forest disturbances that create gaps in forest canopies and below-ground root and mycorrhizal networks may accumulate to impact regional-scale carbon budgets. In a lodgepole pine (Pinus contorta) forest near Fox Park, WY, clusters of 15 and 30 trees were removed in 1988 to assess the effect of tree gap disturbance on fine root density and nitrogen transformation. Twenty-seven years later the gaps remain, with limited regeneration present only in the center of the 30-tree plots, beyond the influence of roots from adjacent intact trees. Soil respiration was measured in the summer of 2015 to assess the influence of these disturbances on carbon cycling in Pinus contorta forests. Positions at the centers of experimental disturbances were found to have the lowest respiration rates (mean 2.45 μmol C/m2/s, standard error 0.17 μmol C/m2/s), control plots in the undisturbed forest were highest (mean 4.15 μmol C/m2/s, standard error 0.63 μmol C/m2/s), and positions near the margin of the disturbance were intermediate (mean 3.7 μmol C/m2/s, standard error 0.34 μmol C/m2/s). Fine root densities, soil nitrogen, and microclimate changes were also measured and played an important role in respiration rates of disturbed plots. This demonstrates that a long-term effect on carbon cycling occurs when gaps are created in the canopy and root network of lodgepole forests.
Hsueh, Ya-seng Arthur; Brando, Alex; Dunt, David; Anjou, Mitchell D; Boudville, Andrea; Taylor, Hugh
2013-12-01
Objective: To estimate the costs of the extra resources required to close the gap in vision between Indigenous and non-Indigenous Australians. Design: Construction of comprehensive eye care pathways for Indigenous Australians with their related probabilities, to capture full eye care usage compared with the current usage rate for cataract surgery, refractive error and diabetic retinopathy, using the best available data. Setting: Urban and remote regions of Australia. Interventions: The provision of eye care for cataract surgery, refractive error and diabetic retinopathy. Main outcome measures: Estimated cost needed for full access, estimated current spending and estimated extra cost required to close the gaps for cataract surgery, refractive error and diabetic retinopathy for Indigenous Australians. Results: The total cost needed for full coverage of all three major eye conditions is $45.5 million per year in 2011 Australian dollars. Current annual spending is $17.4 million. The additional yearly cost required to close the gap in vision is $28 million. This includes extra capped funds of $3 million from the Commonwealth Government and $2 million from the State and Territory Governments. Additional coordination costs per year are $13.3 million. Conclusions: Although available data are limited, this study has produced the first estimates of what is needed to plan for and provide equity in eye care. © 2013 The Authors. Australian Journal of Rural Health © National Rural Health Alliance Inc.
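The headline figures reduce to simple arithmetic, sketched below with the abstract's own numbers:

```python
# Simple check of the funding-gap arithmetic reported above
# (figures in millions of 2011 Australian dollars, from the abstract).
needed, current = 45.5, 17.4
extra = needed - current
print(f"extra required: ${extra:.1f}M per year")   # ~$28M as reported
# Of that extra (per the abstract): $3M Commonwealth capped funds,
# $2M State/Territory capped funds, and $13.3M annual coordination costs.
```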
Poster - 53: Improving inter-linac DMLC IMRT dose precision by fine tuning of MLC leaf calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakonechny, Keith; Tran, Muoi; Sasaki, David
Purpose: To develop a method to improve the inter-linac precision of DMLC IMRT dosimetry. Methods: The distance between opposing MLC leaf banks (“gap size”) can be finely tuned on Varian linacs. The dosimetric effect due to small deviations from the nominal gap size (“gap error”) was studied by introducing known errors for several DMLC sliding gap sizes, and for clinical plans based on the TG119 test cases. The plans were delivered on a single Varian linac and the relationship between gap error and the corresponding change in dose was measured. The plans were also delivered on eight Varian 2100 series linacs (at two institutions) in order to quantify the inter-linac variation in dose before and after fine tuning the MLC calibration. Results: The measured dose differences for each field agreed well with the predictions of LoSasso et al. Using the default MLC calibration, the variation in the physical MLC gap size was determined to be less than 0.4 mm between all linacs studied. The dose difference between the linacs with the largest and smallest physical gap was up to 5.4% (spinal cord region of the head and neck TG119 test case). This difference was reduced to 2.5% after fine tuning the MLC gap calibration. Conclusions: The inter-linac dose precision for DMLC IMRT on Varian linacs can be improved using a simple modification of the MLC calibration procedure that involves fine adjustment of the nominal gap size.
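A hedged first-order sketch of why small sliding gaps are most sensitive to gap error: if fluence through the moving slit scales roughly with gap width (a LoSasso-style approximation that ignores leaf-end transmission offsets), the relative dose change is about the gap error divided by the nominal gap.

```python
# First-order estimate (assumed scaling, not the paper's measured data):
# for sliding-window DMLC delivery, fluence through the moving slit
# scales roughly with gap width, so relative dose change ~ delta_g / g.
def relative_dose_error(gap_mm, gap_error_mm):
    return gap_error_mm / gap_mm

for g in (5, 10, 20, 40):                 # nominal sliding gap sizes (mm)
    print(f"gap {g:2d} mm, 0.4 mm error -> "
          f"{100 * relative_dose_error(g, 0.4):.1f}% dose change")
```

Under this scaling, a 0.4 mm inter-linac spread in physical gap size produces the largest dose discrepancies in fields with small effective gaps, consistent with the 5.4% worst case observed in the head and neck plan.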
Close encounters: an examination of UFO experiences.
Spanos, N P; Cross, P A; Dickson, K; DuBreuil, S C
1993-11-01
Ss who reported UFO experiences were divided into those whose experiences were nonintense (e.g., seeing lights and shapes in the sky) and those whose experiences were intense (e.g., seeing and communicating with aliens or missing time). On a battery of objective tests Ss in these 2 groups did not score as more psychopathological, less intelligent, or more fantasy prone and hypnotizable than a community comparison group or a student comparison group. However, Ss in the UFO groups believed more strongly in space alien visitation than did comparison Ss. The UFO experiences of Ss in the intense group were more frequently sleep-related than the experiences of Ss in the nonintense group. Among the combined UFO Ss, intensity of UFO experiences correlated significantly with inventories that assessed proneness toward fantasy and unusual sensory experiences. Implications are discussed.
Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas
ERIC Educational Resources Information Center
Herzberg, Tina
2010-01-01
In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…
Error-proneness as a handicap signal.
De Jaegher, Kris
2003-09-21
This paper describes two discrete signalling models in which the error-proneness of signals can serve as a handicap signal. In the first model, the direct handicap of sending a high-quality signal is not large enough to ensure that a low-quality signaller will not send it. However, if the receiver sometimes mistakes a high-quality signal for a low-quality one, then there is an indirect handicap to sending a high-quality signal. The total handicap of sending such a signal may then still be such that a low-quality signaller would not want to send it. In the second model, there is no direct handicap of sending signals, so that nothing would seem to stop a signaller from always sending a high-quality signal. However, the receiver sometimes fails to detect signals, and this causes an indirect handicap of sending a high-quality signal that still stops the low-quality signaller from sending such a signal. The conditions for honesty are that the probability of an error of detection is higher for a high-quality than for a low-quality signal, and that the signaller who does not detect a signal adopts a response that is bad for the signaller. In both our models, we thus obtain the result that signal accuracy should not lie above a certain level in order for honest signalling to be possible. Moreover, we show that the lower the degree of conflict between signaller and receiver, the higher the maximal accuracy that can be achieved. We also show that it may be the conditions for honest signalling that constrain signal accuracy, rather than the signaller trying to make honest signals as effective as possible given receiver psychology, or the signaller adapting the accuracy of honest signals depending on his interests.
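A toy numeric check of the honesty logic in the first model, with payoffs and error probabilities invented for illustration (the paper's game-theoretic models are richer than this):

```python
# Toy check: a high-quality signal is honest when the low-quality
# signaller's expected gain from faking it, net of the direct cost and
# the detection-error risk, falls below simply staying honest.
# All payoff values are assumptions, not taken from the paper.
def fake_pays(benefit_high, benefit_low, direct_cost, p_error_high):
    # Expected payoff to a LOW-quality signaller sending the HIGH signal:
    # with probability p_error_high the receiver misreads it as low-quality.
    expected = (1 - p_error_high) * benefit_high + p_error_high * benefit_low
    return expected - direct_cost > benefit_low

# The direct cost alone is too small to deter faking...
print(fake_pays(10, 4, 3, p_error_high=0.0))   # True  -> dishonest
# ...but enough error-proneness restores honesty.
print(fake_pays(10, 4, 3, p_error_high=0.6))   # False -> honest
```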
Jurgens, Anneke; Anderson, Angelika; Moore, Dennis W
2012-01-01
To investigate the integrity with which parents and carers implement PECS in naturalistic settings, utilizing a sample of videos obtained from YouTube. Twenty-one YouTube videos meeting selection criteria were identified. The videos were reviewed for instances of seven implementer errors and, where appropriate, presence of a physical prompter. Forty-three per cent of videos and 61% of PECS exchanges contained errors in parent implementation of specific teaching strategies of the PECS training protocol. Vocal prompts, incorrect error correction and the absence of timely reinforcement occurred most frequently, while gestural prompts, insistence on speech, incorrect use of the open hand prompt and not waiting for the learner to initiate occurred less frequently. Results suggest that parents engage in vocal prompting and incorrect use of the 4-step error correction strategy when using PECS with their children, errors likely to result in prompt dependence.
Dynamic power scheduling system for JPEG2000 delivery over wireless networks
NASA Astrophysics Data System (ADS)
Martina, Maurizio; Vacca, Fabrizio
2003-06-01
The diffusion of third-generation mobile terminals is encouraging the development of new multimedia-based applications. The reliable transmission of audiovisual content will gain major interest as one of the most valuable services. Nevertheless, the mobile scenario is severely power-constrained: high compression ratios and refined energy management strategies are highly advisable. JPEG2000 as the source encoding stage assures excellent performance with extremely good visual quality. However, the limited power budget makes it necessary to limit the computational effort in order to save as much power as possible. In an error-prone environment such as the wireless one, strong error-resilience features need to be employed. This paper investigates the trade-off between quality and power in such a challenging environment.
Identification of User Facility Related Publications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, Robert M; Stahl, Christopher G; Wells, Jack C
2012-01-01
Scientific user facilities provide physical resources and technical support that enable scientists to conduct experiments or simulations pertinent to their respective research. One metric for evaluating the scientific value or impact of a facility is the number of publications by users as a direct result of using that facility. Unfortunately, for a variety of reasons, capturing accurate values for this metric proves time-consuming and error-prone. This work describes a new approach that leverages automated browser technology combined with text analytics to reduce the time and error involved in identifying publications related to user facilities. With this approach, scientific user facilities gain more accurate measures of their impact as well as insight into policy revisions for user access.
Frequent Visitors: Somatization in School-Age Children and Implications for School Nurses
ERIC Educational Resources Information Center
Shannon, Robin Adair; Bergren, Martha Dewey; Matthews, Alicia
2010-01-01
There is a gap in the nursing literature regarding children who frequently visit school nurses' offices with recurrent unexplained physical symptoms. A review of the scientific health literature was undertaken to examine the clinical presentation, associated variables, and implications for school nurses regarding children who are frequent school…
Interactions and Localization of Escherichia coli Error-Prone DNA Polymerase IV after DNA Damage.
Mallik, Sarita; Popodi, Ellen M; Hanson, Andrew J; Foster, Patricia L
2015-09-01
Escherichia coli's DNA polymerase IV (Pol IV/DinB), a member of the Y family of error-prone polymerases, is induced during the SOS response to DNA damage and is responsible for translesion bypass and adaptive (stress-induced) mutation. In this study, the localization of Pol IV after DNA damage was followed using fluorescent fusions. After exposure of E. coli to DNA-damaging agents, fluorescently tagged Pol IV localized to the nucleoid as foci. Stepwise photobleaching indicated ∼60% of the foci consisted of three Pol IV molecules, while ∼40% consisted of six Pol IV molecules. Fluorescently tagged Rep, a replication accessory DNA helicase, was recruited to the Pol IV foci after DNA damage, suggesting that the in vitro interaction between Rep and Pol IV reported previously also occurs in vivo. Fluorescently tagged RecA also formed foci after DNA damage, and Pol IV localized to them. To investigate if Pol IV localizes to double-strand breaks (DSBs), an I-SceI endonuclease-mediated DSB was introduced close to a fluorescently labeled LacO array on the chromosome. After DSB induction, Pol IV localized to the DSB site in ∼70% of SOS-induced cells. RecA also formed foci at the DSB sites, and Pol IV localized to the RecA foci. These results suggest that Pol IV interacts with RecA in vivo and is recruited to sites of DSBs to aid in the restoration of DNA replication. DNA polymerase IV (Pol IV/DinB) is an error-prone DNA polymerase capable of bypassing DNA lesions and aiding in the restart of stalled replication forks. In this work, we demonstrate in vivo localization of fluorescently tagged Pol IV to the nucleoid after DNA damage and to DNA double-strand breaks. We show colocalization of Pol IV with two proteins: Rep DNA helicase, which participates in replication, and RecA, which catalyzes recombinational repair of stalled replication forks. Time course experiments suggest that Pol IV recruits Rep and that RecA recruits Pol IV. These findings provide in vivo evidence that Pol IV aids in maintaining genomic stability not only by bypassing DNA lesions but also by participating in the restoration of stalled replication forks. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
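Stepwise photobleaching counts the molecules in a focus as discrete downward steps in its fluorescence intensity trace. A hedged sketch with a synthetic trace and a simple derivative-threshold step detector (production analyses use proper step-fitting algorithms):

```python
# Hedged sketch of stepwise-photobleaching counting: infer the number of
# fluorophores in a focus from discrete downward steps in its intensity
# trace. The trace is synthetic; the detector is a crude smoothed-slope
# threshold, not a published step-fitting method.
import numpy as np

rng = np.random.default_rng(3)
step_size, noise_sd = 100.0, 8.0
bleach_times = np.array([120, 260, 400])     # one bleach event per molecule

t = np.arange(500)
n_remaining = len(bleach_times) - np.searchsorted(bleach_times, t, side="right")
trace = step_size * n_remaining + rng.normal(0, noise_sd, t.size)

smooth = np.convolve(trace, np.ones(15) / 15, mode="same")
drops = np.flatnonzero(np.diff(smooth) < -step_size / 30)  # steep descents
# indices within one bleaching event cluster together; count the clusters
n_steps = 1 + int((np.diff(drops) > 20).sum()) if drops.size else 0
print("estimated fluorophores:", n_steps)                  # expect 3
```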
Peusschers, Elsie; Twine, Jaryth; Wheeler, Amanda; Moudgil, Vikas; Patterson, Sue
2015-04-01
To describe the completeness and accuracy of recording medication changes in progress notes during psychiatric inpatient admissions. A retrospective audit of records of 54 randomly selected psychiatric admissions at a metropolitan tertiary hospital. Medication changes recorded on the National Inpatient Medication Chart (NIMC) were compared to documentation in the clinical progress records and assessed for completeness against seven quality criteria. With between one and 32 medication changes per admission, a total of 519 changes were recorded on NIMCs. Just over half were documented in progress notes. Psychotropic and regular medications were more frequently charted than 'other' and 'if required' medications. Documentation was seldom comprehensive. Medication name was most frequently documented; desired therapeutic effect or potential adverse effects were rarely documented. Evidence of patient involvement in, and an explicit rationale for, a change were infrequently recorded. Revealing substantial gaps in communication about medication changes during psychiatric admission, this audit sheds light on a previously undescribed source of medication error that warrants attention. Further research is needed to examine barriers to best practice and to support the design and implementation of quality improvement activities, but in the interim, attention should be directed to the development and articulation of content and procedures for documentation. © The Royal Australian and New Zealand College of Psychiatrists 2015.
Clinical Characteristics of Exacerbation-Prone Adult Asthmatics Identified by Cluster Analysis.
Kim, Mi Ae; Shin, Seung Woo; Park, Jong Sook; Uh, Soo Taek; Chang, Hun Soo; Bae, Da Jeong; Cho, You Sook; Park, Hae Sim; Yoon, Ho Joo; Choi, Byoung Whui; Kim, Yong Hoon; Park, Choon Sik
2017-11-01
Asthma is a heterogeneous disease characterized by various types of airway inflammation and obstruction. Therefore, it is classified into several subphenotypes, such as early-onset atopic, obese non-eosinophilic, benign, and eosinophilic asthma, using cluster analysis. A number of asthmatics frequently experience exacerbation over a long-term follow-up period, but the exacerbation-prone subphenotype has rarely been evaluated by cluster analysis. This prompted us to identify clusters reflecting asthma exacerbation. A uniform cluster analysis method was applied to 259 adult asthmatics who were regularly followed-up for over 1 year using 12 variables, selected on the basis of their contribution to asthma phenotypes. After clustering, clinical profiles and exacerbation rates during follow-up were compared among the clusters. Four subphenotypes were identified: cluster 1 comprised patients with early-onset atopic asthma with preserved lung function, cluster 2 late-onset non-atopic asthma with impaired lung function, cluster 3 early-onset atopic asthma with severely impaired lung function, and cluster 4 late-onset non-atopic asthma with well-preserved lung function. The patients in clusters 2 and 3 were identified as exacerbation-prone asthmatics, showing a higher risk of asthma exacerbation. Two different phenotypes of exacerbation-prone asthma were identified among Korean asthmatics using cluster analysis; both were characterized by impaired lung function, but the age at asthma onset and atopic status differed between the two. Copyright © 2017 The Korean Academy of Asthma, Allergy and Clinical Immunology · The Korean Academy of Pediatric Allergy and Respiratory Disease
Is propensity to obesity associated with the diurnal pattern of core body temperature?
Hynd, P I; Czerwinski, V H; McWhorter, T J
2014-02-01
Obesity affects more than half a billion people worldwide, but the underlying causes remain unresolved. It has been proposed that propensity to obesity may be associated with differences between individuals in metabolic efficiency and in the energy used for homeothermy. It has also been suggested that obese-prone individuals differ in their responsiveness to circadian rhythms. We investigated both these hypotheses by measuring the core body temperature at regular and frequent intervals over a diurnal cycle, using indigestible temperature loggers in two breeds of canines known to differ in propensity to obesity, but prior to divergence in fatness. Greyhounds (obesity-resistant) and Labradors (obesity-prone) were fed indigestible temperature loggers. Gastrointestinal temperature was recorded at 10-min intervals for the period of transit of the logger. Diet, body condition score, activity level and environment were similar for both groups. Energy digestibility was also measured. The mean core body temperature in obesity-resistant dogs (38.27 °C) was slightly higher (P<0.001) than in obesity-prone dogs (38.18 °C) and the former had a greater variation (P<0.001) in 24h circadian core temperature. There were no differences in diet digestibility. Canines differing in propensity to obesity, but prior to its onset, differed little in mean core temperature, supporting similar findings in already-obese and lean humans. Obese-prone dogs were less variable in daily core temperature fluctuations, suggestive of a degree of circadian decoupling.
Jonsen, Ian
2016-02-08
State-space models provide a powerful way to scale up inference of movement behaviours from individuals to populations when the inference is made across multiple individuals. Here, I show how a joint estimation approach that assumes individuals share identical movement parameters can lead to improved inference of behavioural states associated with different movement processes. I use simulated movement paths with known behavioural states to compare estimation error between nonhierarchical and joint estimation formulations of an otherwise identical state-space model. Behavioural state estimation error was strongly affected by the degree of similarity between movement patterns characterising the behavioural states, with less error when movements were strongly dissimilar between states. The joint estimation model improved behavioural state estimation relative to the nonhierarchical model for simulated data with heavy-tailed Argos location errors. When applied to Argos telemetry datasets from 10 Weddell seals, the nonhierarchical model estimated highly uncertain behavioural state switching probabilities for most individuals whereas the joint estimation model yielded substantially less uncertainty. The joint estimation model better resolved the behavioural state sequences across all seals. Hierarchical or joint estimation models should be the preferred choice for estimating behavioural states from animal movement data, especially when location data are error-prone.
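The kind of simulated data used for the comparison can be sketched as a two-state switching random walk with heavy-tailed, Argos-like location errors; all parameter values below are invented:

```python
# Hedged sketch of simulated movement data with known behavioural states:
# a two-state switching random walk (large "transit" steps vs small
# "forage" steps) observed through heavy-tailed location errors.
import numpy as np

rng = np.random.default_rng(42)
T = 500
switch_p = 0.05                       # per-step probability of switching state
step_sd = {0: 1.0, 1: 0.15}           # state 0 "transit", state 1 "forage"

state = np.zeros(T, dtype=int)
true_xy = np.zeros((T, 2))
for t in range(1, T):
    state[t] = state[t - 1] ^ (rng.random() < switch_p)   # occasional switch
    true_xy[t] = true_xy[t - 1] + rng.normal(0, step_sd[int(state[t])], 2)

# heavy-tailed observation errors (Student-t, df = 3) mimic Argos fixes
obs_xy = true_xy + 0.3 * rng.standard_t(df=3, size=(T, 2))
print("fraction of time in transit state:", round((state == 0).mean(), 2))
```

Fitting a joint (hierarchical) state-space model to many such paths, versus one model per path, is what the abstract's comparison evaluates; the more similar the two states' step distributions, the harder the state estimation becomes.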
Causal inference with measurement error in outcomes: Bias analysis and estimation methods.
Shu, Di; Yi, Grace Y
2017-01-01
Inverse probability weighting estimation has been widely used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of the average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
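A minimal sketch of the IPW estimator under an additive, mean-zero outcome measurement error, illustrating the consistency result for continuous outcomes on synthetic data (the propensity model and all constants are invented):

```python
# Hedged sketch: IPW estimate of the average treatment effect (ATE) with
# an additively mismeasured continuous outcome. With mean-zero error
# independent of treatment, the naive IPW estimate stays near the truth.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 20000
x = rng.normal(size=n)                           # confounder
p = 1 / (1 + np.exp(-0.8 * x))                   # true propensity score
a = rng.binomial(1, p)                           # treatment indicator
y = 2.0 * a + x + rng.normal(size=n)             # true ATE = 2
y_star = y + rng.normal(0, 1.5, n)               # additive measurement error

e_hat = LogisticRegression().fit(x[:, None], a).predict_proba(x[:, None])[:, 1]
w = a / e_hat - (1 - a) / (1 - e_hat)            # IPW contrast weights
print("naive IPW ATE with mismeasured outcome:", (w * y_star).mean().round(2))
```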
IPTV multicast with peer-assisted lossy error control
NASA Astrophysics Data System (ADS)
Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd
2010-07-01
Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by the Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how the packet repairs can be delivered in a timely, reliable and decentralized manner using the combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves the resistance to the impulse noise.
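The layered repair idea can be sketched as a toy loss simulation: per-block FEC absorbs up to k losses, and only the residual is handed to retransmission (by a Retransmission Server or, under PAR, by peers). The loss process and block parameters below are invented:

```python
# Toy sketch of layered loss repair: per FEC block, up to FEC_K lost
# packets are recovered by FEC; losses beyond that become retransmission
# requests. All rates and parameters are illustrative assumptions.
import random

random.seed(0)
N_BLOCKS, BLOCK, FEC_K = 10000, 100, 3         # packets per FEC block
retx_needed = 0
for _ in range(N_BLOCKS):
    lost = sum(random.random() < 0.01 for _ in range(BLOCK))
    if lost > FEC_K:                           # FEC alone cannot repair
        retx_needed += lost - FEC_K            # residual goes to retransmission
print("avg retransmissions per block:", retx_needed / N_BLOCKS)
```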
Clinical review: Medication errors in critical care
Moyen, Eric; Camiré, Eric; Stelfox, Henry Thomas
2008-01-01
Medication errors in critical care are frequent, serious, and predictable. Critically ill patients are prescribed twice as many medications as patients outside of the intensive care unit (ICU) and nearly all will suffer a potentially life-threatening error at some point during their stay. The aim of this article is to provide a basic review of medication errors in the ICU, identify risk factors for medication errors, and suggest strategies to prevent errors and manage their consequences. PMID:18373883
Frequency of pediatric medication administration errors and contributing factors.
Ozkan, Suzan; Kocaman, Gulseren; Ozturk, Candan; Seren, Seyda
2011-01-01
This study examined the frequency of pediatric medication administration errors and contributing factors. This research used the undisguised observation method and Critical Incident Technique. Errors and contributing factors were classified through the Organizational Accident Model. Errors were made in 36.5% of the 2344 doses that were observed. The most frequent errors were those associated with administration at the wrong time. According to the results of this study, errors arise from problems within the system.
Eric E. Knapp; Jamie M. Lydersen; Malcolm P. North; Brandon M. Collins
2017-01-01
Frequent-fire forests were historically characterized by lower tree density, a higher proportion of pine species, and greater within-stand spatial variability, compared to many contemporary forests where fire has been excluded. As a result, such forests are now increasingly unstable, prone to uncharacteristically severe wildfire or high levels of tree mortality in...
2017-02-15
Maunz Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone...information processors have been demonstrated experimentally using superconducting circuits, electrons in semiconductors, trapped atoms and...qubit quantum information processor has been realized, and single-qubit gates have demonstrated randomized benchmarking (RB) infidelities as low as 10
The preliminary SOL (Sizing and Optimization Language) reference manual
NASA Technical Reports Server (NTRS)
Lucas, Stephen H.; Scotti, Stephen J.
1989-01-01
The Sizing and Optimization Language (SOL), a high-level, special-purpose computer language, has been developed to expedite the application of numerical optimization to design problems and to make the process less error-prone. This document is a reference manual for those wishing to write SOL programs. SOL is presently available for DEC VAX/VMS systems. A SOL package is available that includes the SOL compiler and runtime library routines. An overview of SOL appears in NASA TM 100565.
NASA Technical Reports Server (NTRS)
Ayap, Shanti; Fisher, Forest; Gladden, Roy; Khanampompan, Teerapat
2008-01-01
This software tool saves time and reduces risk by automating two labor-intensive and error-prone post-processing steps required for every DKF [DSN (Deep Space Network) Keyword File] that MRO (Mars Reconnaissance Orbiter) produces, and it is being extended to post-process the corresponding TSOE (Text Sequence Of Events) as well. The need for post-processing stems from limitations in the seq-gen modeling, which result in incorrect DKF generation that must then be cleaned up in post-processing.
The Implications of Self-Reporting Systems for Maritime Domain Awareness
2006-12-01
AIS) offer significant advantages compared with tracking vessels by ordinary sensors, and that the availability of the information...reporting system for sea-going vessels that originated in Sweden in the early 1990s. It was designed primarily for safety of life at sea (SOLAS) and...reported information is prone to human error and potential malicious alteration, and the system itself was not designed with these vulnerabilities in mind
Disclosing harmful medical errors to patients: tackling three tough cases.
Gallagher, Thomas H; Bell, Sigall K; Smith, Kelly M; Mello, Michelle M; McDonald, Timothy B
2009-09-01
A gap exists between recommendations to disclose errors to patients and current practice. This gap may reflect important, yet unanswered questions about implementing disclosure principles. We explore some of these unanswered questions by presenting three real cases that pose challenging disclosure dilemmas. The first case involves a pancreas transplant that failed due to the pancreas graft being discarded, an error that was not disclosed partly because the family did not ask clarifying questions. Relying on patient or family questions to determine the content of disclosure is problematic. We propose a standard of materiality that can help clinicians to decide what information to disclose. The second case involves a fatal diagnostic error that the patient's widower was unaware had happened. The error was not disclosed out of concern that disclosure would cause the widower more harm than good. This case highlights how institutions can overlook patients' and families' needs following errors and emphasizes that benevolent deception has little role in disclosure. Institutions should consider whether involving neutral third parties could make disclosures more patient centered. The third case presents an intraoperative cardiac arrest due to a large air embolism where uncertainty around the clinical event was high and complicated the disclosure. Uncertainty is common to many medical errors but should not deter open conversations with patients and families about what is and is not known about the event. Continued discussion within the medical profession about applying disclosure principles to real-world cases can help to better meet patients' and families' needs following medical errors.
Livneh, Zvi
2006-09-01
To overcome DNA lesions that block replication the cell employs translesion DNA synthesis (TLS) polymerases, a group of low fidelity DNA polymerases that have the capacity to bypass a wide range of DNA lesions. This TLS process is also termed error-prone repair, due to its inherent mutagenic nature. We have recently shown that the tumor suppressor p53 and the cell cycle inhibitor p21 are global regulators of TLS. When these proteins are missing or nonfunctional, TLS gets out of control: its extent increases to very high levels, and its fidelity decreases, causing an overall increase in mutation load. This may be explained by the loss of selectivity in the bypass of specific DNA lesions by their cognate specialized polymerases, such that lesion bypass continues to a maximum, regardless of the price paid in increased mutations. The p53 and p21 proteins are also required for efficient UV light-induced monoubiquitination of PCNA, which is consistent with a model in which this modification of PCNA is necessary but not sufficient for the normal activity of TLS. This regulation suggests that TLS evolved in mammals as a system that balances gain in survival with a tolerable mutational cost, and that disturbing this balance causes a potentially harmful increase in mutations, which might play a role in carcinogenesis.
Choi, Jung-Suk; Dasari, Anvesh; Hu, Peter; Benkovic, Stephen J.; Berdis, Anthony J.
2016-01-01
This report evaluates the pro-mutagenic behavior of 8-oxo-guanine (8-oxo-G) by quantifying the ability of high-fidelity and specialized DNA polymerases to incorporate natural and modified nucleotides opposite this lesion. Although high-fidelity DNA polymerases such as pol δ and the bacteriophage T4 DNA polymerase replicate 8-oxo-G in an error-prone manner, they display remarkably low efficiencies for TLS compared to normal DNA synthesis. In contrast, pol η shows a combination of high efficiency and low fidelity when replicating 8-oxo-G. These combined properties are consistent with a pro-mutagenic role for pol η when replicating this DNA lesion. Studies using modified nucleotide analogs show that pol η relies heavily on hydrogen-bonding interactions during translesion DNA synthesis. However, nucleobase modifications such as alkylation at the N2 position of guanine significantly increase error-prone synthesis catalyzed by pol η when replicating 8-oxo-G. Molecular modeling studies demonstrate the existence of a hydrophobic pocket in pol η that participates in the increased utilization of certain hydrophobic nucleotides. A model is proposed for enhanced pro-mutagenic replication catalyzed by pol η that couples efficient incorporation of damaged nucleotides opposite oxidized DNA lesions created by reactive oxygen species. The biological implications of this model toward increasing mutagenic events in lung cancer are discussed. PMID:26717984
Huang, Suzhen; Xue, Tingli; Wang, Zhiquan; Ma, Yuanyuan; He, Xueting; Hong, Jiefang; Zou, Shaolan; Song, Hao; Zhang, Minhua
2018-04-01
A furfural-tolerant strain is essential for the fermentative production of biofuels or chemicals from lignocellulosic biomass. In this study, Zymomonas mobilis CP4 was for the first time subjected to error-prone PCR-based whole-genome shuffling, and the resulting mutants F211 and F27, which could tolerate 3 g/L furfural, were obtained. Under various furfural stress conditions, the mutant F211 grew rapidly once the furfural concentration dropped to 1 g/L. The two mutants also showed higher tolerance to high glucose concentrations than the control strain CP4. Genome resequencing revealed that F211 and F27 carried 12 and 13 single-nucleotide polymorphisms, respectively. Activity assays demonstrated that NADH-dependent furfural reductase activity increased under furfural stress in both mutant F211 and CP4, but peaked earlier in the mutant than in the control. The furfural level in F211 cultures also decreased more rapidly. These results indicate that the increased furfural tolerance of the mutants may result from enhanced NADH-dependent furfural reductase activity during early log phase, which could accelerate furfural detoxification. In summary, we obtained Z. mobilis mutants with enhanced tolerance to furfural and high glucose concentrations, and provided valuable clues to the mechanism of furfural tolerance and to strain development.
de Andrade, H H; Marques, E K; Schenberg, A C; Henriques, J A
1989-06-01
The induction of mitotic gene conversion and crossing-over in Saccharomyces cerevisiae diploid cells homozygous for the pso4-1 mutation was examined in comparison with the corresponding wild-type strain. The pso4-1 mutant strain was found to be completely blocked in mitotic recombination induced by photoaddition of mono- and bifunctional psoralen derivatives, as well as by mono- (HN1) and bifunctional (HN2) nitrogen mustards or 254 nm UV radiation, in both stationary and exponential phases of growth. Concerning the lethal effect, diploids homozygous for the pso4-1 mutation are more sensitive to all agents tested in any growth phase; this effect is more pronounced in the G2 phase of the cell cycle. These results imply that the ploidy effect and the resistance of budding cells are under the control of the PSO4 gene. On the other hand, the pso4-1 mutant is mutationally defective for all agents used. Therefore, the pso4-1 mutant has a generalized block in both recombination and mutation ability. This indicates that the PSO4 gene is involved in an error-prone repair pathway that relies on a recombinational mechanism, strongly suggesting an analogy between the pso4-1 mutation and recA or lexA mutations of Escherichia coli.
Efficient error correction for next-generation sequencing of viral amplicons
2012-01-01
Background Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. Results In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Conclusions Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm PMID:22759430
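The k-mer idea behind KEC can be illustrated compactly: k-mers that occur only rarely across all reads likely contain sequencing errors, and read positions covered exclusively by such rare k-mers are candidate error sites. Below is a toy sketch of that principle; k, the frequency threshold, and the example reads are our illustrative assumptions, and the published algorithm is considerably more elaborate.

```python
# Toy sketch of k-mer-frequency error correction (the idea behind KEC):
# k-mers seen only rarely across reads are presumed to contain
# sequencing errors. k and the frequency threshold are illustrative.
from collections import Counter

def kmer_counts(reads, k=5):
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def flag_error_positions(read, counts, k=5, threshold=2):
    """Return positions covered only by low-frequency ('weak') k-mers."""
    weak, strong = set(), set()
    for i in range(len(read) - k + 1):
        target = strong if counts[read[i:i + k]] >= threshold else weak
        target.update(range(i, i + k))
    return sorted(weak - strong)   # positions no solid k-mer supports

reads = ["ACGTACGTAA"] * 20 + ["ACGTACCTAA"]   # one read with a likely error
counts = kmer_counts(reads)
print(flag_error_positions("ACGTACCTAA", counts))  # flags [6, 7, 8, 9]
```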
Efficient error correction for next-generation sequencing of viral amplicons.
Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury
2012-06-25
Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses.The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galavis, P; Barbee, D; Jozsef, G
2016-06-15
Purpose: Prone accelerated partial breast irradiation (APBI) results in dose reduction to the heart and lung. Flattening-filter-free (FFF) beams reduce out-of-field dose, owing to reduced scatter from removal of the flattening filter, and reduce the buildup region. The aim of this work is to evaluate the dosimetric advantages of FFF beams for prone APBI target coverage and for reducing dose to organs at risk. Methods: Fifteen clinical prone APBI cases using flattened photon beams were retrospectively re-planned in the Eclipse TPS using FFF beams. FFF plans were designed to provide equivalent target coverage with similar hotspots using the same field arrangements, resulting in comparable target DVHs. Both plans were transferred to a prone breast phantom and delivered on a Varian Edge linac. GafChromic film was placed in the coronal plane of the phantom, partially overlapping the treatment field and extending into OARs, to compare dose profiles from both plans. Results: FFF plans were comparable to the clinical plans, with maximum doses of (108.3±2.3)% and (109.2±2.4)% and mean doses of (104.5±1.0)% and (104.6±1.2)%, respectively. Similar mean doses to the heart and contralateral lungs were observed for both plans, whereas the mean dose to the contralateral breast was (2.79±1.18) cGy and (2.86±1.40) cGy for FFF and clinical plans, respectively. However, for both plans the error between calculated and measured doses at 4 cm from the field edge was 10%. Conclusion: The results showed that FFF beams in prone APBI provide dosimetrically equivalent target coverage and improved coverage of superficial target due to the softer energy spectrum. Film analysis showed that the TPS underestimates dose outside field edges in both cases. The FFF measured plans showed less dose outside the beam, which might reduce the probability of secondary cancers in the contralateral breast.
An approach to develop an algorithm to detect the climbing height in radial-axial ring rolling
NASA Astrophysics Data System (ADS)
Husmann, Simon; Hohmann, Magnus; Kuhlenkötter, Bernd
2017-10-01
Radial-axial ring rolling is the most widely used forming process for producing seamless rings, which are applied in miscellaneous industries such as the energy sector, aerospace technology, and the automotive industry. Because forming takes place simultaneously in two opposing rolling gaps, and because ring rolling is a bulk forming process, various errors can occur during rolling. Ring climbing is one of the most frequently occurring process errors, leading to a distortion of the ring's cross section and a deformation of the ring's geometry. The conventional sensors of a radial-axial rolling machine cannot detect this error. A common strategy is therefore to roll a slightly larger ring, so that randomly occurring process errors can be compensated afterwards by removing the additional material. The LPS installed an image-processing system at the radial rolling gap of its ring rolling machine to enable the recognition and measurement of climbing rings and thereby reduce the additional material. This paper presents the algorithm that enables the image-processing system to detect the error of a climbing ring and that ensures comparably reliable results for the measurement of the climbing height of the rings.
Thompson, Larry H.; Hinz, John M.
2009-01-01
The Fanconi anemia (FA) molecular network consists of 15 “FANC” proteins, of which 13 are associated with mutations in patients with this cancer-prone chromosome instability disorder. Whereas historically the common phenotype associated with FA mutations is marked sensitivity to DNA interstrand crosslinking agents, the literature supports a more global role for FANC proteins in coping with diverse stresses encountered by replicative polymerases. We have attempted to reconcile and integrate numerous observations into a model in which FANC proteins coordinate the following physiological events during DNA crosslink repair: (a) activating a FANCM-ATR-dependent S-phase checkpoint; (b) mediating enzymatic replication-fork breakage and crosslink unhooking; (c) filling the resulting gap by translesion synthesis (TLS) by error-prone polymerase(s); and (d) restoring the resulting one-ended double-strand break by homologous recombination repair (HRR). The FANC core subcomplex (FANCA, B, C, E, F, G, L, FAAP100) promotes TLS for both crosslink and non-crosslink damage such as spontaneous oxidative base damage, UV-C photoproducts, and alkylated bases. TLS likely helps prevent stalled replication forks from breaking, thereby maintaining chromosome continuity. Diverse DNA damages and replication inhibitors result in monoubiquitination of the FANCD2-FANCI complex by the FANCL ubiquitin ligase activity of the core subcomplex upon its recruitment to chromatin by the FANCM-FAAP24 heterodimeric translocase. We speculate that this translocase activity acts as the primary damage sensor and helps remodel blocked replication forks to facilitate checkpoint activation and repair. Monoubiquitination of FANCD2-FANCI is needed for promoting HRR, in which the FANCD1/BRCA2 and FANCN/PALB2 proteins act at an early step. We conclude that the core subcomplex is required for both TLS and HRR occurring separately for non-crosslink damages and for both events during crosslink repair. The FANCJ/BRIP1/BACH1 helicase functions in association with BRCA1 and may remove structural barriers to replication, such as guanine quadruplex structures, and/or assist in crosslink unhooking. PMID:19622404
Tracking Progress in Improving Diagnosis: A Framework for Defining Undesirable Diagnostic Events.
Olson, Andrew P J; Graber, Mark L; Singh, Hardeep
2018-01-29
Diagnostic error is a prevalent, harmful, and costly phenomenon. Multiple national health care and governmental organizations have recently identified the need to improve diagnostic safety as a high priority. A major barrier, however, is the lack of standardized, reliable methods for measuring diagnostic safety. Given the absence of reliable and valid measures for diagnostic errors, we need methods to help establish some type of baseline diagnostic performance across health systems, as well as to enable researchers and health systems to determine the impact of interventions for improving the diagnostic process. Multiple approaches have been suggested but none widely adopted. We propose a new framework for identifying "undesirable diagnostic events" (UDEs) that health systems, professional organizations, and researchers could further define and develop to enable standardized measurement and reporting related to diagnostic safety. We propose an outline for UDEs that identifies both conditions prone to diagnostic error and the contexts of care in which these errors are likely to occur. Refinement and adoption of this framework across health systems can facilitate standardized measurement and reporting of diagnostic safety.
Dual Processing and Diagnostic Errors
ERIC Educational Resources Information Center
Norman, Geoff
2009-01-01
In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…
ERIC Educational Resources Information Center
Quibell, T.; Charlton, J.; Law, J.
2017-01-01
Gaps in education attainment between high and low achieving children in the primary school years are frequently evidenced in educational reports. Linked to social disadvantage, these gaps have detrimental long-term effects on learning. There is a need to close the gap in attainment by addressing barriers to learning and offering alternative…
The child self-refraction study results from urban Chinese children in Guangzhou.
He, Mingguang; Congdon, Nathan; MacKenzie, Graeme; Zeng, Yangfa; Silver, Joshua D; Ellwein, Leon
2011-06-01
To compare visual and refractive outcomes between self-refracting spectacles (Adaptive Eyecare, Ltd, Oxford, UK), noncycloplegic autorefraction, and cycloplegic subjective refraction. Cross-sectional study. Chinese school-children aged 12 to 17 years. Children with uncorrected visual acuity ≤ 6/12 in either eye underwent measurement of the logarithm of the minimum angle of resolution visual acuity, habitual correction, self-refraction without cycloplegia, autorefraction with and without cycloplegia, and subjective refraction with cycloplegia. Proportion of children achieving corrected visual acuity ≥ 6/7.5 with each modality; difference in spherical equivalent refractive error between each of the modalities and cycloplegic subjective refractive error. Among 556 eligible children of consenting parents, 554 (99.6%) completed self-refraction (mean age, 13.8 years; 59.7% girls; 54.0% currently wearing glasses). The proportion of children with visual acuity ≥ 6/7.5 in the better eye with habitual correction, self-refraction, noncycloplegic autorefraction, and cycloplegic subjective refraction were 34.8%, 92.4%, 99.5% and 99.8%, respectively (self-refraction versus cycloplegic subjective refraction, P<0.001). The mean difference between cycloplegic subjective refraction and noncycloplegic autorefraction (which was more myopic) was significant (-0.328 diopter [D]; Wilcoxon signed-rank test P<0.001), whereas cycloplegic subjective refraction and self-refraction did not differ significantly (-0.009 D; Wilcoxon signed-rank test P = 0.33). Spherical equivalent differed by ≥ 1.0 D in either direction from cycloplegic subjective refraction more frequently among right eyes for self-refraction (11.2%) than noncycloplegic autorefraction (6.0%; P = 0.002). Self-refraction power that differed by ≥ 1.0 D from cycloplegic subjective refractive error (11.2%) was significantly associated with presenting without spectacles (P = 0.011) and with greater absolute power of both spherical (P = 0.025) and cylindrical (P = 0.022) refractive error. Self-refraction seems to be less prone to accommodative inaccuracy than noncycloplegic autorefraction, another modality appropriate for use in areas where access to eye care providers is limited. Visual results seem to be comparable. Greater cylindrical power is associated with less accurate results; the adjustable glasses used in this study cannot correct astigmatism. Further studies of the practical applications of this modality are warranted. Proprietary or commercial disclosure may be found after the references. Copyright © 2011 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Evaluation of drug administration errors in a teaching hospital
2012-01-01
Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were number, type and clinical importance of errors and associated risk factors. Drug administration error rate was calculated with and without wrong time errors. Relationship between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time error of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risks of error in a drug administration were for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patient under the nurse's care. Conclusion Medication administration errors are frequent. The identification of its determinants helps to undertake designed interventions. PMID:22409837
Evaluation of drug administration errors in a teaching hospital.
Berdot, Sarah; Sabatier, Brigitte; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre
2012-03-12
Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Prospective study based on disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were number, type and clinical importance of errors and associated risk factors. Drug administration error rate was calculated with and without wrong time errors. Relationship between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time error of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risks of error in a drug administration were for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patient under the nurse's care. Medication administration errors are frequent. The identification of its determinants helps to undertake designed interventions.
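As a quick arithmetic check, the two headline error rates reported in this study follow directly from its counts (a minimal sketch; variable names are ours):

```python
# Recomputing the reported administration-error rates from the counts
# given in the abstract (1501 opportunities for error, 415 erroneous
# administrations, 312 wrong-time errors, 10 of which co-occurred with
# another error type).
opportunities = 1501
erroneous_administrations = 415
wrong_time = 312
wrong_time_with_other_error = 10

rate_all = erroneous_administrations / opportunities
print(f"error rate incl. wrong-time errors: {rate_all:.1%}")    # 27.6%

# Removing purely wrong-time errors leaves 113 erroneous administrations.
purely_wrong_time = wrong_time - wrong_time_with_other_error
rate_without = (erroneous_administrations - purely_wrong_time) / opportunities
print(f"error rate excl. wrong-time errors: {rate_without:.1%}")  # 7.5%
```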
Acheampong, Franklin; Tetteh, Ashalley Raymond; Anto, Berko Panyin
2016-12-01
This study determined the incidence, types, clinical significance, and potential causes of medication administration errors (MAEs) at the emergency department (ED) of a tertiary health care facility in Ghana. This study used a cross-sectional nonparticipant observational technique. Study participants (nurses) were observed preparing and administering medication at the ED of a 2000-bed tertiary care hospital in Accra, Ghana. The observations were then compared with patients' medication charts, and identified errors were clarified with staff for possible causes. Of the 1332 observations made, involving 338 patients and 49 nurses, 362 had errors, representing 27.2%. However, the error rate excluding "lack of drug availability" fell to 12.8%. Without wrong time error, the error rate was 22.8%. The 2 most frequent error types were omission (n = 281, 77.6%) and wrong time (n = 58, 16%) errors. Omission error was mainly due to unavailability of medicine, 48.9% (n = 177). Although only one of the errors was potentially fatal, 26.7% were definitely clinically severe. The common themes that dominated the probable causes of MAEs were unavailability, staff factors, patient factors, prescription, and communication problems. This study gives credence to similar studies in different settings that MAEs occur frequently in the ED of hospitals. Most of the errors identified were not potentially fatal; however, preventive strategies need to be used to make life-saving processes such as drug administration in such specialized units error-free.
Parenting styles as a tobacco-use protective factor among Brazilian adolescents.
Tondowski, Cláudia S; Bedendo, André; Zuquetto, Carla; Locatelli, Danilo P; Opaleye, Emérita S; Noto, Ana R
2015-12-01
The objective was to evaluate the relationship between tobacco use (previous month and frequent use), parenting styles and parental smoking behavior in a sample of high school students. Participants were recruited from public and private high schools from 27 Brazilian state capitals (N = 17,246). The overall prevalence of tobacco use in life was 25.2%; 15.3% in the previous year; 8.6% in the previous month; and 3.2% for frequent use. Tobacco use by the parents was reported by 28.6% of the students. Regarding parenting styles, 39.2% were classified as negligent, 33.3% authoritative, 15.6% as indulgent and 11.9% authoritarian. Compared to adolescents with authoritative parents, those with negligent or indulgent parents were more prone to report tobacco use during the last month or frequent use. This study showed an association between parenting styles and tobacco use by high school students. Authoritative parents were associated with protection from frequent and previous month tobacco use among adolescents.
Errors in veterinary practice: preliminary lessons for building better veterinary teams.
Kinnison, T; Guile, D; May, S A
2015-11-14
Case studies in two typical UK veterinary practices were undertaken to explore teamwork, including interprofessional working. Each study involved one week of whole-team observation based on practice locations (reception, operating theatre), one week of shadowing six focus individuals (veterinary surgeons, veterinary nurses and administrators) and a final week consisting of semistructured interviews regarding teamwork. Errors emerged as a finding of the study. The definition of errors was inclusive, pertaining to inputs or omitted actions with potential adverse outcomes for patients, clients or the practice. The 40 identified instances could be grouped into clinical errors (dosing/drugs, surgical preparation, lack of follow-up), lost item errors, and most frequently, communication errors (records, procedures, missing face-to-face communication, mistakes within face-to-face communication). The qualitative nature of the study allowed the underlying causes of the errors to be explored. In addition to some individual mistakes, system faults were identified as a major cause of errors. Observed examples and interviews demonstrated several challenges to interprofessional teamworking which may cause errors, including: lack of time, part-time staff leading to frequent handovers, branch differences and individual veterinary surgeon work preferences. Lessons are drawn for building better veterinary teams and implications for Disciplinary Proceedings are considered. British Veterinary Association.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, H; Gao, Y; Liu, T
Purpose: To develop quantitative clinical guidelines for choosing between supine Deep Inspiratory Breath Hold (DIBH) and prone free-breathing treatments for breast patients, we applied 3D deformable phantoms in Monte Carlo simulations to predict the corresponding dose to the organs at risk (OARs). Methods: The RPI adult female phantom (two selected cup sizes: A and D) was used to represent the female patient, and it was simulated using the MCNP6 Monte Carlo code. Doses to OARs were investigated for supine DIBH and prone treatments, considering the two breast sizes. The fluence maps of the 6-MV opposed tangential fields were exported. In the Monte Carlo simulation, the fluence maps allow each simulated photon particle to be weighted in the final dose calculation. The relative error of all dose calculations was kept below 5% by simulating 3×10⁷ photons for each projection. Results: In terms of dosimetric accuracy, the RPI adult female phantom with cup size D in DIBH positioning matched a DIBH treatment plan of the patient. Based on the simulation results, for the cup size D phantom, prone positioning reduced the cardiac dose and the dose to other OARs, while the cup size A phantom benefits more from DIBH positioning. Comparing simulation results for the cup size A and D phantoms, dose to OARs was generally higher for the large breast size due to increased scattering arising from a larger portion of the body lying in the primary beam. The lower dose registered for the heart in the large-breast phantom in prone positioning was due to the increased distance between the heart and the primary beam when the breast was pendulous. Conclusion: Our 3D deformable phantoms appear to be an excellent tool for predicting dose to the OARs in the supine DIBH and prone positions, which might support quantitative clinical decisions. Further investigation will be conducted. National Institutes of Health R01EB015478.
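The choice of 3×10⁷ photons per projection to keep relative error below 5% reflects the usual 1/√N scaling of Monte Carlo statistical uncertainty. A toy tally illustrating that scaling (illustrative only; not the MCNP6 phantom simulation of the abstract):

```python
# Monte Carlo statistical error scales as 1/sqrt(N): a toy dose tally
# showing why more histories give a smaller relative error.
import random, math

def toy_dose_tally(n_histories, hit_prob=0.1, dose_per_hit=1.0):
    """Score a trivial tally; return (mean, relative standard error)."""
    scores = [dose_per_hit if random.random() < hit_prob else 0.0
              for _ in range(n_histories)]
    mean = sum(scores) / n_histories
    var = sum((s - mean) ** 2 for s in scores) / (n_histories - 1)
    rel_err = math.sqrt(var / n_histories) / mean
    return mean, rel_err

random.seed(1)
for n in (10**3, 10**4, 10**5, 10**6):
    mean, rel_err = toy_dose_tally(n)
    # relative error shrinks roughly by sqrt(10) per decade of histories
    print(f"N={n:>7}: mean={mean:.4f}, relative error={rel_err:.3%}")
```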
Kuikka, Liisa; Pitkälä, Kaisu
2014-01-01
Objective. To study coping differences between young and experienced GPs in primary care who experience medical errors and uncertainty. Design. Questionnaire-based survey (self-assessment) conducted in 2011. Setting. Finnish primary practice offices in Southern Finland. Subjects. Finnish GPs engaged in primary health care from two different respondent groups: young (working experience ≤ 5 years, n = 85) and experienced (working experience > 5 years, n = 80). Main outcome measures. Outcome measures included experiences and attitudes expressed by the included participants towards medical errors and tolerance of uncertainty, their coping strategies, and factors that may influence (positively or negatively) sources of errors. Results. In total, 165/244 GPs responded (response rate: 68%). Young GPs expressed significantly more often fear of committing a medical error (70.2% vs. 48.1%, p = 0.004) and admitted more often than experienced GPs that they had committed a medical error during the past year (83.5% vs. 68.8%, p = 0.026). Young GPs were less prone to apologize to a patient for an error (44.7% vs. 65.0%, p = 0.009) and found, more often than their more experienced colleagues, on-site consultations and electronic databases useful for avoiding mistakes. Conclusion. Experienced GPs seem to tolerate uncertainty better and also seem to fear medical errors less than their young colleagues. Young and more experienced GPs use different coping strategies for dealing with medical errors. Implications. When GPs become more experienced, they seem to get better at coping with medical errors. Means to support these skills should be studied in future research. PMID:24914458
Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A
2018-04-15
For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
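A minimal sketch of the generic SIMEX recipe the authors extend: re-estimate after adding extra simulated measurement error at levels λ, then extrapolate the estimates back to λ = −1 (no error). It is shown here for an ordinary least-squares slope with a mismeasured covariate rather than the Cox model of the paper; the error variance, λ grid, and quadratic extrapolant are illustrative choices.

```python
# Generic SIMEX sketch: add extra measurement error at increasing
# levels lambda, watch the estimate degrade, then extrapolate back to
# lambda = -1 (no error). Shown for OLS on a mismeasured covariate.
import numpy as np

rng = np.random.default_rng(0)
n, beta_true, sigma_u = 2000, 1.5, 0.8
x = rng.normal(size=n)
y = beta_true * x + rng.normal(scale=0.5, size=n)
w = x + rng.normal(scale=sigma_u, size=n)        # error-prone covariate

def ols_slope(w, y):
    return np.cov(w, y)[0, 1] / np.var(w, ddof=1)

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
estimates = []
for lam in lambdas:
    sims = [ols_slope(w + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n), y)
            for _ in range(50)]                   # average over B=50 sims
    estimates.append(np.mean(sims))

coef = np.polyfit(lambdas, estimates, 2)          # quadratic extrapolant
print("naive (attenuated) estimate:", estimates[0])
print("SIMEX estimate:", np.polyval(coef, -1.0))  # close to beta_true
```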
Parkhurst, Justin
2016-07-20
The field of cognitive psychology has increasingly provided scientific insights to explore how humans are subject to unconscious sources of evidentiary bias, leading to errors that can affect judgement and decision-making. Increasingly these insights are being applied outside the realm of individual decision-making to the collective arena of policy-making as well. A recent editorial in this journal has particularly lauded the work of the World Bank for undertaking an open and critical reflection on sources of unconscious bias in its own expert staff that could undermine achievement of its key goals. The World Bank case indeed serves as a remarkable case of a global policy-making agency making its own critical reflections transparent for all to see. Yet the recognition that humans are prone to cognitive errors has been known for centuries, and the scientific exploration of such biases provided by cognitive psychology is now well-established. What still remains to be developed, however, is a widespread body of work that can inform efforts to institutionalise strategies to mitigate the multiple sources and forms of evidentiary bias arising within administrative and policy-making environments. Addressing this gap will require a programme of conceptual and empirical work that supports robust development and evaluation of institutional bias mitigation strategies. The cognitive sciences provides a scientific basis on which to proceed, but a critical priority will now be the application of that science to improve policy-making within those agencies taking responsibility for social welfare and development programmes. © 2017 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Accidents of Electrical and Mechanical Works for Public Sector Projects in Hong Kong.
Wong, Francis K W; Chan, Albert P C; Wong, Andy K D; Hon, Carol K H; Choi, Tracy N Y
2018-03-10
A study on electrical and mechanical (E&M) works-related accidents for public sector projects provided the opportunity to gain a better understanding of the causes of accidents by analyzing the circumstances of all E&M works accidents. The research aims to examine accidents of E&M works which happened in public sector projects. A total of 421 E&M works-related accidents in the "Public Works Programme Construction Site Safety and Environmental Statistics" (PCSES) system were extracted for analysis. Two-step cluster analysis was conducted to classify the E&M accidents into different groups. The results identified three E&M accidents groups: (1) electricians with over 15 years of experience were prone to 'fall of person from height'; (2) electricians with zero to five years of experience were prone to 'slip, trip or fall on same level'; (3) air-conditioning workers with zero to five years of experience were prone to multiple types of accidents. Practical measures were recommended for each specific cluster group to avoid recurrence of similar accidents. The accident analysis would be vital for industry practitioners to enhance the safety performance of public sector projects. This study contributes to filling the knowledge gap of how and why E&M accidents occur and promulgating preventive measures for E&M accidents which have been under researched.
Accidents of Electrical and Mechanical Works for Public Sector Projects in Hong Kong
Wong, Francis K. W.; Chan, Albert P. C.; Wong, Andy K. D.; Choi, Tracy N. Y.
2018-01-01
A study on electrical and mechanical (E&M) works-related accidents for public sector projects provided the opportunity to gain a better understanding of the causes of accidents by analyzing the circumstances of all E&M works accidents. The research aims to examine accidents of E&M works which happened in public sector projects. A total of 421 E&M works-related accidents in the “Public Works Programme Construction Site Safety and Environmental Statistics” (PCSES) system were extracted for analysis. Two-step cluster analysis was conducted to classify the E&M accidents into different groups. The results identified three E&M accidents groups: (1) electricians with over 15 years of experience were prone to ‘fall of person from height’; (2) electricians with zero to five years of experience were prone to ‘slip, trip or fall on same level’; (3) air-conditioning workers with zero to five years of experience were prone to multiple types of accidents. Practical measures were recommended for each specific cluster group to avoid recurrence of similar accidents. The accident analysis would be vital for industry practitioners to enhance the safety performance of public sector projects. This study contributes to filling the knowledge gap of how and why E&M accidents occur and promulgating preventive measures for E&M accidents which have been under researched. PMID:29534429
What a Smile Means: Contextual Beliefs and Facial Emotion Expressions in a Non-verbal Zero-Sum Game
Pádua Júnior, Fábio P.; Prado, Paulo H. M.; Roeder, Scott S.; Andrade, Eduardo B.
2016-01-01
Research into the authenticity of facial emotion expressions often focuses on the physical properties of the face while paying little attention to the role of beliefs in emotion perception. Further, the literature most often investigates how people express a pre-determined emotion rather than what facial emotion expressions people strategically choose to express. To fill these gaps, this paper proposes a non-verbal zero-sum game – the Face X Game – to assess the role of contextual beliefs and strategic displays of facial emotion expression in interpersonal interactions. This new research paradigm was used in a series of three studies, where two participants are asked to play the role of the sender (individual expressing emotional information on his/her face) or the observer (individual interpreting the meaning of that expression). Study 1 examines the outcome of the game with reference to the sex of the pair, where senders won more frequently when the pair was comprised of at least one female. Study 2 examines the strategic display of facial emotion expressions. The outcome of the game was again contingent upon the sex of the pair. Among female pairs, senders won the game more frequently, replicating the pattern of results from study 1. We also demonstrate that senders who strategically express an emotion incongruent with the valence of the event (e.g., smile after seeing a negative event) are able to mislead observers, who tend to hold a congruent belief about the meaning of the emotion expression. If sending an incongruent signal helps to explain why female senders win more frequently, it logically follows that female observers were more prone to hold a congruent, and therefore inaccurate, belief. This prospect implies that while female senders are willing and/or capable of displaying fake smiles, paired-female observers are not taking this into account. Study 3 investigates the role of contextual factors by manipulating female observers’ beliefs. When prompted to think in an incongruent manner, these observers significantly improve their performance in the game. These findings emphasize the role that contextual factors play in emotion perception—observers’ beliefs do indeed affect their judgments of facial emotion expressions. PMID:27148142
Towards fault tolerant adiabatic quantum computation.
Lidar, Daniel A
2008-04-25
I show how to protect adiabatic quantum computation (AQC) against decoherence and certain control errors, using a hybrid methodology involving dynamical decoupling, subsystem and stabilizer codes, and energy gaps. Corresponding error bounds are derived. As an example, I show how to perform decoherence-protected AQC against local noise using at most two-body interactions.
ERIC Educational Resources Information Center
Mirandola, C.; Paparella, G.; Re, A. M.; Ghetti, S.; Cornoldi, C.
2012-01-01
Enhanced semantic processing is associated with increased false recognition of items consistent with studied material, suggesting that children with poor semantic skills could produce fewer false memories. We examined whether memory errors differed in children with Attention Deficit/Hyperactivity Disorder (ADHD) and controls. Children viewed 18…
NASA Astrophysics Data System (ADS)
Park, Jisang
In this dissertation, we investigate MIMO stability margin inference for a large number of controllers using pre-established stability margins of a small number of nu-gap-wise adjacent controllers. The generalized stability margin and the nu-gap metric are inherently able to handle MIMO system analysis without the need to repeat multiple channel-by-channel SISO analyses. This research consists of three parts: (i) development of a decision support tool for inference of the stability margin, (ii) computational considerations for yielding the maximal stability margin with the minimal nu-gap metric in a less conservative manner, and (iii) experiment design for estimating the generalized stability margin with an assured error bound. A modern problem from aerospace control involves the certification of a large set of potential controllers with either a single plant or a fleet of potential plant systems, with both plants and controllers being MIMO and, for the moment, linear. Experiments on a limited number of controller/plant pairs should establish the stability and a certain level of margin of the complete set. We consider this certification problem for a set of controllers and provide algorithms for selecting an efficient subset for testing. This is done for a finite set of candidate controllers and, at least for SISO plants, for an infinite set. In doing this, the nu-gap metric is the main tool. We provide a theorem restricting the radius of a ball in the parameter space so that a controller guarantees a prescribed level of stability and performance if its parameters are contained in the ball. Computational examples are given, including the certification of an aircraft engine controller. The overarching aim is to introduce truly MIMO margin calculations and to understand their efficacy in certifying stability over a set of controllers and in replacing legacy single-loop gain and phase margin calculations. We consider methods for the computation of maximal MIMO stability margins b(P,C), minimal nu-gap metrics δν, and the maximal difference between these two values, through the use of scaling and weighting functions. We propose simultaneous scaling selections that attempt to maximize the generalized stability margin and minimize the nu-gap. The minimization of the nu-gap by scaling involves a non-convex optimization. We modify the XY-centering algorithm to handle this non-convexity. This is done for applications in controller certification. Estimating the generalized stability margin with an accurate error bound has significant impact on controller certification. We analyze an error bound of the generalized stability margin as the infinity norm of the MIMO empirical transfer function estimate (ETFE). Input signal design to reduce the error of the estimate is also studied. We suggest running the system for a certain amount of time prior to recording each output data set. The assured upper bound of the estimation error can be tuned by the amount of this pre-experiment.
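For SISO plants the nu-gap metric has a convenient frequency-domain form, δν(P1, P2) = sup_ω |P1(jω) − P2(jω)| / √((1 + |P1(jω)|²)(1 + |P2(jω)|²)), valid when a winding-number condition holds. Below is a minimal numerical sketch on a frequency grid; the example plants are arbitrary and the winding-number check is omitted.

```python
# Pointwise SISO nu-gap: sup over frequency of the chordal distance
# between two frequency responses (winding-number condition not checked).
import numpy as np

def freq_resp(num, den, w):
    """Frequency response of a rational transfer function num/den."""
    jw = 1j * w
    return np.polyval(num, jw) / np.polyval(den, jw)

def nu_gap(num1, den1, num2, den2, w=np.logspace(-2, 2, 2000)):
    p1, p2 = freq_resp(num1, den1, w), freq_resp(num2, den2, w)
    chordal = np.abs(p1 - p2) / np.sqrt(
        (1 + np.abs(p1) ** 2) * (1 + np.abs(p2) ** 2))
    return chordal.max()

# Two nearby first-order plants, 1/(s+1) and 1/(s+1.2):
print(nu_gap([1], [1, 1], [1], [1, 1.2]))   # small gap, about 0.09
```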
Applicability of AgMERRA Forcing Dataset to Fill Gaps in Historical in-situ Meteorological Data
NASA Astrophysics Data System (ADS)
Bannayan, M.; Lashkari, A.; Zare, H.; Asadi, S.; Salehnia, N.
2015-12-01
Integrated assessment studies of food production systems use crop models to simulate the effects of climate and socio-economic changes on food security. Climate forcing data are one of the key inputs of crop models. This study evaluated the performance of the AgMERRA climate forcing dataset for filling gaps in historical in-situ meteorological data for different climatic regions of Iran. The AgMERRA dataset was intercompared with an in-situ observational dataset for daily maximum and minimum temperature and precipitation during the 1980-2010 period via Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and Mean Bias Error (MBE) for 17 stations in four climatic regions: humid and moderate; cold; dry and arid; and hot and humid. Moreover, the probability distribution function and cumulative distribution function were compared between model and observed data. The measures of agreement between AgMERRA data and observed data demonstrated small errors in the model data for all stations. Except for stations located in cold regions, the model data showed under-prediction of daily maximum temperature and precipitation; however, this was not significant. In addition, the probability distribution function and cumulative distribution function showed the same trend for all stations between model and observed data. Therefore, the AgMERRA dataset is highly reliable for filling gaps in historical observations in different climatic regions of Iran, and it could also be applied as a basis for future climate scenarios.
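For reference, the three agreement measures named above have simple standard definitions; a minimal sketch with toy numbers (not the study's station data):

```python
# The three agreement measures used in the abstract, as conventionally
# defined. Toy data; not the AgMERRA/station values.
import numpy as np

def rmse(model, obs):
    return np.sqrt(np.mean((model - obs) ** 2))

def mae(model, obs):
    return np.mean(np.abs(model - obs))

def mbe(model, obs):
    # Negative MBE indicates under-prediction by the model.
    return np.mean(model - obs)

obs = np.array([31.2, 29.8, 33.1, 30.5])      # e.g. daily Tmax (deg C)
model = np.array([30.6, 29.9, 32.2, 30.1])    # AgMERRA-like estimates
print(rmse(model, obs), mae(model, obs), mbe(model, obs))
```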
Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.
2018-01-01
Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737
Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D
2018-01-01
Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.
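A minimal sketch of the parallel-versions check these authors describe: run the same scenario through independently built implementations and flag any output whose versions disagree by more than the ±5% materiality threshold. Function names, output labels, and values are illustrative assumptions.

```python
# Comparing parallel model versions: outputs that agree across all
# independently built versions are treated as error-free; disagreements
# beyond +/-5% are flagged as material. Values here are illustrative.
MATERIAL = 0.05

def compare_versions(outputs: dict[str, dict[str, float]], reference: str):
    ref = outputs[reference]
    flags = []
    for version, vals in outputs.items():
        if version == reference:
            continue
        for name, value in vals.items():
            diff = (value - ref[name]) / ref[name]
            if abs(diff) > MATERIAL:
                flags.append((version, name, f"{diff:+.0%}"))
    return flags

outputs = {
    "named_matrices":  {"in_care": 1000.0, "on_treatment": 640.0},
    "named_cells":     {"in_care": 1030.0, "on_treatment": 690.0},
    "column_row_refs": {"in_care": 1260.0, "on_treatment": 980.0},
}
print(compare_versions(outputs, reference="named_matrices"))
# [('named_cells', 'on_treatment', '+8%'),
#  ('column_row_refs', 'in_care', '+26%'),
#  ('column_row_refs', 'on_treatment', '+53%')]
```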
Chana, Narinder; Porat, Talya; Whittlesea, Cate; Delaney, Brendan
2017-03-01
Electronic prescribing has benefited from computerised clinical decision support systems (CDSSs); however, no published studies have evaluated the potential for a CDSS to support GPs in prescribing specialist drugs. To identify potential weaknesses and errors in the existing process of prescribing specialist drugs that could be addressed in the development of a CDSS. Semi-structured interviews with key informants followed by an observational study involving GPs in the UK. Twelve key informants were interviewed to investigate the use of CDSSs in the UK. Nine GPs were observed while performing case scenarios depicting requests from hospitals or patients to prescribe a specialist drug. Activity diagrams, hierarchical task analysis, and systematic human error reduction and prediction approach analyses were performed. The current process of prescribing specialist drugs by GPs is prone to error. Errors of omission due to lack of information were the most common errors, which could potentially result in a GP prescribing a specialist drug that should only be prescribed in hospitals, or prescribing a specialist drug without reference to a shared care protocol. Half of all possible errors in the prescribing process had a high probability of occurrence. A CDSS supporting GPs during the process of prescribing specialist drugs is needed. This could, first, support the decision making of whether or not to undertake prescribing, and, second, provide drug-specific parameters linked to shared care protocols, which could reduce the errors identified and increase patient safety. © British Journal of General Practice 2017.
Target Uncertainty Mediates Sensorimotor Error Correction
Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M.
2017-01-01
Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors both by improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects' scores revealed that participants corrected for errors just enough to avoid a significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one's response. By suggesting that subjects' decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated. PMID:28129323
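The abstract does not give the loss function's exact form; a schematic sketch, assuming quadratic error and effort terms, with e the estimated error, r the applied correction, and γ an effort weight:

    L(r) = (e - r)^2 + \gamma\, r^2,
    \qquad
    r^* = \arg\min_r L(r) = \frac{e}{1 + \gamma}.

For γ > 0 the optimal correction is only partial, consistent with the incomplete compensation reported; greater uncertainty shrinks the posterior error estimate e, shrinking the correction further.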
Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor.
Biagi, Lyvia; Ramkissoon, Charrise M; Facchinetti, Andrea; Leal, Yenny; Vehi, Josep
2017-06-12
Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced version of a previously employed technique was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor. This was also reported in the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, e.g., the artificial pancreas, employing this kind of sensor.
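As a point of reference, MARD is the mean of |CGM − BG|/BG expressed as a percentage, and the calibration-error component above is a linear gain/offset relationship; a minimal NumPy sketch (illustrative paired readings, not study data):

    import numpy as np

    def mard(cgm, ref):
        """Mean absolute relative difference (%) between CGM and reference BG."""
        cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
        return 100.0 * np.mean(np.abs(cgm - ref) / ref)

    def fit_gain_offset(cgm, ref):
        """Least-squares fit of cgm ~ gain * ref + offset (linear calibration error)."""
        gain, offset = np.polyfit(ref, cgm, 1)
        return gain, offset

    ref = np.array([90, 120, 160, 200, 240], float)  # reference BG (mg/dL)
    cgm = np.array([98, 126, 150, 214, 230], float)  # paired sensor readings
    print(f"MARD = {mard(cgm, ref):.2f}%")           # MARD = 6.26% for these toy values
    print("gain, offset =", fit_gain_offset(cgm, ref))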
Development and Validation of an Individualism-Collectivism Scale
1984-07-01
different parts of the world, and came to a similar conclusion: mental illness is more frequent in socially disintegrated communities and among people...neighborhood, people who have lost their parents at an early age, and individuals in unhappy marriages are more prone to some kinds of mental illness than...problems. Social Problems and Social Support Besides mental health, social problems in a society may also be affected by the presence or absence of social
Fidelity of DNA Replication in Normal and Malignant Human Breast Cells
1998-07-01
synthesome has been extensively demonstrated to carry out full-length DNA replication in vitro, and to accurately depict the DNA replication process as it...occurs in the intact cell. By examining the fidelity of the DNA replication process carried out by the DNA synthesome from a number of breast cell types...we have demonstrated for the first time that the cellular DNA replication machinery of malignant human breast cells is significantly more error-prone than that of non-malignant human breast cells.
Addition to the Lewis Chemical Equilibrium Program to allow computation from coal composition data
NASA Technical Reports Server (NTRS)
Sevigny, R.
1980-01-01
Changes made to the Coal Gasification Project are reported. The program was originally developed to model equilibrium combustion in rocket engines; it can be applied directly to the entrained-flow coal gasification process. The particular problem addressed is the reduction of the coal data into a form suitable for the program, since the manual process is involved and error-prone. A similar problem, relating the program's normal output to parameters meaningful to the coal gasification process, is also addressed.
Effects of Non-Normal Outlier-Prone Error Distribution on Kalman Filter Track
1991-09-01
other possibilities exist. For example, the GST (Generic Statistical Tracker) uses four motion models [Ref. 4]. The GST keeps track of both the target... Although this procedure is not easily statistically interpretable, it was used for the sake of comparison with the other... [Remainder of excerpt is tabular output and a FORTRAN menu fragment listing the selectable target motion models, including a second-order Gauss-Markov target and a random tour target.]
Computer Aided Software Engineering (CASE) Environment Issues.
1987-06-01
tasks tend to be error-prone and slow when done by humans. They are excellent candidates for automation using a computer. (MacLennan, 1981, p. 512)...CASE resources; * human resources, consisting of the people who use and facilitate utilization (in the case of manual resources) of the environment...engineering process in a given environment...the nature of manual and human resources. CASE resources should provide the software engineering team
Artificial Intelligence for VHSIC Systems Design (AIVD) User Reference Manual
1988-12-01
The goal of this program was to develop prototype tools which would use artificial intelligence techniques to extend the Architecture Design and Assessment (ADAS) software capabilities. These techniques were applied in a number of ways to increase the productivity of ADAS users. AIVD will reduce the amount of time spent on tedious and error-prone steps. It will also provide documentation that will assist users in verifying that the models they build are correct. Finally, AIVD will help make ADAS models more reusable.
The OPL Access Control Policy Language
NASA Astrophysics Data System (ADS)
Alm, Christopher; Wolf, Ruben; Posegga, Joachim
Existing policy languages suffer from a limited ability to directly and elegantly express high-level access control principles such as history-based separation of duty [22], binding of duty [26], context constraints [24], Chinese wall properties [10], and obligations [20]. It is often difficult to extend a language to retrofit these features once they are required, or it is necessary to use complicated and complex language constructs to express such concepts. The latter, however, is cumbersome and error-prone for humans dealing with policy administration.
A filtering method to generate high quality short reads using illumina paired-end technology.
Eren, A Murat; Vineis, Joseph H; Morrison, Hilary G; Sogin, Mitchell L
2013-01-01
Consensus between independent reads improves the accuracy of genome and transcriptome analyses; however, a lack of consensus between very similar sequences in metagenomic studies can, and often does, represent natural variation of biological significance. Machine-assigned quality scores on next-generation platforms do not necessarily correlate with accuracy. Here, we describe using the overlap of paired-end, short sequence reads to identify error-prone reads in marker gene analyses and their contribution to spurious OTUs following clustering analysis using QIIME. Our approach can also reduce error in shotgun sequencing data generated from libraries with small, tightly constrained insert sizes. The open-source implementation of this algorithm in the Python programming language, with user instructions, can be obtained from https://github.com/meren/illumina-utils.
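The core consensus idea can be sketched in a few lines of Python (simplified to a fully overlapping pair with a mismatch budget; the actual tool at https://github.com/meren/illumina-utils locates the overlap, uses quality information, and applies configurable thresholds):

    # Simplified overlap check for a fully overlapping paired-end library.
    # Real reads require locating the overlap region first.

    def reverse_complement(seq):
        return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

    def passes_overlap_filter(read1, read2, max_mismatches=0):
        """Keep the pair only if the overlapping bases (here: the whole read,
        assuming complete overlap) agree within the mismatch budget."""
        r2 = reverse_complement(read2)
        mismatches = sum(a != b for a, b in zip(read1, r2))
        return mismatches <= max_mismatches

    read1 = "ACGTTGCA"
    read2 = reverse_complement("ACGTTGCA")      # a perfectly agreeing pair
    print(passes_overlap_filter(read1, read2))  # True

Pairs that fail the check are the error-prone reads whose removal reduces spurious OTUs downstream.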
Gamification of Clinical Routine: The Dr. Fill Approach.
Bukowski, Mark; Kühn, Martin; Zhao, Xiaoqing; Bettermann, Ralf; Jonas, Stephan
2016-01-01
Gamification is used in clinical contexts, notably in health care education, and it has shown great promise for improving the performance of health care staff in their daily routine. In this work we focus on the medication sorting task, which is performed manually in hospitals. This task is error-prone and must be performed daily, and medication errors are critical and can lead to serious complications. We present a real-world gamification approach to the medication sorting task in a patient's daily pill organizer. The player of the game must sort the correct medication into the correct dispenser slots and is rewarded or punished in real time. At the end of the game, a score is given and the user can register in a leaderboard.
Sources of Error in Substance Use Prevalence Surveys
Johnson, Timothy P.
2014-01-01
Population-based estimates of substance use patterns have been regularly reported now for several decades. Concerns with the quality of the survey methodologies employed to produce those estimates date back almost as far. Those concerns have led to a considerable body of research specifically focused on understanding the nature and consequences of survey-based errors in substance use epidemiology. This paper reviews and summarizes that empirical research by organizing it within a total survey error model framework that considers multiple types of representation and measurement errors. Gaps in our knowledge of error sources in substance use surveys and areas needing future research are also identified. PMID:27437511
Does Menstruation Explain Gender Gaps in Work Absenteeism?
ERIC Educational Resources Information Center
Herrmann, Mariesa A.; Rockoff, Jonah E.
2012-01-01
Ichino and Moretti (2009) find that menstruation may contribute to gender gaps in absenteeism and earnings, based on evidence that absences of young female Italian bank employees follow a 28-day cycle. We find this evidence is not robust to the correction of coding errors or small changes in specification, and we find no evidence of increased…
Tatsumi, Daisaku; Nakada, Ryosei; Ienaga, Akinori; Yomoda, Akane; Inoue, Makoto; Ichida, Takao; Hosono, Masako
2012-01-01
The tolerance of the Backup diaphragm (Backup JAW) setting in Elekta linacs is specified as 2 mm in the AAPM TG-142 report; however, no tolerance or quality assurance procedure is provided for volumetric modulated arc therapy (VMAT). This paper describes the positional accuracy of the Backup JAWs and the quality assurance procedure required for VMAT. A sliding window test showed that the gap-width error of the Backup JAW must be less than 1.5 mm for prostate VMAT delivery. It was also confirmed that the gap-widths had been maintained within an error of 0.2 mm over the past year.
Predicted Errors In Children's Early Sentence Comprehension
Gertner, Yael; Fisher, Cynthia
2012-01-01
Children use syntax to interpret sentences and learn verbs; this is syntactic bootstrapping. The structure-mapping account of early syntactic bootstrapping proposes that a partial representation of sentence structure, the set of nouns occurring with the verb, guides initial interpretation and provides an abstract format for new learning. This account predicts early successes, but also telltale errors: Toddlers should be unable to tell transitive sentences from other sentences containing two nouns. In testing this prediction, we capitalized on evidence that 21-month-olds use what they have learned about noun order in English sentences to understand new transitive verbs. In two experiments, 21-month-olds applied this noun-order knowledge to two-noun intransitive sentences, mistakenly assigning different interpretations to “The boy and the girl are gorping!” and “The girl and the boy are gorping!”. This suggests that toddlers exploit partial representations of sentence structure to guide sentence interpretation; these sparse representations are useful, but error-prone. PMID:22525312
Metrics to quantify the importance of mixing state for CCN activity
Ching, Joseph; Fast, Jerome; West, Matthew; ...
2017-06-21
It is commonly assumed that models are more prone to errors in predicted cloud condensation nuclei (CCN) concentrations when the aerosol populations are externally mixed. In this work we investigate this assumption by using the mixing state index (χ) proposed by Riemer and West (2013) to quantify the degree of external and internal mixing of aerosol populations. We combine this metric with particle-resolved model simulations to quantify error in CCN predictions when mixing state information is neglected, exploring a range of scenarios that cover different conditions of aerosol aging. We show that mixing state information does indeed become unimportant for more internally mixed populations, more precisely for populations with χ larger than 75%. For more externally mixed populations (χ below 20%) the relationship of χ and the error in CCN predictions is not unique and ranges from lower than -40% to about 150%, depending on the underlying aerosol population and the environmental supersaturation. We explain the reasons for this behavior with detailed process analyses.
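For orientation, the Riemer and West (2013) index is built from mass-fraction entropies: each particle's diversity is exp of its species entropy, the alpha diversity is the mass-weighted average particle diversity, the gamma diversity comes from bulk mass fractions, and χ = (D_α − 1)/(D_γ − 1). A toy NumPy sketch of the index itself (two-species population for illustration; the paper couples χ to particle-resolved simulations, which this does not reproduce):

    import numpy as np

    def mixing_state_index(masses):
        """Mixing state index chi per Riemer and West (2013).
        masses: shape (n_particles, n_species), per-particle species mass."""
        masses = np.asarray(masses, float)
        p_i = masses.sum(axis=1) / masses.sum()             # particle mass fractions
        frac = masses / masses.sum(axis=1, keepdims=True)   # in-particle species fractions
        bulk = masses.sum(axis=0) / masses.sum()            # bulk species fractions
        with np.errstate(divide="ignore", invalid="ignore"):
            h_i = -np.where(frac > 0, frac * np.log(frac), 0.0).sum(axis=1)
            h_gamma = -np.where(bulk > 0, bulk * np.log(bulk), 0.0).sum()
        d_alpha = np.exp(np.sum(p_i * h_i))  # mass-weighted avg particle diversity
        d_gamma = np.exp(h_gamma)            # bulk population diversity
        return (d_alpha - 1.0) / (d_gamma - 1.0)

    external = [[1.0, 0.0], [0.0, 1.0]]  # two pure particles -> chi = 0 (fully external)
    internal = [[0.5, 0.5], [0.5, 0.5]]  # identical mixtures -> chi = 1 (fully internal)
    print(mixing_state_index(external), mixing_state_index(internal))  # 0.0 1.0

The 75% and 20% thresholds discussed above are values of this χ expressed as percentages.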
Landmark-based elastic registration using approximating thin-plate splines.
Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H
2001-06-01
We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and allows landmark localization errors to be taken into account. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.
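For the isotropic, equal-weight case, a regularized (approximating) thin-plate spline is available off the shelf; a sketch using SciPy's RBFInterpolator, where the scalar smoothing parameter stands in for the landmark-error weighting (note a single scalar cannot express the paper's anisotropic, per-landmark error covariances):

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    src = rng.uniform(0, 100, size=(10, 2))                  # landmarks in source image
    true_shift = np.array([3.0, -2.0])
    dst = src + true_shift + rng.normal(0, 1.0, src.shape)   # noisy target landmarks

    # smoothing=0 interpolates the landmarks exactly; smoothing>0 approximates,
    # trading exact landmark fit for smoothness -- the regularized TPS idea.
    warp = RBFInterpolator(src, dst, kernel="thin_plate_spline", smoothing=10.0)

    query = np.array([[50.0, 50.0]])
    print(warp(query))  # approximately query + true_shift

With noisy landmarks, the approximating fit recovers the underlying smooth deformation instead of chasing each localization error, which is precisely the motivation stated in the abstract.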
The Sizing and Optimization Language, (SOL): Computer language for design problems
NASA Technical Reports Server (NTRS)
Lucas, Stephen H.; Scotti, Stephen J.
1988-01-01
The Sizing and Optimization Language (SOL), a new high-level, special-purpose computer language, was developed to expedite application of numerical optimization to design problems and to make the process less error-prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to modeling and optimizing a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.
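The abstract does not reproduce SOL syntax; purely as a loose analogue in Python with scipy.optimize (this is not SOL, and the sizing problem is hypothetical), the ingredients an OPTIMIZE description captures are an objective, design variables with bounds, and constraints stated close to their mathematical form:

    from scipy.optimize import minimize

    # Hypothetical sizing problem: minimize structural mass subject to a
    # stress constraint -- illustrative only, not a SOL program.
    def mass(x):            # x = [thickness, width]
        return 2.7 * x[0] * x[1] * 100.0        # density * volume

    def stress_margin(x):   # require stress <= allowable (margin >= 0)
        load, allowable = 1.0e4, 250.0
        return allowable - load / (x[0] * x[1])

    result = minimize(mass, x0=[1.0, 5.0],
                      bounds=[(0.1, 10.0), (1.0, 50.0)],
                      constraints=[{"type": "ineq", "fun": stress_margin}])
    print(result.x, result.fun)

SOL's contribution was to let the engineer state exactly this structure declaratively, with compile-time error checking, rather than wiring it up by hand.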
Lockhart, Joseph J; Satya-Murti, Saty
2017-11-01
Cognitive effort is an essential part of both forensic and clinical decision-making. Errors occur in both fields because the cognitive process is complex and prone to bias. We performed a selective review of full-text English language literature on cognitive bias leading to diagnostic and forensic errors. Earlier work (1970-2000) concentrated on classifying and raising bias awareness. Recently (2000-2016), the emphasis has shifted toward strategies for "debiasing." While the forensic sciences have focused on the control of misleading contextual cues, clinical debiasing efforts have relied on checklists and hypothetical scenarios. No single generally applicable and effective bias reduction strategy has emerged so far. Generalized attempts at bias elimination have not been particularly successful. It is time to shift focus to the study of errors within specific domains, and how to best communicate uncertainty in order to improve decision making on the part of both the expert and the trier-of-fact. © 2017 American Academy of Forensic Sciences.
High Quality Maize Centromere 10 Sequence Reveals Evidence of Frequent Recombination Events
Wolfgruber, Thomas K.; Nakashima, Megan M.; Schneider, Kevin L.; Sharma, Anupma; Xie, Zidian; Albert, Patrice S.; Xu, Ronghui; Bilinski, Paul; Dawe, R. Kelly; Ross-Ibarra, Jeffrey; Birchler, James A.; Presting, Gernot G.
2016-01-01
The ancestral centromeres of maize contain long stretches of the tandemly arranged CentC repeat. The abundance of tandem DNA repeats and centromeric retrotransposons (CR) has presented a significant challenge to completely assembling centromeres using traditional sequencing methods. Here, we report a nearly complete assembly of the 1.85 Mb maize centromere 10 from inbred B73 using PacBio technology and BACs from the reference genome project. The error rates estimated from overlapping BAC sequences are 7 × 10⁻⁶ and 5 × 10⁻⁵ for mismatches and indels, respectively. The number of gaps in the region covered by the reassembly was reduced from 140 in the reference genome to three. Three expressed genes are located between 92 and 477 kb from the inferred ancestral CentC cluster, which lies within the region of highest centromeric repeat density. The improved assembly increased the count of full-length CR from 5 to 55 and revealed a 22.7 kb segmental duplication that occurred approximately 121,000 years ago. Our analysis provides evidence of frequent recombination events in the form of partial retrotransposons, deletions within retrotransposons, chimeric retrotransposons, segmental duplications including higher order CentC repeats, a deleted CentC monomer, centromere-proximal inversions, and insertion of mitochondrial sequences. Double-strand DNA break (DSB) repair is the most plausible mechanism for these events and may be the major driver of centromere repeat evolution and diversity. In many cases examined here, DSB repair appears to be mediated by microhomology, suggesting that tandem repeats may have evolved to efficiently repair frequent DSBs in centromeres. PMID:27047500
Intravenous Chemotherapy Compounding Errors in a Follow-Up Pan-Canadian Observational Study.
Gilbert, Rachel E; Kozak, Melissa C; Dobish, Roxanne B; Bourrier, Venetia C; Koke, Paul M; Kukreti, Vishal; Logan, Heather A; Easty, Anthony C; Trbovich, Patricia L
2018-05-01
Intravenous (IV) compounding safety has garnered recent attention as a result of high-profile incidents, awareness efforts from the safety community, and increasingly stringent practice standards. New research with more-sensitive error detection techniques continues to reinforce that error rates with manual IV compounding are unacceptably high. In 2014, our team published an observational study that described three types of previously unrecognized and potentially catastrophic latent chemotherapy preparation errors in Canadian oncology pharmacies that would otherwise be undetectable. We expand on this research and explore whether additional potential human failures are yet to be addressed by practice standards. Field observations were conducted in four cancer center pharmacies in four Canadian provinces from January 2013 to February 2015. Human factors specialists observed and interviewed pharmacy managers, oncology pharmacists, pharmacy technicians, and pharmacy assistants as they carried out their work. Emphasis was on latent errors (potential human failures) that could lead to outcomes such as wrong drug, dose, or diluent. Given the relatively short observational period, no active failures or actual errors were observed. However, 11 latent errors in chemotherapy compounding were identified. In terms of severity, all 11 errors create the potential for a patient to receive the wrong drug or dose, which in the context of cancer care, could lead to death or permanent loss of function. Three of the 11 practices were observed in our previous study, but eight were new. Applicable Canadian and international standards and guidelines do not explicitly address many of the potentially error-prone practices observed. We observed a significant degree of risk for error in manual mixing practice. These latent errors may exist in other regions where manual compounding of IV chemotherapy takes place. Continued efforts to advance standards, guidelines, technological innovation, and chemical quality testing are needed.
The Data Gap in the EHR for Clinical Research Eligibility Screening.
Butler, Alex; Wei, Wei; Yuan, Chi; Kang, Tian; Si, Yuqi; Weng, Chunhua
2018-01-01
Much effort has been devoted to leveraging EHR data for matching patients into clinical trials. However, EHRs may not contain all important data elements for clinical research eligibility screening. To better design research-friendly EHRs, an important step is to identify data elements frequently used for eligibility screening but not yet available in EHRs. This study fills this knowledge gap. Using the Alzheimer's disease domain as an example, we performed text mining on the eligibility criteria text in ClinicalTrials.gov to identify frequently used eligibility criteria concepts. We compared them to the EHR data elements of a cohort of Alzheimer's disease patients to assess the data gap, using the OMOP Common Data Model to standardize the representations of both criteria concepts and EHR data elements. We identified the most common SNOMED CT concepts used in Alzheimer's disease trials, and found that 40% of common eligibility criteria concepts were not even defined in the concept space of the EHR dataset for a cohort of Alzheimer's disease patients, indicating that a significant data gap may impede EHR-based eligibility screening. The results of this study can be useful for designing targeted research data collection forms to help fill the data gap in the EHR.
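Once criteria concepts and EHR data elements share OMOP identifiers, the gap analysis reduces to a set difference over standardized concept IDs; a sketch with made-up OMOP-style IDs (not the study's actual mappings):

    # Illustrative concept IDs only -- not real mappings from the study.
    criteria_concepts = {4182210, 4043378, 37311999, 1112807}  # from trial criteria
    ehr_concepts      = {4182210, 1112807, 4329847}            # present in EHR data

    missing = criteria_concepts - ehr_concepts
    coverage_gap = len(missing) / len(criteria_concepts)
    print(f"{coverage_gap:.0%} of criteria concepts absent from the EHR: {sorted(missing)}")
    # 50% of criteria concepts absent from the EHR: [4043378, 37311999]

The study's 40% figure is this coverage gap computed over the frequent criteria concepts mined from ClinicalTrials.gov.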
Henneman, Elizabeth A; Roche, Joan P; Fisher, Donald L; Cunningham, Helene; Reilly, Cheryl A; Nathanson, Brian H; Henneman, Philip L
2010-02-01
This study examined types of errors that occurred or were recovered in a simulated environment by student nurses. Errors occurred in all four rule-based error categories, and all students committed at least one error. The most frequent errors occurred in the verification category. Another common error was related to physician interactions. The least common errors were related to coordinating information with the patient and family. Our finding that 100% of student subjects committed rule-based errors is cause for concern. To decrease errors and improve safe clinical practice, nurse educators must identify effective strategies that students can use to improve patient surveillance. Copyright 2010 Elsevier Inc. All rights reserved.
Crossing the Generational Divide: Supporting Generational Differences at Work
ERIC Educational Resources Information Center
Berl, Patricia Scallan
2006-01-01
Differences in attitudes and behaviors, regularly exhibited between youth and their elders, are frequently referred to as the "generation gap". On the job, these generational distinctions are becoming increasingly complex as "multi-generation gaps" emerge, with three or more generations defining roles and expectations, each vying for positions in…