Buchini, Sara; Quattrin, Rosanna
2012-04-01
To record the frequency of interruptions and their causes, to identify 'avoidable' interruptions and to build an improvement project to reduce 'avoidable' interruptions. In Italy, 30,000-35,000 deaths per year are attributed to health-care system errors, of which 19% are caused by medication errors. The factors that contribute to drug management error include interruptions and carelessness during treatment administration. A descriptive study design was used to record the frequency of interruptions and their causes and to identify 'avoidable' interruptions in an intensive rehabilitation ward in Northern Italy. A data collection grid was used to record the data over a 6-month period. A total of 3000 work hours were observed. During the study period 1170 interruptions were observed. The study identified 14 causes of interruption, of which at least nine can be defined as 'avoidable'. An improvement project has been proposed to reduce unnecessary interruptions and distractions and so avoid errors. An additional useful step to reduce the incidence of treatment errors would be to implement a single patient medication sheet for recording drug prescription, preparation and administration, together with incident reporting. © 2011 Blackwell Publishing Ltd.
[Errors in medicine. Causes, impact and improvement measures to improve patient safety].
Waeschle, R M; Bauer, M; Schmidt, C E
2015-09-01
The guarantee of quality of care and patient safety is of major importance in hospitals, even though increased economic pressure and work intensification are ubiquitous. Nevertheless, adverse events still occur in 3-4 % of hospital stays, and of these 25-50 % are estimated to be avoidable. The identification of possible causes of error and the development of measures for the prevention of medical errors are essential for patient safety. The implementation and continuous development of a constructive culture of error tolerance are fundamental. The origins of errors can be differentiated into systemic latent and individual active causes; components of both categories are typically involved when an error occurs. Systemic causes are, for example, outdated structural environments, lack of clinical standards and low personnel density. These causes arise far away from the patient, e.g. in management decisions, and can remain unrecognized for a long time. Individual causes involve, e.g. confirmation bias, fixation error and prospective memory failure. These causes have a direct impact on patient care and can result in immediate injury to patients. Stress, unclear information, complex systems and a lack of professional experience can promote individual causes. Awareness of possible causes of error is a fundamental precondition for establishing appropriate countermeasures. Error prevention should include actions directly affecting the causes of error, such as checklists and standard operating procedures (SOP) to avoid fixation and prospective memory failure, and team resource management to improve communication and the generation of collective mental models. Critical incident reporting systems (CIRS) provide the opportunity to learn from previous incidents without injury to patients. Information technology (IT) support systems, such as the computerized physician order entry system, assist in the prevention of medication errors by providing information on dosage, pharmacological interactions, side effects and contraindications of medications. The major challenges for quality and risk management, for the heads of departments and for the executive board are the implementation and support of the described actions and the sustained guidance of the staff involved in the change management process. The global trigger tool is suitable for improving transparency and objectifying the frequency of medical errors.
Survey of childhood blindness and visual impairment in Botswana.
Nallasamy, Sudha; Anninger, William V; Quinn, Graham E; Kroener, Brian; Zetola, Nicola M; Nkomazana, Oathokwa
2011-10-01
In terms of blind-person years, the worldwide burden of childhood blindness is second only to cataracts. In many developing countries, 30-72% of childhood blindness is avoidable. The authors conducted this study to determine the causes of childhood blindness and visual impairment (VI) in Botswana, a middle-income country with limited access to ophthalmic care. This study was conducted over 4 weeks in eight cities and villages in Botswana. Children were recruited through a radio advertisement and local outreach programmes. Those ≤ 15 years of age with visual acuity <6/18 in either eye were enrolled. The WHO/Prevention of Blindness Eye Examination Record for Children with Blindness and Low Vision was used to record data. The authors enrolled 241 children, 79 with unilateral and 162 with bilateral VI. Of unilateral cases, 89% were avoidable: 23% preventable (83% trauma-related) and 66% treatable (40% refractive error and 31% amblyopia). Of bilateral cases, 63% were avoidable: 5% preventable and 58% treatable (33% refractive error and 31% congenital cataracts). Refractive error, which is easily correctable with glasses, is the most common cause of bilateral VI, with cataracts a close second. A nationwide intervention is currently being planned to reduce the burden of avoidable childhood VI in Botswana.
Continuous Process Improvement Transformation Guidebook
2006-05-01
except full-scale implementation. Error Proofing (Poka Yoke): Finding and correcting defects caused by errors costs more and more as a system or...proofing. Shigeo Shingo introduced the concept of Poka-Yoke at Toyota Motor Corporation. Poka Yoke (pronounced "poh-kah yoh-kay") translates to "avoid
NASA Astrophysics Data System (ADS)
Shao, Xinxing; Zhu, Feipeng; Su, Zhilong; Dai, Xiangjun; Chen, Zhenning; He, Xiaoyuan
2018-03-01
The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion will lead to more than 50-μɛ strain errors, which significantly affects the measurement. In practical measurements, it is impossible to obtain a fully accurate calibration; therefore, considerable attention should be focused on attempting to avoid these types of errors, especially for high-accuracy strain measurements. It is necessary to avoid large rigid body motions in both two-dimensional DIC and stereo-DIC.
Theoretical and experimental errors for in situ measurements of plant water potential.
Shackel, K A
1984-07-01
Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (-0.6 to -1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design.
Investigation of technology needs for avoiding helicopter pilot error related accidents
NASA Technical Reports Server (NTRS)
Chais, R. I.; Simpson, W. E.
1985-01-01
Pilot error, which is cited as a cause or related factor in most rotorcraft accidents, was examined. Pilot-error-related helicopter accidents were investigated to identify areas in which new technology could reduce or eliminate the underlying causes of these human errors. The aircraft accident data base at the U.S. Army Safety Center was used as the source of data on helicopter accidents. A randomly selected sample of 110 aircraft accident records was analyzed on a case-by-case basis to assess the nature of the problems that need to be resolved and the applicable technology implications. Six technology areas in which there appears to be a need for new or increased emphasis are identified.
Driving safely into the future with applied technology
DOT National Transportation Integrated Search
1999-10-01
Driver error remains the leading cause of highway crashes. Through the Intelligent Vehicle Initiative (IVI), the Department of Transportation hopes to reduce crashes by helping drivers avoid hazardous mistakes. IVI aims to accelerate the development ...
Looking for trouble? Diagnostics expanding disease and producing patients.
Hofmann, Bjørn
2018-05-23
Novel tests give great opportunities for earlier and more precise diagnostics. At the same time, new tests expand disease, produce patients, and cause unnecessary harm in overdiagnosis and overtreatment. How can we evaluate diagnostics to obtain the benefits and avoid harm? One way is to pay close attention to the diagnostic process and its core concepts. Doing so reveals 3 errors that expand disease and increase overdiagnosis. The first error is to decouple diagnostics from harm, eg, by diagnosing insignificant conditions. The second error is to bypass proper validation of the relationship between test indicator and disease, eg, by introducing biomarkers for Alzheimer's disease before the tests are properly validated. The third error is to couple the name of disease to insignificant or indecisive indicators, eg, by lending the cancer name to preconditions, such as ductal carcinoma in situ. We need to avoid these errors to promote beneficial testing, bar harmful diagnostics, and evade unwarranted expansion of disease. Accordingly, we must stop identifying and testing for conditions that are only remotely associated with harm. We need more stringent verification of tests, and we must avoid naming indicators and indicative conditions after diseases. If not, we will end like ancient tragic heroes, succumbing because of our very best abilities. © 2018 John Wiley & Sons, Ltd.
Shi, Fanrong; Tuo, Xianguo; Yang, Simon X.; Li, Huailiang; Shi, Rui
2017-01-01
Wireless sensor networks (WSNs) have been widely used to collect valuable information in Structural Health Monitoring (SHM) of bridges, using various sensors, such as temperature, vibration and strain sensors. Since multiple sensors are distributed on the bridge, accurate time synchronization is very important for multi-sensor data fusion and information processing. Based on shape of the bridge, a spanning tree is employed to build linear topology WSNs and achieve time synchronization in this paper. Two-way time message exchange (TTME) and maximum likelihood estimation (MLE) are employed for clock offset estimation. Multiple TTMEs are proposed to obtain a subset of TTME observations. The time out restriction and retry mechanism are employed to avoid the estimation errors that are caused by continuous clock offset and software latencies. The simulation results show that the proposed algorithm could avoid the estimation errors caused by clock drift and minimize the estimation error due to the large random variable delay jitter. The proposed algorithm is an accurate and low complexity time synchronization algorithm for bridge health monitoring. PMID:28471418
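The two-way exchange arithmetic behind TTME is compact enough to sketch. Below is a minimal Python illustration of clock offset estimation from multiple TTMEs with a time-out/retry guard; the symmetric-delay assumption, the use of a plain average as the Gaussian-noise MLE, and all names and constants are illustrative assumptions, not the authors' exact algorithm.

```python
import random
import statistics

def ttme_offset(t1, t2, t3, t4):
    """Clock offset implied by one two-way time message exchange (TTME).

    t1: request sent (sender clock), t2: request received (receiver clock),
    t3: reply sent (receiver clock), t4: reply received (sender clock).
    Assumes uplink and downlink propagation delays are symmetric.
    """
    return ((t2 - t1) + (t3 - t4)) / 2.0

def estimate_offset(exchange, n=8, timeout=0.020):
    """Collect n TTME observations, discard slow round trips, and average.

    `exchange` performs one TTME and returns (t1, t2, t3, t4).  Round trips
    longer than `timeout` are treated as contaminated by software latency
    and retried, mimicking the time-out/retry rule described in the paper
    (a real implementation would bound the number of retries).  Averaging
    the surviving observations is the MLE under Gaussian delay jitter.
    """
    samples = []
    while len(samples) < n:
        t1, t2, t3, t4 = exchange()
        if (t4 - t1) - (t3 - t2) > timeout:   # round-trip delay too large
            continue                          # retry this observation
        samples.append(ttme_offset(t1, t2, t3, t4))
    return statistics.mean(samples)

# Hypothetical demo: a receiver clock 5 ms ahead, exponential delay jitter.
TRUE_OFFSET = 0.005

def fake_exchange():
    d_up = 0.001 + random.expovariate(1000.0)
    d_dn = 0.001 + random.expovariate(1000.0)
    t1 = random.random()
    t2 = t1 + d_up + TRUE_OFFSET
    t3 = t2 + 0.0002
    t4 = t3 + d_dn - TRUE_OFFSET
    return t1, t2, t3, t4

print(estimate_offset(fake_exchange))  # ~0.005
```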
Air quality impacts of intercity freight. Volume 1 : guidebook
DOT National Transportation Integrated Search
2000-01-01
Analysis of the impact of error detection on computer performance
NASA Technical Reports Server (NTRS)
Shin, K. C.; Lee, Y. H.
1983-01-01
Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect the damage caused by latent errors. Though unrealistic, this assumption has been imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and that the error is then detected a random amount of time after its occurrence. As a remedy for this problem, a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between the occurrence of an error and the moment of detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to fault and/or error.
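To make the notion of error latency concrete, here is a small Monte Carlo sketch in Python of the fault → error → detection chain; the distributions, parameters, and accounting rules are hypothetical stand-ins for illustration, not the paper's analytical model.

```python
import random

def simulate(n_runs=100_000, task_time=1.0):
    """Monte Carlo sketch of the fault -> error -> detection chain.

    A fault occurs at a random time; it induces an error after a random
    activation delay; the error is detected a random latency later.  A run
    whose result is delivered while an error is still undetected counts as
    unreliable; a run whose error is detected before completion loses the
    computation done so far (rollback to the start).  All distributions
    below are hypothetical.
    """
    unreliable, lost = 0, 0.0
    for _ in range(n_runs):
        fault = random.uniform(0.0, 10.0)            # fault occurrence time
        if fault >= task_time:
            continue                                 # task finished fault-free
        error = fault + random.expovariate(5.0)      # fault induces an error
        detection = error + random.expovariate(2.0)  # error latency
        if error < task_time <= detection:
            unreliable += 1     # result delivered with an undetected error
        elif detection < task_time:
            lost += detection   # detected in time: redo work from scratch
    return unreliable / n_runs, lost / n_runs

p_bad, mean_lost = simulate()
print(f"P(unreliable result) ~ {p_bad:.4f}; mean lost computation ~ {mean_lost:.4f}")
```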
NASA Astrophysics Data System (ADS)
Sumule, U.; Amin, S. M.; Fuad, Y.
2018-01-01
This study aims to determine the types and causes of errors, as well as the efforts attempted to overcome the mistakes made by junior high school students in completing PISA items on space and shape content. Two subjects were selected based on the mathematical ability test results with the most errors, who were nevertheless able to communicate orally and in writing. The two selected subjects then worked on the PISA ability test questions and were interviewed to find out the type and cause of each error, and were then given scaffolding based on the type of mistake made. The results show that the types of error the students made were comprehension and transformation errors. The reasons were that students were not able to identify the keywords in the question, write down what is known or given, or select formulas and devise a plan. To overcome these errors, students were given scaffolding. The scaffolding given to overcome comprehension errors was reviewing and restructuring, while the scaffolding given to overcome transformation errors was reviewing, restructuring, explaining and developing representational tools. Teachers are advised to use scaffolding to resolve errors so that students are able to avoid them.
Evidence for aversive withdrawal response to own errors.
Hochman, Eldad Yitzhak; Milman, Valery; Tal, Liron
2017-10-01
A recent model suggests that error detection gives rise to defensive motivation, prompting protective behavior. Models of active avoidance behavior predict that it should grow larger with threat imminence and avoidability. We hypothesized that in a task requiring left or right key strikes, error detection would drive an avoidance reflex manifested by rapid withdrawal of the erring finger, growing larger with threat imminence and avoidability. In experiment 1, three groups differing in error-related threat imminence and avoidability performed a flanker task requiring left or right strikes on force-sensitive keys. As predicted, errors were followed by rapid force release, growing faster with threat imminence and the opportunity to evade threat. In experiment 2, we established a link between error key release time (KRT) and the subjective sense of inner threat. In a simultaneous multiple regression analysis of three error-related compensatory mechanisms (error KRT, flanker effect, error correction RT), only error KRT was significantly associated with increased compulsive checking tendencies. We propose that error response withdrawal reflects an error-withdrawal reflex. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying
2010-04-01
In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. In practice, however, a slight tilt of the reference wave often occurs, and this tilt introduces unexpected errors in the reconstructed object wavefront. Usually the least-squares method with iterations, which is time consuming, is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple, effective algorithm is suggested to detect and then correct this kind of error. The method uses only simple mathematical operations, avoiding the least-squares equations needed in most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames, for both smooth and diffusing objects, and its performance has been verified by computer simulations. The numerical simulations show that the wavefront reconstruction errors can be reduced by two orders of magnitude.
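The abstract does not spell out the authors' "simple mathematical operations," so the Python sketch below shows one non-iterative way such a correction can work: extract the wrapped phase by standard four-step PSI, then estimate a residual linear ramp from the mean wrapped phase gradient and subtract it. The tilt-estimation trick, the assumption that the object's mean phase gradient is near zero, and all parameters are illustrative, not the authors' method.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four frames with phase shifts 0, pi/2, pi, 3*pi/2."""
    return np.arctan2(i4 - i2, i1 - i3)

def remove_reference_tilt(phi):
    """Subtract the mean linear ramp from a wrapped phase map.

    The ramp slopes are the mean wrapped per-pixel gradients -- plain
    arithmetic, no least-squares fit.  Assumes the object's own mean phase
    gradient is ~zero (e.g., the estimate is formed over a blank region).
    """
    kx = np.angle(np.exp(1j * np.diff(phi, axis=1))).mean()
    ky = np.angle(np.exp(1j * np.diff(phi, axis=0))).mean()
    yy, xx = np.indices(phi.shape)
    return np.angle(np.exp(1j * (phi - kx * xx - ky * yy)))

# Synthetic check: a smooth object phase plus a small reference tilt.
yy, xx = np.indices((64, 64))
phi_obj = 0.3 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 200.0)
tilt = 0.05 * xx + 0.02 * yy
frames = [1.0 + np.cos(phi_obj + tilt + d)
          for d in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
phi_corr = remove_reference_tilt(four_step_phase(*frames))  # ~phi_obj (de-meaned)
```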
Sources of medical error in refractive surgery.
Moshirfar, Majid; Simpson, Rachel G; Dave, Sonal B; Christiansen, Steven M; Edmonds, Jason N; Culbertson, William W; Pascucci, Stephen E; Sher, Neal A; Cano, David B; Trattler, William B
2013-05-01
To evaluate the causes of laser programming errors in refractive surgery and outcomes in these cases. In this multicenter, retrospective chart review, 22 eyes of 18 patients who had incorrect data entered into the refractive laser computer system at the time of treatment were evaluated. Cases were analyzed to uncover the etiology of these errors, patient follow-up treatments, and final outcomes. The results were used to identify potential methods to avoid similar errors in the future. Every patient experienced compromised uncorrected visual acuity requiring additional intervention, and 7 of 22 eyes (32%) lost corrected distance visual acuity (CDVA) of at least one line. Sixteen patients were suitable candidates for additional surgical correction to address these residual visual symptoms and six were not. Thirteen of 22 eyes (59%) received surgical follow-up treatment; nine eyes were treated with contact lenses. After follow-up treatment, six patients (27%) still had a loss of one line or more of CDVA. Three significant sources of error were identified: errors of cylinder conversion, data entry, and patient identification error. Twenty-seven percent of eyes with laser programming errors ultimately lost one or more lines of CDVA. Patients who underwent surgical revision had better outcomes than those who did not. Many of the mistakes identified were likely avoidable had preventive measures been taken, such as strict adherence to patient verification protocol or rigorous rechecking of treatment parameters. Copyright 2013, SLACK Incorporated.
[National survey of blindness and avoidable visual impairment in Argentina, 2013].
Barrenechea, Rosario; de la Fuente, Inés; Plaza, Roberto Gustavo; Flores, Nadia; Segovia, Lía; Villagómez, Zaida; Camarero, Esteban Elián; Zepeda-Romero, Luz Consuelo; Lansingh, Van C; Limburg, Hans; Silva, Juan Carlos
2015-01-01
Determine the prevalence of blindness and avoidable visual impairment in Argentina, its causes, the coverage of cataract surgery, and the barriers that hinder access to these services. Cross-sectional population study conducted between May and November 2013 using the standard methodology for rapid assessment of avoidable blindness (RAAB), with random cluster sampling of 50 people aged 50 years or more per cluster, representative of the entire country. Participants' visual acuity (VA) was measured and the lens and posterior pole were examined by direct ophthalmoscopy. An assessment was made of the causes of VA < 20/60, the coverage and quality of cataract surgery, and the barriers to accessing treatment. A total of 3 770 people were assessed (92.0% of the projected number). The prevalence of blindness was 0.7% (confidence interval of 95%: 0.4-1.0%). Unoperated cataract was the main cause of blindness and severe visual impairment (44.0% and 71.1%, respectively), while the main cause of moderate visual impairment was uncorrected refractive errors (77.8%). Coverage of cataract surgery was 97.1%, and 82.0% of operated eyes achieved VA ≥ 20/60. The main barriers to receiving this treatment were fear of the surgical procedure or of a poor result (34.9%), the cost (30.2%), and not having access to the treatment (16.3%). There is a low prevalence of blindness in the studied population and cataract is the main cause of blindness and severe visual impairment. Efforts should continue to extend coverage of cataract surgery, enhance preoperative evaluation, improve calculations of the intraocular lenses that patients need, and correct post-operative refractive errors with greater precision.
Generating Evidence for Program Planning: Rapid Assessment of Avoidable Blindness in Bangladesh.
Muhit, Mohammad; Wadud, Zakia; Islam, Johurul; Khair, Zareen; Shamanna, B R; Jung, Jenny; Khandaker, Gulam
2016-06-01
There is a lack of data on the prevalence and causes of blindness in Bangladesh, which is important to plan effective eye health programs and advocate support services to achieve the goals of Vision 2020. We conducted a rapid assessment of avoidable blindness (RAAB) in 8 districts of Bangladesh (January 2010 - December 2012) to establish the prevalence and causes of blindness. People aged ≥50 years were selected, and eligible participants had visual acuity (VA) measured. Ocular examinations were performed in those with VA<6/18. Additional information was collected for those who had or had not undergone cataract surgery to understand service barriers and quality of service. In total, 21,596 people were examined, of which 471 (2.2%, 95% confidence interval, CI, 2.0-2.4%) were blind. The primary cause of blindness was cataract (75.8%). The majority of blindness (86.2%) was avoidable. Cataract and refractive error were the primary causes of severe visual impairment (73.6%) and moderate visual impairment (63.6%), respectively. Cataract surgical coverage for blind persons was 69.3% (males 76.6%, females 64.3%, P<0.001). The magnitude of blindness among people aged ≥50 years was estimated to be 563,200 people (95% CI 512,000-614,400), of whom 426,342 had un-operated cataract. In Bangladesh, the majority of blindness (86.2%) among people aged ≥50 years was avoidable, and cataract was the most important cause of avoidable blindness. Improving cataract surgical services and refraction services would be the most important step towards the elimination of avoidable blindness in Bangladesh.
Prevention of Unwanted Free-Declaration of Static Obstacles in Probability Occupancy Grids
NASA Astrophysics Data System (ADS)
Krause, Stefan; Scholz, M.; Hohmann, R.
2017-10-01
Obstacle detection and avoidance are major research fields in unmanned aviation. Map-based obstacle detection approaches often use discrete world representations, such as probabilistic grid maps, to fuse incremental environment data from different views or sensors into a comprehensive representation. The integration of continuous measurements into a discrete representation can result in rounding errors, which in turn lead to differences between the artificial model and the real environment. The cause of these deviations is a spatial resolution of the world representation that is low in comparison to the sensor data used. Differences between the artificial representation used for path planning or obstacle avoidance and the real world can lead to unexpected behavior, up to collisions with unmapped obstacles. This paper presents three approaches to the treatment of errors that can occur during the integration of continuous laser measurements into a discrete probabilistic grid. Further, the quality of the error prevention and the processing performance of the approaches are compared using real sensor data.
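The abstract does not detail the three approaches, so as a concrete illustration of the failure mode and one simple guard against it, the Python sketch below integrates a laser return into a log-odds grid but stops the free-space update one cell short of the endpoint cell, so that rounding a continuous hit into a coarse cell cannot "free-declare" a static obstacle. The grid geometry, log-odds increments, margin width, and Bresenham trace are hypothetical choices, not the specific methods evaluated in the paper.

```python
import numpy as np

L_FREE, L_OCC = -0.4, 0.9  # hypothetical log-odds increments

def bresenham(x0, y0, x1, y1):
    """Integer grid cells on the segment (x0, y0) -> (x1, y1), endpoint included."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def integrate_ray(grid, origin, hit, resolution=0.1):
    """Fuse one laser return into a log-odds grid, conservatively.

    Cells along the ray become more free, but a one-cell margin before the
    endpoint is left untouched, so discretization error cannot mark the
    obstacle's cell as free.  The endpoint cell becomes more occupied.
    """
    c0 = tuple(int(v / resolution) for v in origin)
    c1 = tuple(int(v / resolution) for v in hit)
    ray = bresenham(*c0, *c1)
    for cell in ray[:-2]:          # skip endpoint and one guard cell
        grid[cell] += L_FREE
    grid[c1] += L_OCC

grid = np.zeros((100, 100))
integrate_ray(grid, origin=(0.05, 0.05), hit=(4.23, 3.17))
```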
Medication errors: an overview for clinicians.
Wittich, Christopher M; Burkle, Christopher M; Lanier, William L
2014-08-01
Medication error is an important cause of patient morbidity and mortality, yet it can be a confusing and underappreciated concept. This article provides a review for practicing physicians that focuses on medication error (1) terminology and definitions, (2) incidence, (3) risk factors, (4) avoidance strategies, and (5) disclosure and legal consequences. A medication error is any error that occurs at any point in the medication use process. It has been estimated by the Institute of Medicine that medication errors cause 1 of 131 outpatient and 1 of 854 inpatient deaths. Medication factors (eg, similar sounding names, low therapeutic index), patient factors (eg, poor renal or hepatic function, impaired cognition, polypharmacy), and health care professional factors (eg, use of abbreviations in prescriptions and other communications, cognitive biases) can precipitate medication errors. Consequences faced by physicians after medication errors can include loss of patient trust, civil actions, criminal charges, and medical board discipline. Methods to prevent medication errors from occurring (eg, use of information technology, better drug labeling, and medication reconciliation) have been used with varying success. When an error is discovered, patients expect disclosure that is timely, given in person, and accompanied with an apology and communication of efforts to prevent future errors. Learning more about medication errors may enhance health care professionals' ability to provide safe care to their patients. Copyright © 2014 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
Staubach, Maria
2009-09-01
This study aims to identify factors which influence and cause errors in traffic accidents and to use these as a basis for information to guide the application and design of driver assistance systems. A total of 474 accidents were examined in depth for this study by means of a psychological survey, data from accident reports, and technical reconstruction information. An error analysis was subsequently carried out, taking into account the driver, environment, and vehicle sub-systems. Results showed that all accidents were influenced by errors as a consequence of distraction and reduced activity. For crossroad accidents, there were further errors resulting from sight obstruction, masked stimuli, focus errors, and law infringements. Lane departure crashes were additionally caused by errors as a result of masked stimuli, law infringements, expectation errors as well as objective and action slips, while same direction accidents occurred additionally because of focus errors, expectation errors, and objective and action slips. Most accidents were influenced by multiple factors. There is a safety potential for Advanced Driver Assistance Systems (ADAS), which support the driver in information assimilation and help to avoid distraction and reduced activity. The design of the ADAS is dependent on the specific influencing factors of the accident type.
Guidelines for Teaching the Holocaust: Avoiding Common Pedagogical Errors
ERIC Educational Resources Information Center
Lindquist, David H.
2006-01-01
Teaching the Holocaust is a complex undertaking involving twists and turns that can frustrate and even intimidate educators who teach the Holocaust. This complexity involves both the event's history and its pedagogy. In this article, the author considers eight pedagogical approaches that often cause problems in teaching the event. He states each…
Drug error in paediatric anaesthesia: current status and where to go now.
Anderson, Brian J
2018-06-01
Medication errors in paediatric anaesthesia and the perioperative setting continue to occur despite widespread recognition of the problem and published advice for reduction of this predicament at international, national, local and individual levels. Current literature was reviewed to ascertain drug error rates and to appraise causes and proposed solutions to reduce these errors. The medication error incidence remains high. There is documentation of reduction through identification of causes with consequent education and application of safety analytics and quality improvement programs in anaesthesia departments. Children remain at higher risk than adults because of additional complexities such as drug dose calculations, increased susceptibility to some adverse effects and changes associated with growth and maturation. Major improvements are best made through institutional system changes rather than a commitment to do better on the part of each practitioner. Medication errors in paediatric anaesthesia represent an important risk to children and most are avoidable. There is now an understanding of the genesis of adverse drug events and this understanding should facilitate the implementation of known effective countermeasures. An institution-wide commitment and strategy are the basis for a worthwhile and sustained improvement in medication safety.
Das, Taraprasad
2018-03-13
The International Agency for the Prevention of Blindness (IAPB) South East Asia region (SEAR), which consists of 11 countries, contains 26% of the world's population (1,761,000,000). In this region 12 million people are blind and 78.5 million are visually impaired. This amounts to 30% of global blindness and 32% of global visual impairment. Rapid assessment of avoidable blindness (RAAB) survey analysis. RAAB, either a repeat or a first-time survey, was completed in 8 countries in this decade (2010 onwards). These include Bangladesh, Bhutan, India, Indonesia, Maldives, Sri Lanka, Thailand, and Timor Leste. Cataract is the principal cause of blindness and severe visual impairment in all countries. Refractive error is the principal cause of moderate visual impairment in 4 countries: Bangladesh, India, Maldives, and Sri Lanka; cataract continues to be the principal cause of moderate visual impairment in the 4 other countries: Bhutan, Indonesia, Thailand, and Timor Leste. The outcome of cataract surgery is suboptimal in the Maldives and Timor Leste. Rigorous focus is necessary to improve cataract surgery outcomes and correction of refractive error without neglecting the quality of care. At the same time, allowances must be made for care of the emerging causes of visual impairment and blindness, such as glaucoma and posterior segment disorders, particularly diabetic retinopathy. Copyright 2018 Asia-Pacific Academy of Ophthalmology.
Mendis, Daylath; Hewage, Kasun N; Wrzesniewski, Joanna
2013-10-01
The Canadian construction industry generates 30% of the total municipal solid waste deposited in landfills. Ample evidence can be found in the published literature about rework and waste generation due to ambiguity and errors in contract documents. Also, the literature quotes that disclaimer clauses in contract documents are included in the contractual agreements to prevent contractor claims, which often cause rework. Our professional practice has also noted that there are several disclaimer clauses in standard contract documents which have the potential to cause rework (and associated waste). This article illustrates a comparative study of standard contractual documents and their potential to create rework (and associated waste) in different regions of the world. The objectives of this study are (1) to analyse standard contractual documents in Canada, the USA and Australia in terms of their potential to generate rework and waste, and (2) to propose changes/amendments to the existing standard contract documents to minimise/avoid rework. In terms of construction waste management, all the reviewed standard contract documents have deficiencies. The parties that produce the contract documents include exculpatory clauses to avoid the other party's claims. This approach tends to result in rework and construction waste. The contractual agreements/contract documents should be free from errors, deficiencies, ambiguity and unfair risk transfers to minimise/avoid potential to generate rework and waste.
Economic measurement of medical errors using a hospital claims database.
David, Guy; Gunnarsson, Candace L; Waters, Heidi C; Horblyuk, Ruslan; Kaplan, Harold S
2013-01-01
The primary objective of this study was to estimate the occurrence and costs of medical errors from the hospital perspective. Methods from a recent actuarial study of medical errors were used to identify medical injuries. A visit qualified as an injury visit if at least 1 of 97 injury groupings occurred at that visit, and the percentage of injuries caused by medical error was estimated. Visits with more than four injuries were removed from the population to avoid overestimation of cost. Population estimates were extrapolated from the Premier hospital database to all US acute care hospitals. There were an estimated 161,655 medical errors in 2008 and 170,201 medical errors in 2009. Extrapolated to the entire US population, there were more than 4 million unique injury visits containing more than 1 million unique medical errors each year. This analysis estimated that the total annual cost of measurable medical errors in the United States was $985 million in 2008 and just over $1 billion in 2009. The median cost per error to hospitals was $892 for 2008 and rose to $939 in 2009. Nearly one third of all medical injuries were due to error in each year. Medical errors directly impact patient outcomes and hospitals' profitability, especially since 2008 when Medicare stopped reimbursing hospitals for care related to certain preventable medical errors. Hospitals must rigorously analyze causes of medical errors and implement comprehensive preventative programs to reduce their occurrence as the financial burden of medical errors shifts to hospitals. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
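As an illustration of the accounting rules described (injury visits flagged by the 97 groupings, visits with more than four injuries removed, and an estimated share of injuries attributed to error), here is a hypothetical Python/pandas sketch; all column names, probabilities, and figures are invented for illustration and are not the study's actuarial mapping.

```python
import pandas as pd

# Hypothetical claims table: one row per hospital visit.
visits = pd.DataFrame({
    "visit_id":   [1, 2, 3, 4],
    "n_injuries": [0, 2, 5, 1],              # injuries from the 97 groupings
    "cost":       [0.0, 1800.0, 9100.0, 892.0],
    "p_error":    [0.0, 0.31, 0.40, 0.33],   # assumed P(injury is due to error)
})

injury_visits = visits[visits.n_injuries >= 1]      # qualifying injury visits
kept = injury_visits[injury_visits.n_injuries <= 4] # drop >4-injury visits to
                                                    # avoid cost overestimation
expected_errors = (kept.n_injuries * kept.p_error).sum()
error_cost = (kept.cost * kept.p_error).sum()
print(expected_errors, error_cost)
```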
Abdullah, Ayesha S; Jadoon, Milhammad Zahid; Akram, Mohammad; Awan, Zahid Hussain; Azam, Mohammad; Safdar, Mohammad; Nigar, Mohammad
2015-01-01
Uncorrected refractive errors are a leading cause of visual disability globally. This population-based study was done to estimate the prevalence of uncorrected refractive errors in adults aged 30 years and above in the village of Pawakah, Khyber Pakhtunkhwa (KPK), Pakistan. It was a cross-sectional survey in which 1000 individuals were included randomly. All individuals were screened for uncorrected refractive errors, and those whose visual acuity (VA) was found to be less than 6/6 were refracted. In those in whom refraction was unsatisfactory (i.e., a best corrected visual acuity of <6/6), further examination was done to establish the cause of the subnormal vision. A total of 917 subjects participated in the survey (response rate 92%). The prevalence of uncorrected refractive errors was found to be 23.97% among males and 20% among females. The prevalence of visually disabling refractive errors was 6.89% in males and 5.71% in females. The prevalence was seen to increase with age, with maximum prevalence in the 51-60 years age group. Hypermetropia (10.14%) was found to be the commonest refractive error, followed by myopia (6.00%) and astigmatism (5.6%). The prevalence of presbyopia was 57.5% (60.45% in males and 55.23% in females). Poor affordability was the commonest barrier to the use of spectacles, followed by unawareness. Cataract was the commonest reason for impaired vision after refractive correction. The prevalence of blindness was 1.96% (1.53% in males and 2.28% in females) in this community, with cataract as the commonest cause. Despite being the most easily avoidable cause of subnormal vision, uncorrected refractive errors still account for a major proportion of the burden of decreased vision in this area. Effective measures for the screening and affordable correction of uncorrected refractive errors need to be incorporated into the health care delivery system.
[Survey on avoidable blindness and visual impairment in Panama].
López, Maritza; Brea, Ileana; Yee, Rita; Yi, Rodolfo; Carles, Víctor; Broce, Alberto; Limburg, Hans; Silva, Juan Carlos
2014-12-01
Determine the prevalence of blindness and visual impairment in adults aged ≥ 50 years in Panama, identify their main causes, and characterize eye health services. Cross-sectional population study using standard Rapid Assessment of Avoidable Blindness methodology. Fifty people aged ≥ 50 years were selected from each of 84 clusters chosen through representative random sampling of the entire country. Visual acuity was assessed using a Snellen chart; lens and posterior pole status were assessed by direct ophthalmoscopy. Cataract surgery coverage was calculated and its quality assessed, along with the causes of visual acuity < 20/60 and the barriers to access to surgical treatment. A total of 4 125 people were examined (98.2% of the calculated sample). Age- and sex-adjusted prevalence of blindness was 3.0% (95% CI: 2.3-3.6). The main cause of blindness was cataract (66.4%), followed by glaucoma (10.2%). Cataract (69.2%) was the main cause of severe visual impairment and uncorrected refractive errors were the main cause of moderate visual impairment (60.7%). Cataract surgical coverage in individuals was 76.3%. Of all eyes operated on for cataract, 58.0% achieved visual acuity ≥ 20/60 with available correction. The prevalence of blindness in Panama is in line with the average prevalence found in other countries of the Region. This problem can be reduced, since 76.2% of cases of blindness and 85.0% of cases of severe visual impairment result from avoidable causes.
Retrospective data on causes of childhood vision impairment in Eritrea.
Gyawali, Rajendra; Bhayal, Bharat Kumar; Adhikary, Rabindra; Shrestha, Arjun; Sah, Rabindra Prasad
2017-11-22
Proper information on the causes of childhood vision loss is essential in developing appropriate strategies and programs to address such causes. This study aimed at identifying the causes of vision loss in children attending the national referral eye hospital with the only pediatric ophthalmology service in Eritrea. A retrospective data review was conducted for all children (<16 years of age) who attended Berhan Aiyni National Referral Eye Hospital in the five-year period from January 2011 to December 2015. Causes of vision loss for children with vision impairment (recorded visual acuity less than 6/18 for distance in the better eye) were classified by the anatomical site affected and by underlying etiology, based on the timing of the insult and the causal factor. The medical record cards of 22,509 children were reviewed, of whom 249 (1.1%) were visually impaired. The mean age of the participants was 7.82 ± 5.43 years (range: one month to 16 years) and the male-to-female ratio was 1:0.65. The leading causes of vision loss were cataract (19.7%), corneal scars (15.7%), refractive error and amblyopia (12.1%), optic atrophy (6.4%), phthisis bulbi (6.4%), aphakia (5.6%) and glaucoma (5.2%). Childhood factors, including trauma, were the leading causes identified (34.5%), whereas other causes included hereditary factors (4%), intrauterine factors (2.0%) and perinatal factors (4.4%). In 55.0% of the children, the underlying etiology could not be determined. Over two-thirds (69.9%) of vision loss was potentially avoidable. This study explored the causes of vision loss in Eritrean children using hospital-based data. Cataract, corneal opacities, refractive error and amblyopia, globe damage due to trauma, infection and nutritional deficiency, retinal disorders, and other congenital abnormalities were the leading causes of childhood vision impairment in children attending the tertiary eye hospital in Eritrea. As the majority of vision loss was due to avoidable causes, we recommend primary-level public health strategies to prevent ocular injuries, vitamin A deficiency, perinatal infections and retinopathy of prematurity, as well as specialist pediatric eye care facilities for cataract, refractive errors and glaucoma, and rehabilitative services, to address childhood vision loss in Eritrea.
Mellado-Ortega, Elena; Zabalgogeazcoa, Iñigo; Vázquez de Aldana, Beatriz R; Arellano, Juan B
2017-02-15
Oxygen radical absorbance capacity (ORAC) assay in 96-well multi-detection plate readers is a rapid method to determine total antioxidant capacity (TAC) in biological samples. A disadvantage of this method is that the antioxidant inhibition reaction does not start in all of the 96 wells at the same time due to technical limitations when dispensing the free radical-generating azo initiator 2,2'-azobis (2-methyl-propanimidamide) dihydrochloride (AAPH). The time delay between wells yields a systematic error that causes statistically significant differences in TAC determination of antioxidant solutions depending on their plate position. We propose two alternative solutions to avoid this AAPH-dependent error in ORAC assays. Copyright © 2016 Elsevier Inc. All rights reserved.
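One way to see the systematic error, and a possible software-side mitigation, is to integrate each well's fluorescence decay relative to its own AAPH dispense time rather than the plate-wide read clock. The Python sketch below is illustrative only; the dispense interval, decay kinetics, and the net-AUC normalization are assumptions, not the corrections proposed by the authors.

```python
import numpy as np

def net_auc(times, fluorescence, dispense_time):
    """Trapezoidal area under the normalized decay, measured from dispense.

    Shifting each well's time axis by its own dispense time removes the
    systematic head start of early-dispensed wells.
    """
    t = times - dispense_time
    keep = t >= 0                      # ignore reads taken before AAPH arrived
    t, f = t[keep], fluorescence[keep]
    f = f / f[0]                       # normalize to the first post-dispense read
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

# Hypothetical 96-well plate: one read per minute, wells dispensed 0.9 s apart.
reads = np.arange(0.0, 3600.0, 60.0)
dt = 0.9
aucs = []
for k in range(96):
    decay = np.exp(-(reads - dt * k).clip(0) / 1500.0)  # simulated kinetics
    aucs.append(net_auc(reads, decay, dispense_time=dt * k))
```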
Reflection of medical error highlighted on media in Turkey: A retrospective study
Isik, Oguz; Bayin, Gamze; Ugurluoglu, Ozgur
2016-01-01
Objective: This study was performed with the aim of identifying how news about medical errors is transmitted and how the types, causes, and consequences of medical errors are reflected by the media in Turkey. Methods: A content analysis method was used, and in this context the data for the study were acquired by scanning the five national newspapers with the highest circulation between the years 2012 and 2015 for news about medical errors. Specific selection criteria were used for screening the resulting news items, and 116 news items were retained after all eliminations. Results: According to the results of the study, the vast majority of the medical errors reported in the news (40.5%) resulted from the negligence of medical staff. Medical errors were caused by physicians in 74.1% of cases and most commonly occurred in state hospitals (31.9%). Another important result of the research was that medical errors resulted either in patient death to a large extent (51.7%) or in permanent damage and disability to patients (25.0%). Conclusion: The news concerning medical errors provided information about the types, causes, and results of these medical errors and reflected the media's point of view on the issue. The examination of the content of the medical errors reported by the media is important and calls for appropriate interventions to avoid and minimize the occurrence of medical errors by improving the healthcare delivery system. PMID:27882026
Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan
To evaluate the cost-effectiveness of an automated medication system (AMS) implemented in a Danish hospital setting. An economic evaluation was performed alongside a controlled before-and-after effectiveness study with one control ward and one intervention ward. The primary outcome measure was the number of errors in the medication administration process observed prospectively before and after implementation. To determine the difference in proportion of errors after implementation of the AMS, logistic regression was applied with the presence of error(s) as the dependent variable. Time, group, and interaction between time and group were the independent variables. The cost analysis used the hospital perspective with a short-term incremental costing approach. The total 6-month costs with and without the AMS were calculated as well as the incremental costs. The number of avoided administration errors was related to the incremental costs to obtain the cost-effectiveness ratio expressed as the cost per avoided administration error. The AMS resulted in a statistically significant reduction in the proportion of errors in the intervention ward compared with the control ward. The cost analysis showed that the AMS increased the ward's 6-month cost by €16,843. The cost-effectiveness ratio was estimated at €2.01 per avoided administration error, €2.91 per avoided procedural error, and €19.38 per avoided clinical error. The AMS was effective in reducing errors in the medication administration process at a higher overall cost. The cost-effectiveness analysis showed that the AMS was associated with affordable cost-effectiveness rates. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
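As a back-of-the-envelope consistency check, assuming the reported ratio is the usual incremental cost-effectiveness ratio (incremental cost divided by avoided errors), the implied number of avoided administration errors over the 6-month period follows directly; this derived figure is not stated in the abstract itself.

```latex
% Assuming ICER = (incremental cost) / (avoided errors):
\[
  \mathrm{ICER} = \frac{\Delta C}{\Delta E}
  \quad\Longrightarrow\quad
  \Delta E = \frac{\Delta C}{\mathrm{ICER}}
           = \frac{16\,843\ \text{EUR}}{2.01\ \text{EUR/error}}
           \approx 8\,380\ \text{avoided administration errors in 6 months.}
\]
```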
Categorization-based stranger avoidance does not explain the uncanny valley effect.
MacDorman, Karl F; Chattopadhyay, Debaleena
2017-04-01
The uncanny valley hypothesis predicts that an entity appearing almost human risks eliciting cold, eerie feelings in viewers. Categorization-based stranger avoidance theory identifies the cause of this feeling as categorizing the entity into a novel category. This explanation is doubtful because stranger is not a novel category in adults; infants do not avoid strangers while the category stranger remains novel; infants old enough to fear strangers prefer photographs of strangers to those more closely resembling a familiar person; and the uncanny valley's characteristic eeriness is seldom felt when meeting strangers. We repeated our original experiment with a more realistic 3D computer model and found no support for categorization-based stranger avoidance theory. By contrast, realism inconsistency theory explains cold, eerie feelings elicited by transitions between instances of two different, mutually exclusive categories, given that at least one category is anthropomorphic: Cold, eerie feelings are caused by prediction error from perceiving some features as features of the first category and other features as features of the second category. In principle, realism inconsistency theory can explain not only negative evaluations of transitions between real and computer modeled humans but also between different vertebrate species. Copyright © 2017 Elsevier B.V. All rights reserved.
Communication errors in radiology - Pitfalls and how to avoid them.
Waite, Stephen; Scott, Jinel Moore; Drexler, Ian; Martino, Jennifer; Legasto, Alan; Gale, Brian; Kolla, Srinivas
2018-06-07
Communication failures are a common cause of patient harm and malpractice claims against radiologists. In addition to overt communication breakdowns among providers, it is also important to address the quality of communication to optimize patient outcomes. In this review, we describe common communication failures and potential solutions providing a framework for radiologists to improve health care delivery. Copyright © 2018. Published by Elsevier Inc.
Errors in the ultrasound diagnosis of the kidneys, ureters and urinary bladder
Wieczorek, Andrzej Paweł; Tyloch, Janusz F.
2013-01-01
The article presents the most frequent errors made in the ultrasound diagnosis of the urinary system. They usually result from improper technique of ultrasound examination or its erroneous interpretation. Such errors are frequent effects of insufficient experience of the ultrasonographer, inadequate class of the scanner, insufficient knowledge of its operation as well as of wrong preparation of patients, their constitution, severe condition and the lack of cooperation during the examination. The reasons for misinterpretations of ultrasound images of the urinary system may lie in a large polymorphism of the kidney (defects and developmental variants) and may result from improper access to the organ as well as from the presence of artefacts. Errors may also result from the lack of knowledge concerning clinical and laboratory data. Moreover, mistakes in ultrasound diagnosis of the urinary system are frequently related to the lack of knowledge of the management algorithms and diagnostic possibilities of other imaging modalities. The paper lists errors in ultrasound diagnosis of the urinary system divided into: errors resulting from improper technique of examination, artefacts caused by incorrect preparation of patients for the examination or their constitution and errors resulting from misinterpretation of ultrasound images of the kidneys (such as their number, size, fluid spaces, pathological lesions and others), ureters and urinary bladder. Each physician performing kidney or bladder ultrasound examination should possess the knowledge of the most frequent errors and their causes which might help to avoid them. PMID:26674139
Design and performance evaluation of a distributed OFDMA-based MAC protocol for MANETs.
Park, Jaesung; Chung, Jiyoung; Lee, Hyungyu; Lee, Jung-Ryun
2014-01-01
In this paper, we propose a distributed MAC protocol for OFDMA-based wireless mobile ad hoc multihop networks, in which the resource reservation and data transmission procedures operate in a distributed manner. A frame format is designed considering the characteristic of OFDMA that each node can transmit or receive data to or from multiple nodes simultaneously. Under this frame structure, we propose a distributed resource management method including network state estimation and resource reservation processes. We categorize five types of logical errors according to their root causes and show that two of the logical errors are inevitable, while three of them are avoided under the proposed distributed MAC protocol. In addition, we provide a systematic method to determine the advertisement period of each node by presenting a clear relation between the accuracy of the estimated network states and the signaling overhead. We evaluate the performance of the proposed protocol with respect to the reservation success rate and the success rate of data transmission. Since our method focuses on avoiding logical errors, it can easily be placed on top of other resource allocation methods that focus on the physical-layer issues of the resource management problem and interworked with them.
Skinner, Stan; Holdefer, Robert; McAuliffe, John J; Sala, Francesco
2017-11-01
Error avoidance in medicine follows rules similar to those that apply in the design and operation of other complex systems. The error-reduction concepts that best fit the conduct of testing during intraoperative neuromonitoring are forgiving design (reversibility of signal loss to avoid/prevent injury) and system redundancy (reduction of false reports by the multiplication of the error rates of tests independently assessing the same structure). However, error reduction in intraoperative neuromonitoring is complicated by the dichotomous roles (and biases) of the neurophysiologist (test recording and interpretation) and surgeon (intervention). This "interventional cascade" can be given as follows: test → interpretation → communication → intervention → outcome. Observational and controlled trials within operating rooms demonstrate that optimized communication, collaboration, and situational awareness result in fewer errors. Well-functioning operating room collaboration depends on familiarity and trust among colleagues. Checklists represent one method to initially enhance communication and avoid obvious errors. All intraoperative neuromonitoring supervisors should strive to use sufficient means to secure situational awareness and trusted communication/collaboration. Face-to-face audiovisual teleconnections may help repair deficiencies when a particular practice model disallows personal operating room availability. All supervising intraoperative neurophysiologists need to reject an insular, deferential, or distant mindset.
[Cognitive errors in diagnostic decision making].
Gäbler, Martin
2017-10-01
Approximately 10-15% of our diagnostic decisions are faulty and may lead to unfavorable and dangerous outcomes that could be avoided. These diagnostic errors are mainly caused by cognitive biases in the diagnostic reasoning process. Our medical diagnostic decision making is based on intuitive "System 1" and analytical "System 2" reasoning and can be derailed by unconscious cognitive biases. These deviations can be positively influenced on a systemic and an individual level. For the individual, metacognition (internal withdrawal from the decision-making process) and debiasing strategies, such as verification, falsification and ruling out worst-case scenarios, can lead to improved diagnostic decision making.
Observations of fallibility in applications of modern programming methodologies
NASA Technical Reports Server (NTRS)
Gerhart, S. L.; Yelowitz, L.
1976-01-01
Errors, inconsistencies, or confusing points are noted in a variety of published algorithms, many of which are being used as examples in formulating or teaching principles of such modern programming methodologies as formal specification, systematic construction, and correctness proving. Common properties of these points of contention are abstracted. These properties are then used to pinpoint possible causes of the errors and to formulate general guidelines which might help to avoid further errors. The common characteristic of mathematical rigor and reasoning in these examples is noted, leading to some discussion about fallibility in mathematics, and its relationship to fallibility in these programming methodologies. The overriding goal is to cast a more realistic perspective on the methodologies, particularly with respect to older methodologies, such as testing, and to provide constructive recommendations for their improvement.
Physics and Control of Locked Modes in the DIII-D Tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volpe, Francesco
This Final Technical Report summarizes an investigation, carried out under the auspices of the DOE Early Career Award, of the physics and control of non-rotating magnetic islands ("locked modes") in tokamak plasmas. Locked modes are one of the main causes of disruptions in present tokamaks, and could be an even bigger concern in ITER, due to its relatively high beta (favoring the formation of Neoclassical Tearing Mode islands) and low rotation (favoring locking). For these reasons, this research had the goal of studying and learning how to control locked modes in the DIII-D National Fusion Facility under ITER-relevant conditions of high pressure and low rotation. Major results included: the first full suppression of locked modes and avoidance of the associated disruptions; the demonstration of error field detection from the interaction between locked modes, applied rotating fields and intrinsic errors; and the analysis of a vast database of disruptive locked modes, which led to criteria for disruption prediction and avoidance.
Identifying the causes of road crashes in Europe
Thomas, Pete; Morris, Andrew; Talbot, Rachel; Fagerlind, Helen
2013-01-01
This research applies a recently developed model of accident causation, originally developed to investigate industrial accidents, to a specially gathered sample of 997 crashes investigated in depth in 6 countries. Based on the work of Hollnagel, the model considers a collision to be a consequence of a breakdown in the interaction between road users, vehicles and the organisation of the traffic environment. 54% of road users experienced interpretation errors, while 44% made observation errors and 37% planning errors. In contrast to other studies, only 11% of drivers were identified as distracted and 8% as inattentive. There was remarkably little variation in these errors between the main road user types. The application of the model to future in-depth crash studies offers the opportunity to identify new measures to improve safety and to mitigate the social impact of collisions. Examples given include the potential value of co-driver advisory technologies to reduce observation errors and predictive technologies to avoid conflicting interactions between road users. PMID:24406942
Detection and avoidance of errors in computer software
NASA Technical Reports Server (NTRS)
Kinsler, Les
1989-01-01
The acceptance test errors of a computer software project were examined to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project is approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during acceptance testing. These acceptance test errors were first categorized by method of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The results show that the number of programming errors present at the beginning of acceptance testing can be significantly reduced. The existing development methodology is examined for possible improvements, and a basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness in avoiding and detecting errors.
Surgical errors and risks – the head and neck cancer patient
Harréus, Ulrich
2013-01-01
Head and neck surgery is one of the mainstays of head and neck cancer therapy. Surgical errors and malpractice can have fatal consequences for the treated patients: they can lead to functional impairment and affect future chances of disease-related survival. There are many risks for head and neck surgeons that can cause errors and malpractice. To avoid surgical mistakes, thorough preoperative management of patients is mandatory, including ensuring operability, cautious evaluation of preoperative diagnostics and careful operative planning. Moreover, knowledge of the anatomical structures of the head and neck and of the relevant medical studies and data, as well as qualification in modern surgical techniques and the surgeon's capacity for critical self-assessment, are basic and important prerequisites for head and neck surgeons in order to recognize risks and prevent mistakes. It is also important to have profound knowledge of the nutrition management of cancer patients and of wound healing, and to recognize and be able to deal with complications when they occur. Despite all precaution and surgical care, errors and mistakes cannot always be avoided. It is therefore important to be able to deal with mistakes and to establish appropriate and clear communication and management for such events. The manuscript comments on the recognition and prevention of risks and mistakes in the preoperative, operative and postoperative phases of head and neck cancer surgery. PMID:24403972
Caustic Singularities Of High-Gain, Dual-Shaped Reflectors
NASA Technical Reports Server (NTRS)
Galindo, Victor; Veruttipong, Thavath W.; Imbriale, William A.; Rengarajan, Sambiam
1991-01-01
Report presents study of some sources of error in analysis, by geometric theory of diffraction (GTD), of performance of high-gain, dual-shaped antenna reflector. Study probes into underlying analytic causes of singularity, with view toward devising and testing practical methods to avoid problems caused by singularity. Hybrid physical optics (PO) approach used to study near-field spillover and noise-temperature characteristics of high-gain reflector antenna efficiently and accurately. Report illustrates this approach and underlying principles by presenting numerical results, for both offset and symmetrical reflector systems, computed by GTD, PO, and PO/GO methods.
NASA Astrophysics Data System (ADS)
Kiso, Atsushi; Murakami, Hiroki; Seki, Hirokazu
This paper describes a novel obstacle avoidance control scheme for electric powered wheelchairs to realize safe driving in various environments. The "electric powered wheelchair," which generates its driving force with electric motors, is expected to be widely used as a mobility support system for elderly and disabled people; however, its driving performance must be further improved because the number of driving accidents caused by elderly operators' narrow sight and joystick operation errors is increasing. This paper proposes a novel obstacle avoidance control scheme based on a fuzzy algorithm to prevent driving accidents. The proposed control system determines the driving direction by fuzzy inference based on the joystick operation and the distance to obstacles measured by ultrasonic sensors. The fuzzy rules that determine the driving direction are designed to reliably avoid passers-by and walls while taking the operator's intent and the driving environment into account. Driving experiments in practical situations show the effectiveness of the proposed control system.
NASA Technical Reports Server (NTRS)
Madras, Eric I. (Inventor)
1995-01-01
A method and related apparatus for nondestructive evaluation of composite materials by determination of the quantity known as Integrated Polar Backscatter is presented. The method avoids errors caused by the surface texture left by cloth impressions: frequency ranges associated with peaks in the power spectrum of the backscattered signal are identified and removed from the calculation of Integrated Polar Backscatter for all scan sites on the composite material.
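The exclusion step lends itself to a short illustration. Below is a minimal sketch, in Python, of the idea as the abstract describes it: compute the power spectrum of the backscattered signal, flag bins associated with texture-induced peaks, and leave them out of the integration. The function name, the median-based peak threshold, and the variable names are illustrative assumptions, not the patented procedure.

```python
# Minimal sketch of excluding peak frequency bands from an integrated
# backscatter measure; `signal` is a sampled waveform, `fs` its sample rate.
import numpy as np

def integrated_backscatter(signal, fs, exclude_peaks=True, thresh_factor=5.0):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    keep = np.ones_like(spectrum, dtype=bool)
    if exclude_peaks:
        # Flag bins whose power exceeds a multiple of the median as
        # texture-induced peaks and drop them from the integration.
        keep = spectrum < thresh_factor * np.median(spectrum)
    df = freqs[1] - freqs[0]                             # bin width
    return spectrum[keep].sum() * df                     # integrate the rest
```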
Hematocrit Causes the Most Significant Error in Point of Care Glucometers
2009-04-01
pneumonia, as it permits leakage of pharyngeal secretions around the cuff. In our randomized controlled trial (5) of 165 patients included in the...intervention group, only 9 (5%) required reintubation. A few of these patients could perhaps benefit of the evaluation proposed by Stocchetti et al...and avoid an undue extubation; the duration of mechanical ventilation for the overall patients, however, would have been certainly much higher. The
Concomitant prescribing and dispensing errors at a Brazilian hospital: a descriptive study
Silva, Maria das Dores Graciano; Rosa, Mário Borges; Franklin, Bryony Dean; Reis, Adriano Max Moreira; Anchieta, Lêni Márcia; Mota, Joaquim Antônio César
2011-01-01
OBJECTIVE: To analyze the prevalence and types of prescribing and dispensing errors occurring with high-alert medications and to propose preventive measures to avoid errors with these medications. INTRODUCTION: The prevalence of adverse events in health care has increased, and medication errors are probably the most common cause of these events. Pediatric patients are known to be a high-risk group and are an important target in medication error prevention. METHODS: Observers collected data on prescribing and dispensing errors occurring with high-alert medications for pediatric inpatients in a university hospital. In addition to classifying the types of error that occurred, we identified cases of concomitant prescribing and dispensing errors. RESULTS: One or more prescribing errors, totaling 1,632 errors, were found in 632 (89.6%) of the 705 high-alert medications that were prescribed and dispensed. We also identified at least one dispensing error in each high-alert medication dispensed, totaling 1,707 errors. Among these dispensing errors, 723 (42.4%) content errors occurred concomitantly with the prescribing errors. A subset of dispensing errors may have occurred because of poor prescription quality. The observed concomitancy should be examined carefully because improvements in the prescribing process could potentially prevent these problems. CONCLUSION: The system of drug prescribing and dispensing at the hospital investigated in this study should be improved by incorporating the best practices of medication safety and preventing medication errors. High-alert medications may be used as triggers for improving the safety of the drug-utilization system. PMID:22012039
Modeling, Analyzing, and Mitigating Dissonance Between Alerting Systems
NASA Technical Reports Server (NTRS)
Song, Lixia; Kuchar, James K.
2003-01-01
Alerting systems are becoming pervasive in process operations, which may result in the potential for dissonance or conflict in information from different alerting systems that suggests different threat levels and/or actions to resolve hazards. Little is currently available to help in predicting or solving the dissonance problem. This thesis presents a methodology to model and analyze dissonance between alerting systems, providing both a theoretical foundation for understanding dissonance and a practical basis from which specific problems can be addressed. A state-space representation of multiple alerting system operation is generalized that can be tailored across a variety of applications. Based on the representation, two major causes of dissonance are identified: logic differences and sensor error. Additionally, several possible types of dissonance are identified. A mathematical analysis method is developed to identify the conditions for dissonance originating from logic differences. A probabilistic analysis methodology is developed to estimate the probability of dissonance originating from sensor error, and to compare the relative contribution to dissonance of sensor error against the contribution from logic differences. A hybrid model, which describes the dynamic behavior of the process with multiple alerting systems, is developed to identify dangerous dissonance space, from which the process can lead to disaster. Methodologies to avoid or mitigate dissonance are outlined. Two examples are used to demonstrate the application of the methodology. First, a conceptual In-Trail Spacing example is presented. The methodology is applied to identify the conditions for possible dissonance, to identify relative contribution of logic difference and sensor error, and to identify dangerous dissonance space. Several proposed mitigation methods are demonstrated in this example. In the second example, the methodology is applied to address the dissonance problem between two air traffic alert and avoidance systems: the existing Traffic Alert and Collision Avoidance System (TCAS) vs. the proposed Airborne Conflict Management system (ACM). Conditions on ACM resolution maneuvers are identified to avoid dynamic dissonance between TCAS and ACM. Also included in this report is an Appendix written by Lee Winder about recent and continuing work on alerting systems design. The application of Markov Decision Process (MDP) theory to complex alerting problems is discussed and illustrated with an abstract example system.
Computerized Design and Generation of Low-Noise Gears with Localized Bearing Contact
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Chen, Ningxin; Chen, Jui-Sheng; Lu, Jian; Handschuh, Robert F.
1995-01-01
The results of research projects directed at reducing the noise caused by misalignment in the following gear drives are presented: double-circular-arc helical gears, modified involute helical gears, face-milled spiral bevel gears, and face-milled formate-cut hypoid gears. Misalignment in these types of gear drives causes periodic, almost linear, discontinuous functions of transmission errors. The period of such functions is the meshing cycle, in which one pair of teeth is changed for the next. Due to the discontinuity of such functions of transmission errors, high vibration and noise are inevitable. A predesigned parabolic function of transmission errors is proposed that is able to absorb linear discontinuous functions of transmission errors and change the resulting function of transmission errors into a continuous one. The proposed idea was successfully tested using spiral bevel gears, and the noise was reduced substantially in comparison with the existing design. The idea of a predesigned parabolic function is applied to the reduction of noise of helical and hypoid gears. The effectiveness of the proposed approach has been investigated with developed TCA (tooth contact analysis) programs. The bearing contact for the mentioned gears is localized. Conditions that avoid edge contact for the gear drives have been determined. Manufacturing of helical gears with the new topology by hobs and grinding worms has been investigated.
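The absorbing property of the predesigned parabolic function can be checked with a few lines of algebra: adding a linear error b·φ to a parabola −a·φ² merely translates the parabola, so no discontinuity appears. A minimal numeric sketch, with arbitrary illustrative coefficients, follows.

```python
# Numeric check that a parabolic transmission-error function absorbs a
# linear error: the sum is the same parabola, shifted, hence still smooth.
import numpy as np

a, b = 0.5, 0.1                      # parabola coefficient, linear-error slope
phi = np.linspace(-1.0, 1.0, 201)    # angle within one meshing cycle

predesigned = -a * phi**2            # predesigned parabolic transmission error
with_misalignment = predesigned + b * phi

# Completing the square shows the sum is a translated copy of the parabola:
shifted = -a * (phi - b / (2 * a))**2 + b**2 / (4 * a)
assert np.allclose(with_misalignment, shifted)
```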
Scheduling periodic jobs using imprecise results
NASA Technical Reports Server (NTRS)
Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay
1987-01-01
One approach to avoiding timing faults in hard real-time systems is to make available intermediate, imprecise results produced by real-time processes. When a result of the desired quality cannot be produced in time, an imprecise result of acceptable quality produced before the deadline can be used. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. Since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result, the amount of processor time assigned to any task in a valid schedule can be less than the amount of time required to complete the task. A meaningful formulation of the scheduling problem must take into account the overall quality of the results. Depending on the types of undesirable effects caused by errors, jobs are classified as type N or type C. For type N jobs, the effects of errors in results produced in different periods are not cumulative; a reasonable performance measure is the average error over all jobs. Three heuristic algorithms that lead to feasible schedules with small average errors are described. For type C jobs, the undesirable effects of errors produced in different periods are cumulative. Schedulability criteria for type C jobs are discussed.
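As a concrete illustration of the type N performance measure, the sketch below assumes the common imprecise-computation decomposition of each task into a mandatory part and an optional part; the task set and names are illustrative, not taken from the paper.

```python
# Minimal sketch of the "average error" measure for type N jobs, assuming
# each task splits into a mandatory part m and an optional part o.
def average_error(tasks):
    """tasks: list of (m, o, assigned) processor-time triples.
    A schedule is valid only if assigned >= m; the unfinished fraction of
    the optional part is the task's error. Errors are averaged rather than
    accumulated because type N errors do not carry over between periods."""
    errors = []
    for m, o, assigned in tasks:
        assert assigned >= m, "mandatory part must always complete"
        done = min(assigned - m, o)          # optional work actually finished
        errors.append((o - done) / o if o > 0 else 0.0)
    return sum(errors) / len(errors)

print(average_error([(2, 4, 4), (1, 2, 3), (3, 3, 3)]))  # mean of 0.5, 0, 1
```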
A fatal outcome after unintentional overdosing of rivastigmine patches.
Lövborg, Henrik; Jönsson, Anna K; Hägg, Staffan
2012-02-01
Rivastigmine is an acetylcholine esterase inhibitor used in the treatment of dementia. Patches with rivastigmine for transdermal delivery have been used to increase compliance and to reduce side effects. We describe an 87-year-old male with dementia treated with multiple rivastigmine patches (Exelon 9.5 mg/24 h) who developed nausea, vomiting and renal failure with disturbed electrolytes, resulting in death. The symptoms occurred after six rivastigmine patches had been erroneously applied concomitantly by health care personnel on two consecutive days. The terminal cause of death was considered to be uremia from an acute tubular necrosis, assessed as a result of dehydration through vomiting. The rivastigmine intoxication was assessed as having caused or contributed to the dehydrated condition. The medication error occurred at least partly due to ambiguous labeling, and the clinical signs were not initially recognized as adverse effects of rivastigmine. The presented case describes a rivastigmine overdose due to a medication error involving patches. It indicates the importance of clear and unambiguous instructions to avoid administration errors with patches, and of vigilance toward adverse drug reactions for early detection and correction of drug administration errors. In particular, instructions clearly indicating that only one patch should be applied at a time are important.
Pushing particles in extreme fields
NASA Astrophysics Data System (ADS)
Gordon, Daniel F.; Hafizi, Bahman; Palastro, John
2017-03-01
The update of the particle momentum in an electromagnetic simulation typically employs the Boris scheme, which has the advantage that the magnetic field strictly performs no work on the particle. In an extreme field, however, it is found that onerously small time steps are required to maintain accuracy. One reason for this is that the operator splitting scheme fails. In particular, even if the electric field impulse and magnetic field rotation are computed exactly, a large error remains. The problem can be analyzed for the case of constant, but arbitrarily polarized and independent electric and magnetic fields. The error can be expressed in terms of exponentials of nested commutators of the generators of boosts and rotations. To second order in the field, the Boris scheme causes the error to vanish, but to third order in the field, there is an error that has to be controlled by decreasing the time step. This paper introduces a scheme that avoids this problem entirely, while respecting the property that magnetic fields cannot change the particle energy.
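For reference, a minimal sketch of the standard non-relativistic Boris momentum update that the abstract takes as its starting point is given below; the improved scheme proposed in the paper replaces this update, and the vectorized form here is only one way of coding the textbook algorithm.

```python
# Standard Boris push: half electric impulse, magnetic rotation (which does
# no work on the particle), then the second half electric impulse.
import numpy as np

def boris_push(v, E, B, qm, dt):
    """v: velocity vector; E, B: field vectors; qm: charge-to-mass ratio."""
    v_minus = v + 0.5 * qm * dt * E              # first half E-field impulse
    t = 0.5 * qm * dt * B                        # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)      # completed rotation
    return v_plus + 0.5 * qm * dt * E            # second half E-field impulse
```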
PREVALENCE OF REFRACTIVE ERRORS IN MADRASSA STUDENTS OF HARIPUR DISTRICT.
Atta, Zoia; Arif, Abdus Salam; Ahmed, Iftikhar; Farooq, Umer
2015-01-01
Visual impairment due to refractive errors is one of the most common problems among school-age children and is the second leading cause of treatable blindness. The Right to Sight, a global initiative launched by a coalition of non-government organizations and the World Health Organization (WHO), aims to eliminate avoidable visual impairment and blindness at a global level. In order to achieve this goal it is important to know the prevalence of different refractive errors in a community. Children and teenagers are the groups most susceptible to refractive errors, so this population needs to be screened for the different types. The study was done with the objective of finding the frequency of different types of refractive errors in madrassa students between the ages of 5 and 20 years in Haripur. This cross-sectional study was done with 300 students aged 5-20 years in madrassas of Haripur. The students were screened for refractive errors and the types of errors were noted; after screening, glasses were prescribed to the students. Myopia (52.6%) was the most frequent refractive error, followed by hyperopia (28.4%) and astigmatism (19%). This study showed that myopia is an important problem in the madrassa population. Females and males are almost equally affected. Spectacle correction of refractive errors is the cheapest and easiest solution to this problem.
Dysfunctional health service conflict: causes and accelerants.
Nelson, H Wayne
2012-01-01
This article examines the causes and accelerants of dysfunctional health service conflict and how it emerges from the health system's core hierarchical structures, specialized roles, participant psychodynamics, culture, and values. This article sets out to answer whether health care conflict is more widespread and intense than in other settings and if it is, why? To this end, health care power, gender, and educational status gaps are examined with an eye to how they undermine open communication, teamwork, and collaborative forms of conflict and spark a range of dysfunctions, including a pervasive culture of fear; the deny-and-defend lawsuit response; widespread patterns of hierarchical, generational, and lateral bullying; overly avoidant conflict styles among non-elite groups; and a range of other behaviors that lead to numerous human resource problems, including burnout, higher staff turnover, increased errors, poor employee citizenship behavior, patient dissatisfaction, increased patient complaints, and lawsuits. Bad patient outcomes include decreased compliance and increased morbidity and mortality. Health care managers must understand the root causes of these problems to treat them at the source and implement solutions that avoid negative conflict spirals that undermine organizational morale and efficiency.
Online Estimation of Allan Variance Coefficients Based on a Neural-Extended Kalman Filter
Miao, Zhiyong; Shen, Feng; Xu, Dingjie; He, Kunpeng; Tian, Chunmiao
2015-01-01
As a noise analysis method for inertial sensors, the traditional Allan variance method requires the storage of a large amount of data and manual analysis of an Allan variance graph. Although existing online estimation methods avoid the storage of data and the painful procedure of drawing slope lines for estimation, they require complex transformations and can even introduce errors during the modeling of dynamic Allan variance. To solve these problems, first, a nonlinear state-space model that directly models the stochastic errors of inertial sensors was established. Then, a neural-extended Kalman filter algorithm was used to estimate the Allan variance coefficients. The real noises of an ADIS16405 IMU and fiber optic gyro-sensors were analyzed by the proposed method and by traditional methods. The experimental results show that the proposed method is more suitable for estimating the Allan variance coefficients than the traditional methods. Moreover, the proposed method effectively avoids the storage of data and can be easily implemented on an online processor. PMID:25625903
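For comparison, the conventional off-line computation that the proposed online filter is meant to replace can be sketched in a few lines; the sampling rate, cluster sizes, and names below are illustrative.

```python
# Conventional Allan variance: group samples into clusters of m points for
# each averaging time tau = m / fs, then take half the mean squared
# difference of successive cluster averages.
import numpy as np

def allan_variance(y, fs, m_list):
    """y: 1-D array of sensor output; fs: sample rate; m_list: cluster sizes."""
    taus, avars = [], []
    for m in m_list:
        n = len(y) // m
        if n < 2:
            break
        means = y[: n * m].reshape(n, m).mean(axis=1)    # cluster averages
        avars.append(0.5 * np.mean(np.diff(means) ** 2))
        taus.append(m / fs)
    return np.array(taus), np.array(avars)
```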
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@nao.ac.jp, E-mail: tof@astr.tohoku.ac.jp
This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise makes additional moments and centroid shift error; those first-order effects are canceled in averaging, but the second-order effects are not canceled. We derive the formulae that correct this systematic error due to the random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.
Beauty from the beast: Avoiding errors in responding to client questions.
Waehler, Charles A; Grandy, Natalie M
2016-09-01
Those rare moments when clients ask direct questions of their therapists likely represent a point when they are particularly open to new considerations, thereby representing an opportunity for substantial therapeutic gains. However, clinical errors abound in this area because clients' questions often engender apprehension in therapists, causing therapists to respond with too little or too much information or shutting down the discussion prematurely. These response types can damage the therapeutic relationship, the psychotherapy process, or both. We explore the nature of these clinical errors in response to client questions by providing examples from our own clinical work, suggesting potential reasons why clinicians may not make optimal use of client questions, and discussing how the mixed psychological literature further complicates the issue. We also present four guidelines designed to help therapists, trainers, and supervisors respond constructively to clinical questions in order to create constructive interactions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Cheng, Yuhua; Chen, Kai; Bai, Libing; Yang, Jing
2014-02-01
Precise control of the grid-connected current is a challenge in photovoltaic inverter research. Traditional Proportional-Integral (PI) control cannot eliminate the steady-state error when tracking the sinusoidal signal from the grid, which results in a very high total harmonic distortion in the grid-connected current. A novel PI controller has been developed in this paper, in which the sinusoidal wave is discretized into an N-step input signal, with N determined by the control frequency, to eliminate the steady-state error of the system. The effect of the periodic error caused by the dead zone of the power switch and the conduction voltage drop can be avoided; the current tracking accuracy and the current harmonic content can also be improved. Based on the proposed PI controller, a 700 W photovoltaic grid-connected inverter is developed and validated. The improvement has been demonstrated through experimental results.
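A minimal sketch of the discretized-reference PI loop described above follows; the gains, the step count N, and the first-order stand-in plant are illustrative assumptions, since the actual inverter is a switched power stage rather than a simple lag.

```python
# Discrete PI loop tracking a sinusoidal reference discretized into N steps
# per period; N is set by the control frequency, as the abstract describes.
import math

Kp, Ki, dt, N = 2.0, 50.0, 1e-4, 200   # gains, control period, steps/cycle
y, integ = 0.0, 0.0                    # plant output, integrator state
for k in range(5 * N):                 # run five reference cycles
    step = k % N                       # index into the discretized reference
    ref = math.sin(2 * math.pi * step / N)
    err = ref - y
    integ += err * dt
    u = Kp * err + Ki * integ          # PI control law
    y += dt * (u - y) / 0.01           # stand-in first-order plant, tau=10 ms
```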
NASA Astrophysics Data System (ADS)
Song, K.; Song, H. P.; Gao, C. F.
2018-03-01
It is well known that the key factor determining the performance of thermoelectric materials is the figure of merit, which depends on the thermal conductivity (TC), electrical conductivity, and Seebeck coefficient (SC). The electric current must be zero when measuring the TC and SC to avoid the occurrence of measurement errors. In this study, the complex-variable method is used to analyze the thermoelectric field near an elliptic inhomogeneity in an open circuit, and the field distributions are obtained in closed form. Our analysis shows that an electric current inevitably exists in both the matrix and the inhomogeneity even though the circuit is open. This unexpected electric current seriously affects the accuracy with which the TC and SC are measured. These measurement errors, both overall and local, are analyzed in detail. In addition, an error correction method is proposed based on the analytical results.
Fabbretti, G
2010-06-01
Because of its complex nature, surgical pathology practice is prone to error. In this report, we describe our methods for reducing error as much as possible during the pre-analytical and analytical phases. This was achieved by revising procedures, and by using computer technology and automation. Most mistakes are the result of human error in the identification and matching of patients and samples. To avoid faulty data interpretation, we employed a new comprehensive computer system that acquires all patient ID information directly from the hospital's database with remote order entry; it also provides label and request forms via the Web, where clinical information is required before sending the sample. Both patient and sample are identified directly and immediately at the site where the surgical procedures are performed. Barcode technology is used to input information at every step, and automation is used for sample blocks and slides to avoid errors that occur when information is recorded or transferred by hand. Quality control checks occur at every step of the process to ensure that none of the steps are left to chance and that no phase is dependent on a single operator. The system also provides statistical analysis of errors so that new strategies can be implemented to avoid repetition. In addition, the staff receives frequent training on avoiding errors and on new developments. The results have been promising, with a very low error rate (0.27%); none of the errors compromised patient health, and all were detected before the release of the diagnosis report.
[Medication errors in a neonatal unit: One of the main adverse events].
Esqué Ruiz, M T; Moretones Suñol, M G; Rodríguez Miguélez, J M; Sánchez Ortiz, E; Izco Urroz, M; de Lamo Camino, M; Figueras Aloy, J
2016-04-01
Neonatal units are one of the hospital areas most exposed to the committing of treatment errors. A medication error (ME) is defined as an avoidable incident secondary to drug misuse that causes or may cause harm to the patient. The aim of this paper is to present the incidence of MEs (including feeding errors) reported in our neonatal unit, their characteristics and possible causal factors, together with the strategies implemented for prevention. An analysis was performed of the MEs declared in a neonatal unit. A total of 511 MEs were reported over a period of seven years. The incidence in the critical care unit was 32.2 per 1000 hospital days, or 20 per 100 patients, of which 0.22 per 1000 days had serious repercussions. Of the MEs reported, 39.5% were prescribing errors, 68.1% administration errors and 0.6% adverse drug reactions; 65.4% involved drugs, and 17% were intercepted. The large majority (89.4%) had no impact on the patient, but 0.6% caused permanent damage or death. Nurses reported 65.4% of MEs. The most commonly implicated causal factor was distraction (59%). Simple corrective actions (alerts), intermediate actions (protocols, clinical sessions and courses) and complex actions (causal analysis, monograph) were performed. It is essential to determine the current state of MEs in order to establish preventive measures and, together with teamwork and good practices, promote a climate of safety. Copyright © 2015 Asociación Española de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.
Common errors of drug administration in infants: causes and avoidance.
Anderson, B J; Ellis, J F
1999-01-01
Drug administration errors are common in infants. Although the infant population has a high exposure to drugs, there are few data concerning pharmacokinetics or pharmacodynamics, or the influence of paediatric diseases on these processes. Children remain therapeutic orphans. Formulations are often suitable only for adults; in addition, the lack of maturation of drug elimination processes, alteration of body composition and influence of size render the calculation of drug doses complex in infants. The commonest drug administration error in infants is one of dose, and the commonest hospital site for this error is the intensive care unit. Drug errors are a consequence of system error, and preventive strategies are possible through system analysis. The goal of a zero drug error rate should be aggressively sought, with systems in place that aim to eliminate the effects of inevitable human error. This involves review of the entire system from drug manufacture to drug administration. The nuclear industry, telecommunications and air traffic control services all practise error reduction policies with zero error as a clear goal, not by finding fault in the individual, but by identifying faults in the system and building into that system mechanisms for picking up faults before they occur. Such policies could be adapted to medicine using interventions both specific (the production of formulations which are for children only and clearly labelled, regular audit by pharmacists, legible prescriptions, standardised dose tables) and general (paediatric drug trials, education programmes, nonpunitive error reporting) to reduce the number of errors made in giving medication to infants.
Burnout: Recognize and Reverse.
Anne, Samantha
2014-07-01
Physician burnout may be underrecognized and can cause significant detrimental effects on personal health and job satisfaction. Burnout has been associated with medical errors, alcohol and drug abuse, and neglect and abandonment of career goals. With self-awareness, development of coping mechanisms, and the adoption of a strong social and professional support network, burnout can be combated. This article focuses on recognizing characteristics of burnout and providing strategies to cope to avoid reaching a high degree of burnout. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2014.
Reduction in chemotherapy order errors with computerized physician order entry.
Meisenberg, Barry R; Wright, Robert R; Brady-Copertino, Catherine J
2014-01-01
To measure the number and type of errors associated with chemotherapy order composition associated with three sequential methods of ordering: handwritten orders, preprinted orders, and computerized physician order entry (CPOE) embedded in the electronic health record. From 2008 to 2012, a sample of completed chemotherapy orders were reviewed by a pharmacist for the number and type of errors as part of routine performance improvement monitoring. Error frequencies for each of the three distinct methods of composing chemotherapy orders were compared using statistical methods. The rate of problematic order sets-those requiring significant rework for clarification-was reduced from 30.6% with handwritten orders to 12.6% with preprinted orders (preprinted v handwritten, P < .001) to 2.2% with CPOE (preprinted v CPOE, P < .001). The incidence of errors capable of causing harm was reduced from 4.2% with handwritten orders to 1.5% with preprinted orders (preprinted v handwritten, P < .001) to 0.1% with CPOE (CPOE v preprinted, P < .001). The number of problem- and error-containing chemotherapy orders was reduced sequentially by preprinted order sets and then by CPOE. CPOE is associated with low error rates, but it did not eliminate all errors, and the technology can introduce novel types of errors not seen with traditional handwritten or preprinted orders. Vigilance even with CPOE is still required to avoid patient harm.
Flip-avoiding interpolating surface registration for skull reconstruction.
Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye
2018-03-30
Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.
Clinical Dental Faculty Members' Perceptions of Diagnostic Errors and How to Avoid Them.
Nikdel, Cathy; Nikdel, Kian; Ibarra-Noriega, Ana; Kalenderian, Elsbeth; Walji, Muhammad F
2018-04-01
Diagnostic errors are increasingly recognized as a source of preventable harm in medicine, yet little is known about their occurrence in dentistry. The aim of this study was to gain a deeper understanding of clinical dental faculty members' perceptions of diagnostic errors, types of errors that may occur, and possible contributing factors. The authors conducted semi-structured interviews with ten domain experts at one U.S. dental school in May-August 2016 about their perceptions of diagnostic errors and their causes. The interviews were analyzed using an inductive process to identify themes and key findings. The results showed that the participants varied in their definitions of diagnostic errors. While all identified missed diagnosis and wrong diagnosis, only four participants perceived that a delay in diagnosis was a diagnostic error. Some participants perceived that an error occurs only when the choice of treatment leads to harm. Contributing factors associated with diagnostic errors included the knowledge and skills of the dentist, not taking adequate time, lack of communication among colleagues, and cognitive biases such as premature closure based on previous experience. Strategies suggested by the participants to prevent these errors were taking adequate time when investigating a case, forming study groups, increasing communication, and putting more emphasis on differential diagnosis. These interviews revealed differing perceptions of dental diagnostic errors among clinical dental faculty members. To address the variations, the authors recommend adopting shared language developed by the medical profession to increase understanding.
Data-driven region-of-interest selection without inflating Type I error rate.
Brooks, Joseph L; Zoumpoulaki, Alexia; Bowman, Howard
2017-01-01
In ERP and other large multidimensional neuroscience data sets, researchers often select regions of interest (ROIs) for analysis. The method of ROI selection can critically affect the conclusions of a study by causing the researcher to miss effects in the data or to detect spurious effects. In practice, to avoid inflating Type I error rate (i.e., false positives), ROIs are often based on a priori hypotheses or independent information. However, this can be insensitive to experiment-specific variations in effect location (e.g., latency shifts) reducing power to detect effects. Data-driven ROI selection, in contrast, is nonindependent and uses the data under analysis to determine ROI positions. Therefore, it has potential to select ROIs based on experiment-specific information and increase power for detecting effects. However, data-driven methods have been criticized because they can substantially inflate Type I error rate. Here, we demonstrate, using simulations of simple ERP experiments, that data-driven ROI selection can indeed be more powerful than a priori hypotheses or independent information. Furthermore, we show that data-driven ROI selection using the aggregate grand average from trials (AGAT), despite being based on the data at hand, can be safely used for ROI selection under many circumstances. However, when there is a noise difference between conditions, using the AGAT can inflate Type I error and should be avoided. We identify critical assumptions for use of the AGAT and provide a basis for researchers to use, and reviewers to assess, data-driven methods of ROI localization in ERP and other studies. © 2016 Society for Psychophysiological Research.
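A minimal sketch of AGAT-based ROI selection follows; the array shapes, the window half-width, and the function names are illustrative. The key property is that the ROI is chosen from the pooled average of trials from both conditions, so the selection step is blind to the condition labels (and, per the caveat above, should not be used when the conditions differ in noise level).

```python
# ROI selection on the aggregate grand average (AGAT): pool trials from both
# conditions, locate the peak, and only then extract per-condition means.
import numpy as np

def agat_roi(cond_a, cond_b, half_width=5):
    """cond_a, cond_b: (n_trials, n_timepoints) arrays of single trials."""
    agat = np.vstack([cond_a, cond_b]).mean(axis=0)   # pooled grand average
    peak = int(np.argmax(np.abs(agat)))               # ROI centre
    lo, hi = max(0, peak - half_width), peak + half_width + 1
    # Condition means inside the ROI; statistics are run on these values,
    # not on the full waveform, which is what keeps Type I error controlled.
    return cond_a[:, lo:hi].mean(axis=1), cond_b[:, lo:hi].mean(axis=1)
```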
NASA Astrophysics Data System (ADS)
Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling
2017-09-01
In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to direction in supervised mode. The images of the data sets are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment tracks a desired path composed of straight and curved lines; the goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and avoid the obstacle in the room accurately. The results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
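A minimal sketch of the described ingredients, written in PyTorch as an assumption (the paper does not name its framework), is shown below: a small convolutional classifier mapping raw images to a steering direction, plus Gaussian and salt-and-pepper noise augmentation. It uses far fewer than 15 layers, for brevity.

```python
# Small image-to-direction classifier with the two noise augmentations the
# abstract mentions; images are assumed to be float tensors in [0, 1].
import torch
import torch.nn as nn

class DirectionNet(nn.Module):
    def __init__(self, n_directions=3):          # e.g. left / straight / right
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(64, n_directions)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def augment(img, sigma=0.05, sp_frac=0.02):
    """Gaussian noise plus salt-and-pepper noise, as in the abstract."""
    noisy = img + sigma * torch.randn_like(img)
    mask = torch.rand_like(img) < sp_frac         # pixels to corrupt
    noisy[mask] = torch.randint(0, 2, (int(mask.sum()),)).to(img.dtype)
    return noisy.clamp(0, 1)
```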
NASA Technical Reports Server (NTRS)
Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.
1994-01-01
Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. This data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity with neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using an underwater video vault system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
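The error metric described reduces to a few lines of array arithmetic; the grid coordinates below are illustrative, not the WETF data.

```python
# Per-point error: Euclidean distance between known grid coordinates and the
# coordinates recovered from the digitized video, as a percentage of grid size.
import numpy as np

known = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
measured = np.array([[0.1, -0.2], [9.6, 0.3], [0.2, 9.7], [10.4, 10.3]])

errors = np.linalg.norm(measured - known, axis=1)   # per-point distance
extent = np.ptp(known, axis=0).max()                # grid extent
print((100.0 * errors / extent).round(2))           # percent errors
```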
Medication errors: problems and recommendations from a consensus meeting
Agrawal, Abha; Aronson, Jeffrey K; Britten, Nicky; Ferner, Robin E; de Smet, Peter A; Fialová, Daniela; Fitzgerald, Richard J; Likić, Robert; Maxwell, Simon R; Meyboom, Ronald H; Minuz, Pietro; Onder, Graziano; Schachter, Michael; Velo, Giampaolo
2009-01-01
Here we discuss 15 recommendations for reducing the risks of medication errors:
1. Provision of sufficient undergraduate learning opportunities to make medical students safe prescribers.
2. Provision of opportunities for students to practise skills that help to reduce errors.
3. Education of students about common types of medication errors and how to avoid them.
4. Education of prescribers in taking accurate drug histories.
5. Assessment in medical schools of prescribing knowledge and skills and demonstration that newly qualified doctors are safe prescribers.
6. European harmonization of prescribing and safety recommendations and regulatory measures, with regular feedback about rational drug use.
7. Comprehensive assessment of elderly patients for declining function.
8. Exploration of low-dose regimens for elderly patients and preparation of special formulations as required.
9. Training for all health-care professionals in drug use, adverse effects, and medication errors in elderly people.
10. More involvement of pharmacists in clinical practice.
11. Introduction of integrated prescription forms and national implementation in individual countries.
12. Development of better monitoring systems for detecting medication errors, based on classification and analysis of spontaneous reports of previous reactions, and for investigating the possible role of medication errors when patients die.
13. Use of IT systems, when available, to provide methods of avoiding medication errors; standardization, proper evaluation, and certification of clinical information systems.
14. Nonjudgmental communication with patients about their concerns and elicitation of symptoms that they perceive to be adverse drug reactions.
15. Avoidance of defensive reactions if patients mention symptoms resulting from medication errors.
PMID:19594525
Dineen, B; Bourne, R R A; Jadoon, Z; Shah, S P; Khan, M A; Foster, A; Gilbert, C E; Khan, M D
2007-01-01
OBJECTIVE: To determine the causes of blindness and visual impairment in adults (⩾30 years old) in Pakistan, and to explore socio-demographic variations in cause. METHODS: A multi-stage, stratified, cluster random sampling survey was used to select a nationally representative sample of adults. Each subject was interviewed, had their visual acuity measured and underwent autorefraction and fundus/optic disc examination. Those with a visual acuity of <6/12 in either eye underwent a more detailed ophthalmic examination. Causes of visual impairment were classified according to the accepted World Health Organization (WHO) methodology. An exploration of demographic variables was conducted using regression modeling. RESULTS: A sample of 16,507 adults (95.5% of those enumerated) was examined. Cataract was the most common cause of blindness (51.5%; defined as <3/60 in the better eye on presentation), followed by corneal opacity (11.8%), uncorrected aphakia (8.6%) and glaucoma (7.1%). Posterior capsular opacification accounted for 3.6% of blindness. Among the moderately visually impaired (<6/18 to ⩾6/60), refractive error was the most common cause (43%), followed by cataract (42%). Refractive error as a cause of severe visual impairment/blindness was significantly more common in rural dwellers than in urban dwellers (odds ratio (OR) 3.5, 95% CI 1.1 to 11.7). Significant provincial differences were also identified. Overall, we estimate that 85.5% of causes were avoidable and that 904,000 adults in Pakistan have cataract (<6/60) requiring surgical intervention. CONCLUSIONS: This comprehensive survey provides reliable estimates of the causes of blindness and visual impairment in Pakistan. Despite expanded surgical services, cataract still accounts for over half of the cases of blindness. One in eight blind adults has visual loss from sequelae of cataract surgery. Services for refractive errors need to be further expanded and integrated into eye care services, particularly those serving rural populations. PMID:17229806
Yousef, Nadin; Yousef, Farah
2017-09-04
Whereas one of the predominant causes of medication errors is drug administration error, a previous study related to our investigations estimated that medication errors occurred in 6.7 out of 100 administered medication doses. We therefore aimed, using a six sigma approach, to propose a way to reduce these errors to fewer than 1 out of 100 administered doses by improving healthcare professional education and producing clearer handwritten prescriptions. The study was held in a general government hospital. First, we systematically studied the current medication use process. Second, we used the six sigma approach, utilizing the five-step DMAIC process (Define, Measure, Analyze, Implement, Control), to find the real reasons behind such errors and to work out a useful solution for avoiding medication error incidences in daily healthcare professional practice. Data sheets were used as the data tool and Pareto diagrams as the analysis tool. Our investigation identified the real cause behind administered medication errors: the Pareto diagrams showed that the error percentage in the administration phase was 24.8%, while the percentage of errors related to the prescribing phase was 42.8%, 1.7 times higher. This means that mistakes in the prescribing phase, especially poor handwritten prescriptions, whose share in this phase was 17.6%, are responsible for consequent mistakes later in the treatment process. Therefore, we proposed in this study an effective low-cost strategy, based on the behavior of healthcare workers, in the form of guideline recommendations to be followed by physicians. This method can serve as a prior caution to decrease errors in the prescribing phase, which may reduce administered medication error incidences to less than 1%. This behavioral improvement can be efficient in improving handwritten prescriptions and in decreasing the consequent errors related to administered medication doses to below the global standard; as a result, it enhances patient safety. However, we hope other studies will later be made in hospitals to practically evaluate how effective our proposed systematic strategy really is in comparison with other suggested remedies in this field.
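The Pareto tabulation used in the Analyze step can be sketched as follows; the category counts are illustrative, not the hospital's data.

```python
# Pareto analysis: sort error causes by frequency and report cumulative
# percentages to expose the "vital few" causes worth targeting first.
causes = {"poor handwriting": 176, "wrong dose prescribed": 120,
          "administration fault": 248, "dispensing fault": 90}

total = sum(causes.values())
cum = 0.0
for cause, n in sorted(causes.items(), key=lambda kv: kv[1], reverse=True):
    share = 100.0 * n / total
    cum += share
    print(f"{cause:25s} {share:5.1f}%  cumulative {cum:5.1f}%")
```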
[Improving blood safety: errors management in transfusion medicine].
Bujandrić, Nevenka; Grujić, Jasmina; Krga-Milanović, Mirjana
2014-01-01
The concept of blood safety includes the entire transfusion chain, starting with the collection of blood from the blood donor and ending with blood transfusion to the patient. The concept involves a quality management system with systematic monitoring of adverse reactions and incidents involving the blood donor or patient. Monitoring of near-miss errors reveals the critical points in the working process and increases transfusion safety. The aim of the study was to present the analysis of adverse and unexpected events in transfusion practice with a potential risk to the health of blood donors and patients. This one-year retrospective study was based on the collection, analysis and interpretation of written reports on medical errors in the Blood Transfusion Institute of Vojvodina. Errors were distributed according to type, frequency and the part of the working process where they occurred. Possible causes and corrective actions were described for each error. The study showed that there were no errors with potential health consequences for the blood donor/patient. Errors with potentially damaging consequences for patients were detected throughout the entire transfusion chain, most of them in the preanalytical phase. The human factor was responsible for the largest number of errors. An error reporting system has an important role in error management and in the reduction of transfusion-related risk of adverse events and incidents. The ongoing analysis reveals the strengths and weaknesses of the entire process and indicates the necessary changes. Errors in transfusion medicine can be avoided in a large percentage of cases, and prevention is cost-effective, systematic and applicable.
Llorca, David F.; Sotelo, Miguel A.; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M.
2010-01-01
This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters in the stereo quantization errors is analyzed in detail providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for being used in the automotive sector towards applications such as autonomous pedestrian collision avoidance. PMID:22319323
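The dominant quantization effect analyzed in the paper can be summarized with first-order error propagation on the stereo depth equation Z = f·B/d: a disparity error of Δd produces a depth error that grows with Z squared. The sketch below uses illustrative camera parameters, not the sensor's actual calibration.

```python
# Stereo depth-quantization error: Z = f*B/d implies |dZ| ~ Z^2 * dd / (f*B),
# so far-away pedestrians are localized much less precisely than near ones.
def depth_error(Z, f=800.0, B=0.3, delta_d=1.0):
    """f: focal length in pixels; B: baseline in metres; delta_d: disparity
    step in pixels (the quantization of the stereo matcher)."""
    return Z**2 * delta_d / (f * B)   # first-order error propagation

for Z in (5.0, 10.0, 20.0):           # metres
    print(f"Z = {Z:4.1f} m -> quantization error ~ {depth_error(Z):.2f} m")
```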
Adaptive color halftoning for minimum perceived error using the blue noise mask
NASA Astrophysics Data System (ADS)
Yu, Qing; Parker, Kevin J.
1997-04-01
Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moire patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angles or Moire patterns are produced. However, the simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as using mutually exclusive BNMs for different color planes. In this paper, the above schemes are studied in CIE-LAB color space in terms of root mean square error and variance for the luminance channel and the chrominance channel, respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, the 4-mask scheme results in minimum luminance error but maximum chrominance error, and the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying two mutually exclusive BNMs on two color planes and an adaptive scheme on the other planes to reduce color error. We show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.
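The evaluation described, RMS error in CIE-LAB split into luminance and chrominance parts, can be sketched as follows; the inputs are assumed to be (h, w, 3) arrays already converted to LAB, and the names are illustrative.

```python
# RMS error in CIE-LAB, reported separately for the L* (luminance) channel
# and the a*, b* (chrominance) channels, comparing a halftoned patch against
# the continuous-tone original.
import numpy as np

def lab_rmse(lab_ref, lab_halftone):
    diff = lab_halftone - lab_ref
    lum_rmse = np.sqrt(np.mean(diff[..., 0] ** 2))                  # L*
    chrom_rmse = np.sqrt(np.mean(np.sum(diff[..., 1:] ** 2, axis=-1)))  # a*, b*
    return lum_rmse, chrom_rmse
```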
Imperfect practice makes perfect: error management training improves transfer of learning.
Dyre, Liv; Tabor, Ann; Ringsted, Charlotte; Tolsgaard, Martin G
2017-02-01
Traditionally, trainees are instructed to practise with as few errors as possible during simulation-based training. However, transfer of learning may improve if trainees are encouraged to commit errors. The aim of this study was to assess the effects of error management instructions compared with error avoidance instructions during simulation-based ultrasound training. Medical students (n = 60) with no prior ultrasound experience were randomised to error management training (EMT) (n = 32) or error avoidance training (EAT) (n = 28). The EMT group was instructed to deliberately make errors during training. The EAT group was instructed to follow the simulator instructions and to commit as few errors as possible. Training consisted of 3 hours of simulation-based ultrasound training focusing on fetal weight estimation. Simulation-based tests were administered before and after training. Transfer tests were performed on real patients 7-10 days after the completion of training. Primary outcomes were transfer test performance scores and diagnostic accuracy. Secondary outcomes included performance scores and diagnostic accuracy during the simulation-based pre- and post-tests. A total of 56 participants completed the study. On the transfer test, EMT group participants attained higher performance scores (mean score: 67.7%, 95% confidence interval [CI]: 62.4-72.9%) than EAT group members (mean score: 51.7%, 95% CI: 45.8-57.6%) (p < 0.001; Cohen's d = 1.1, 95% CI: 0.5-1.7). There was a moderate improvement in diagnostic accuracy in the EMT group compared with the EAT group (16.7%, 95% CI: 10.2-23.3% weight deviation versus 26.6%, 95% CI: 16.5-36.7% weight deviation [p = 0.082; Cohen's d = 0.46, 95% CI: -0.06 to 1.0]). No significant interaction effects between group and performance improvements between the pre- and post-tests were found in either performance scores (p = 0.25) or diagnostic accuracy (p = 0.09). The provision of error management instructions during simulation-based training improves the transfer of learning to the clinical setting compared with error avoidance instructions. Rather than teaching to avoid errors, the use of errors for learning should be explored further in medical education theory and practice. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Usage of DNA Fingerprinting Technology for Quality Control in Molecular Lab Bench Work.
McIntosh, Linda Y; Lal, Janella E; Qin, Dahui
2018-01-01
One of the major quality assurance (QA) goals in many molecular laboratories is to avoid sample pipetting errors on the lab bench, especially when pipetting into multiwell plates. A pipetting error can cause a switch in patient samples, which can lead to recording the wrong results for the patient samples involved. Such pipetting errors are difficult to identify when they happen during lab bench work. DNA fingerprinting is a powerful tool for determining sample identities. Our laboratory has explored the use of this technology in our QA process and successfully established that DNA fingerprinting can be used to monitor possible sample switches in gene rearrangement lab bench work. We use fluorescent light to quench the fluorescence in the gene rearrangement polymerase chain reaction products. After that, DNA fingerprinting technology is used to identify the sample DNA in the gene rearrangement polymerase chain reaction plate. The result is compared with the corresponding patient's blood sample DNA to determine whether there was a sample switch during the lab bench work.
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, the loss of efficiency in standard error estimates can be avoided. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
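The paper's code targets the MCMC procedure of SAS; as a language-neutral illustration of the core idea (explicitly specifying a heavy-tailed error distribution instead of assuming normality), here is a minimal random-walk Metropolis sketch for a linear growth model with Student-t errors. All names, data and tuning values are ours, not the paper's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated longitudinal data: linear growth with heavy-tailed errors.
t = np.tile(np.arange(5), 50)                     # 50 subjects, 5 waves
y = 1.0 + 0.5 * t + stats.t(df=3).rvs(t.size, random_state=rng)

def log_post(theta, nu=4.0):
    """Log-posterior of (intercept, slope, log sigma) under an
    explicitly specified Student-t error model with flat priors."""
    b0, b1, log_s = theta
    resid = y - (b0 + b1 * t)
    return stats.t.logpdf(resid, df=nu, scale=np.exp(log_s)).sum()

# Random-walk Metropolis sampler.
theta = np.array([0.0, 0.0, 0.0])
lp = log_post(theta)
samples = []
for i in range(20000):
    prop = theta + rng.normal(scale=0.05, size=3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject step
        theta, lp = prop, lp_prop
    if i >= 5000:                                  # discard burn-in
        samples.append(theta.copy())

samples = np.array(samples)
print("posterior means (b0, b1):", samples[:, :2].mean(axis=0))
print("posterior mean sigma    :", np.exp(samples[:, 2]).mean())
```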
The pitfalls of premature closure: clinical decision-making in a case of aortic dissection
Kumar, Bharat; Kanna, Balavenkatesh; Kumar, Suresh
2011-01-01
Premature closure is a type of cognitive error in which the physician fails to consider reasonable alternatives after an initial diagnosis is made. It is a common cause of delayed diagnosis and misdiagnosis borne out of a faulty clinical decision-making process. The authors present a case of aortic dissection in which premature closure was avoided by the aggressive pursuit of the appropriate differential diagnosis, and discuss the importance of disciplined clinical decision-making in the setting of chest pain. PMID:22679162
Salient object detection based on multi-scale contrast.
Wang, Hai; Dai, Lei; Cai, Yingfeng; Sun, Xiaoqiang; Chen, Long
2018-05-01
Due to the development of deep learning networks, salient object detection based on deep learning, in which features are extracted by very deep convolutional networks, has made a great breakthrough compared with traditional methods. In deep learning networks, however, a dramatic increase in network depth may instead cause more training errors. In this paper, we use a residual network to increase network depth while mitigating the errors caused by the depth increase. Inspired by image simplification, we use color and texture features to obtain simplified images at multiple scales by means of region assimilation on the basis of super-pixels, in order to reduce the complexity of images and to improve the accuracy of salient target detection. We refine the features at the pixel level by a multi-scale feature correction method to avoid the feature errors introduced when the image is simplified at the above-mentioned region level. The final fully connected layer not only integrates multi-scale and multi-level features but also works as the classifier of salient targets. The experimental results show that the proposed model achieves better results than other salient object detection models based on original deep learning networks. Copyright © 2018 Elsevier Ltd. All rights reserved.
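The residual idea the abstract relies on is that each block learns only a correction F(x) and adds the input back, so extra depth cannot easily make training worse. A minimal numpy sketch of a residual block; the shapes and weights are illustrative, not the paper's architecture:

```python
import numpy as np

def residual_block(x, w1, w2):
    """y = F(x) + x: two small linear+ReLU layers plus an identity
    shortcut, so the block only has to learn the residual F(x)."""
    h = np.maximum(0.0, x @ w1)      # first layer + ReLU
    fx = h @ w2                      # second layer (the residual F(x))
    return np.maximum(0.0, fx + x)   # shortcut addition, then ReLU

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 64))
y = residual_block(x,
                   rng.normal(scale=0.1, size=(64, 64)),
                   rng.normal(scale=0.1, size=(64, 64)))
print(y.shape)  # (4, 64)
```

If the weights are near zero, the block approximates the identity, which is what lets depth increase without the training error blowing up.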
Automated vehicle for railway track fault detection
NASA Astrophysics Data System (ADS)
Bhushan, M.; Sujay, S.; Tushar, B.; Chitra, P.
2017-11-01
For safety reasons, railroad tracks need to be inspected on a regular basis to detect physical defects or design non-compliances. Such track defects and non-compliances, if not detected within a certain interval of time, may eventually lead to severe consequences such as train derailments. To maintain safety standards, inspection must happen twice weekly by a human inspector, yet there are hundreds of thousands of miles of railroad track. Such manual inspection has many drawbacks that may result in poor inspection of the track, which in turn may cause accidents. To avoid such errors and severe accidents, this automated system is designed. Such a concept introduces automation into the railway track inspection process and can help avoid mishaps and severe accidents due to faults in the track.
Aravind, Gayatri; Lamontagne, Anouk
2017-01-01
Persons with perceptual-attentional deficits due to visuospatial neglect (VSN) after a stroke are at risk of collisions while walking in the presence of moving obstacles. The attentional burden of performing a dual task may further compromise their obstacle avoidance performance, putting them at a greater risk of collisions. The objective of this study was to compare the ability of persons with (VSN+) and without VSN (VSN-) to dual task while negotiating moving obstacles. Twenty-six stroke survivors (13 VSN+, 13 VSN-) were assessed on their ability to (a) negotiate moving obstacles while walking (locomotor single task); (b) perform a pitch-discrimination task (cognitive single task) and (c) simultaneously perform the walking and cognitive tasks (dual task). We compared the groups on locomotor (collision rates, minimum distance from obstacle and onset of strategies) and cognitive (error rates) outcomes. For both single and dual task walking, VSN+ individuals showed higher collision rates than VSN- individuals. Dual tasking caused deterioration of locomotor (more collisions, delayed onset and smaller minimum distances) and cognitive performance (higher error rate) in VSN+ individuals. In contrast, VSN- individuals maintained collision rates, increased minimum distance, but showed more cognitive errors, prioritizing their locomotor performance. Individuals with VSN demonstrate cognitive-locomotor interference under dual task conditions, which could severely compromise safety when ambulating in community environments and may explain the poor recovery of independent community ambulation in these individuals.
Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A
2010-05-01
Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors and there was inconsistency in the perceived boundaries of what constitutes an error. When asked about the definition of model error, interviewees tended to exclude matters of judgement from being errors and to focus on 'slips' and 'lapses', but discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implements the intended model, and validation meaning the process of ensuring that a model is fit for purpose. The methodological literature on verification and validation of models makes reference to the hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process.
Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling, stepping through skeleton models with experts, ensuring transparency in reporting, adopting standard housekeeping techniques, and ensuring that those parties involved in the model development process have sufficient and relevant training. Clarity and mutual understanding were identified as key issues. However, their current implementation is not framed within an overall strategy for structuring complex problems. Some of the questioning may have biased interviewees' responses, but as all interviewees were represented in the analysis, no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than specifically on model development. It should also be noted that the identified literature concerning programming errors was very narrow despite broad searches being undertaken. Published definitions of overall model validity comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models, and existing research on the cognitive basis of human error should be included in an examination of modelling errors. There is a need to develop a better understanding of the skills requirements for the development, operation and use of HTA models. Interaction between modeller and client in developing mutual understanding of a model establishes that model's significance and its warranty. This highlights that model credibility is the central concern of decision-makers using models, so it is crucial that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Recommendations for future research would be studies of verification and validation; the model development process; and identification of modifications to the modelling process with the aim of preventing the occurrence of errors and improving the identification of errors in models.
Abdel Massih, M; Planchon, V; Polet, M; Dierick, K; Mahillon, J
2016-02-01
Based on the results of 19 food microbiology proficiency testing (PT) schemes, this study aimed to assess laboratory performance, to highlight the main sources of unsatisfactory analytical results and to suggest areas of improvement. The 2009-2015 results of REQUASUD and IPH PT, involving a total of 48 laboratories, were analysed. On average, the laboratories failed to detect or enumerate foodborne pathogens in 3.0% of the tests. Thanks to close collaboration with the PT participants, the causes of outliers could be identified in 74% of the cases. The main causes of erroneous PT results were pre-analytical (handling of the samples, timing of analysis), analytical (unsuitable methods, confusion of samples, errors in colony counting or confirmation) or post-analytical mistakes (calculation and encoding of results). PT schemes are a privileged observation post from which to highlight analytical problems that would otherwise remain unnoticed. In this perspective, this comprehensive study of PT results provides insight into the sources of systematic errors encountered during the analyses. This study draws the attention of laboratories to the main causes of analytical errors and, for educational purposes, suggests practical solutions to avoid them. The observations support the hypothesis that regular participation in PT, when followed by feedback and appropriate corrective actions, can play a key role in quality improvement and provide more confidence in laboratory testing results. © 2015 The Society for Applied Microbiology.
[Tracing the map of medication errors outside the hospital environment in the Madrid Community].
Taravilla-Cerdán, Belén; Larrubia-Muñoz, Olga; de la Corte-García, María; Cruz-Martos, Encarnación
2011-12-01
Preparation of a map of medication errors reported by health professionals outside hospitals within the framework of Medication Errors Reporting for the Community of Madrid during the period 2008-2009. Retrospective observational study. Notification database of medication errors in the Community of Madrid. Notifications sent to the web page: Safe Use of Medicines and Health Products of the Community of Madrid. Information on the originator of the report, date of incident, shift, type of error and causes, outcome, patient characteristics, stage, place where it was produced and detected, whether the medication was administered, lot number, expiry date, the general nature of the drug and a brief description of the incident. There were 5470 medication errors analysed, of which 3412 came from outside hospitals (62%); these occurred mainly at the prescription stage (56.92%) and were most often reported by pharmacists. No harm was done in 92.9% of cases, harm occurred in 4.8%, and in 2.3% there was an error that could not be followed up. The centralization of information has confirmed that prescription is a vulnerable point in the chain of drug therapy. Cleaning up prescription databases, preventing the marketing of commercial presentations that give rise to confusion, enhancing information for professionals and patients, establishing standardised procedures, and avoiding ambiguous or illegible prescriptions and abbreviations are useful strategies to try to minimise these errors. Copyright © 2010 Elsevier España, S.L. All rights reserved.
Near Misses in Financial Trading: Skills for Capturing and Averting Error.
Leaver, Meghan; Griffiths, Alex; Reader, Tom
2018-05-01
The aims of this study were (a) to determine whether near-miss incidents in financial trading contain information on the operator skills and systems that detect and prevent near misses and the patterns and trends revealed by these data and (b) to explore if particular operator skills and systems are found as important for avoiding particular types of error on the trading floor. In this study, we examine a cohort of near-miss incidents collected from a financial trading organization using the Financial Incident Analysis System and report on the nontechnical skills and systems that are used to detect and prevent error in this domain. One thousand near-miss incidents are analyzed using distribution, mean, chi-square, and associative analysis to describe the data; reliability is provided. Slips/lapses (52%) and human-computer interface problems (21%) often occur alone and are the main contributors to error causation, whereas the prevention of error is largely a result of teamwork (65%) and situation awareness (46%) skills. No matter the cause of error, situation awareness and teamwork skills are used most often to detect and prevent the error. Situation awareness and teamwork skills appear universally important as a "last line" of defense for capturing error, and data from incident-monitoring systems can be analyzed in a fashion more consistent with a "Safety-II" approach. This research provides data for ameliorating risk within financial trading organizations, with implications for future risk management programs and regulation.
[National survey of blindness and avoidable visual impairment in Honduras].
Alvarado, Doris; Rivera, Belinda; Lagos, Luis; Ochoa, Mayra; Starkman, Ivette; Castillo, Mariela; Flores, Eduardo; Lansingh, Van C; Limburg, Hans; Silva, Juan Carlos
2014-11-01
To determine the prevalence of blindness and visual impairment in Honduras, its causes and the response by the health services to growing demand. A cross-sectional population study was conducted between June and December 2013 using the standard methodology of the Rapid Assessment of Avoidable Blindness. A random sample survey was done in 63 clusters of 50 individuals aged ≥ 50, representative of the country as a whole. Visual acuity (VA) was assessed using a Snellen eye chart, and the condition of the lens and posterior pole was examined by direct ophthalmoscopy. Cataract surgical coverage was calculated and an assessment made of its quality, the causes of VA < 20/60 and the barriers to accessing surgical treatment. A total of 2 999 people were examined (95.2% of the forecast total). Blindness prevalence was 1.9% (confidence interval of 95%: 1.4-2.4%) and 82.2% of these cases were avoidable. The main causes of blindness were unoperated cataracts (59.2%) and glaucoma (21.1%). Uncorrected refraction error was the main cause of severe (19.7%) and moderate (58.6%) visual impairment. Cataract surgical coverage was 75.2%. 62.5% of the eyes operated for cataracts achieved a VA > 20/60 with available correction. The main barriers against cataract surgery were cost (27.7%) and the lack of availability or difficulty of geographical access to the treatment (24.6%). The prevalence of blindness and visual impairment in Honduras is similar to that of other Latin American countries. 67% of cases of blindness could be resolved by improving the response capacity of the ophthalmological services, especially of cataract surgery, improving optician services and incorporating eye care in primary health care.
Preventing medication errors in cancer chemotherapy.
Cohen, M R; Anderson, R W; Attilio, R M; Green, L; Muller, R J; Pruemer, J M
1996-04-01
Recommendations for preventing medication errors in cancer chemotherapy are made. Before a health care provider is granted privileges to prescribe, dispense, or administer antineoplastic agents, he or she should undergo a tailored educational program and possibly testing or certification. Appropriate reference materials should be developed. Each institution should develop a dose-verification process with as many independent checks as possible. A detailed checklist covering prescribing, transcribing, dispensing, and administration should be used. Oral orders are not acceptable. All doses should be calculated independently by the physician, the pharmacist, and the nurse. Dosage limits should be established and a review process set up for doses that exceed the limits. These limits should be entered into pharmacy computer systems, listed on preprinted order forms, stated on the product packaging, placed in strategic locations in the institution, and communicated to employees. The prescribing vocabulary must be standardized. Acronyms, abbreviations, and brand names must be avoided and steps taken to avoid other sources of confusion in the written orders, such as trailing zeros. Preprinted antineoplastic drug order forms containing checklists can help avoid errors. Manufacturers should be encouraged to avoid or eliminate ambiguities in drug names and dosing information. Patients must be educated about all aspects of their cancer chemotherapy, as patients represent a last line of defense against errors. An interdisciplinary team at each practice site should review every medication error reported. Pharmacists should be involved at all sites where antineoplastic agents are dispensed. Although it may not be possible to eliminate all medication errors in cancer chemotherapy, the risk can be minimized through specific steps. Because of their training and experience, pharmacists should take the lead in this effort.
Navigation of the autonomous vehicle reverse movement
NASA Astrophysics Data System (ADS)
Rachkov, M.; Petukhov, S.
2018-02-01
The paper presents a mathematical formulation of vehicle reverse motion along a multi-link polygonal trajectory consisting of rectilinear segments interconnected by nodal points. The problem is motivated by several tasks: recovering the vehicle in the event of a communication break by returning along the trajectory already travelled, avoiding a turning manoeuvre on terrain with constraining obstacles or dangerous conditions, and performing a partial reverse run to bypass an obstacle and then continue the forward movement. The navigation method assumes that landmarks recorded during forward motion are used to construct the reverse path. To measure landmarks on board, a block of cameras is placed on the vehicle, which is controlled by the operator through a radio channel. Errors in estimating deviation from the nominal trajectory of motion are determined using multidimensional correlation analysis based on the dynamics of the lateral deviation error and the vehicle speed error. The experimental results showed relatively high accuracy in determining the state vector, providing vehicle reverse motion relative to the reference trajectory with a practically acceptable error on returning to the start point.
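The lateral deviation error tracked above is, for each rectilinear segment of the polygonal trajectory, the signed cross-track distance from the vehicle to that segment. A minimal sketch of the geometry (our own formulation for illustration, not the paper's estimator):

```python
import numpy as np

def lateral_deviation(p, a, b):
    """Signed cross-track distance from point p to the segment a->b of
    a polygonal trajectory (positive to the left of the travel
    direction a->b)."""
    d = b - a
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit left normal
    return float(n @ (p - a))

a, b = np.array([0.0, 0.0]), np.array([10.0, 0.0])   # two nodal points
print(lateral_deviation(np.array([5.0, -0.4]), a, b))  # -0.4 (right of path)
```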
Critical Neural Substrates for Correcting Unexpected Trajectory Errors and Learning from Them
ERIC Educational Resources Information Center
Mutha, Pratik K.; Sainburg, Robert L.; Haaland, Kathleen Y.
2011-01-01
Our proficiency at any skill is critically dependent on the ability to monitor our performance, correct errors and adapt subsequent movements so that errors are avoided in the future. In this study, we aimed to dissociate the neural substrates critical for correcting unexpected trajectory errors and learning to adapt future movements based on…
Refractive ocular conditions and reasons for spectacles renewal in a resource-limited economy.
Ayanniyi, Abdulkabir A; Folorunso, Francisca N; Adepoju, Feyisayo G
2010-05-07
Although a leading cause of visual impairment and a treatable cause of blindness globally, the pattern of refractive errors in many populations is unknown. This study determined the pattern of refractive ocular conditions, reasons for spectacles renewal and the effect of correction on refractive errors in a resource-limited community. A retrospective review of case records of 1,413 consecutive patients seen in a private optometry practice, Nigeria between January 2006 and July 2007. A total number of 1,216 (86.1%) patients comprising of (486, 40%) males and (730, 60%) females with a mean age of 41.02 years SD 14.19 were analyzed. The age distribution peaked at peri-adolescent and the middle age years. The main ocular complaints were spectacles loss and discomfort (412, 33.9%), blurred near vision (399, 32.8%) and asthenopia (255, 20.9%). The mean duration of ocular symptoms before consultation was 2.05 years SD 1.92. The most common refractive errors include presbyopia (431, 35.3%), hyperopic astigmatism (240, 19.7%) and presbyopia with hyperopia (276, 22.7%). Only (59, 4.9%) had myopia. Following correction, there were reductions in magnitudes of the blind (VA<3/60) and visually impaired (VA<6/18-3/60) patients by (18, 58.1%) and (89, 81.7%) respectively. The main reasons for renewal of spectacles were broken lenses/frame/scratched lenses/lenses' falling off (47, 63.4%). Adequate correction of refractive errors reduces visual impairment and avoidable blindness and to achieve optimal control of refractive errors in the community, services should be targeted at individuals in the peri-adolescent and the middle age years.
Genotyping and inflated type I error rate in genome-wide association case/control studies
Sampson, Joshua N; Zhao, Hongyu
2009-01-01
Background: One common goal of a case/control genome-wide association study (GWAS) is to find SNPs associated with a disease. Traditionally, the first step in such studies is to assign a genotype to each SNP in each subject, based on a statistic summarizing fluorescence measurements. When the distributions of the summary statistics are not well separated by genotype, the act of genotype assignment can lead to more potential problems than acknowledged by the literature. Results: Specifically, we show that the proportions of each called genotype need not equal the true proportions in the population, even as the number of subjects grows infinitely large. The called genotypes for two subjects need not be independent, even when their true genotypes are independent. Consequently, p-values from tests of association can be anti-conservative, even when the distributions of the summary statistic for the cases and controls are identical. To address these problems, we propose two new tests designed to reduce the inflation in the type I error rate caused by these problems. The first algorithm, logiCALL, measures call quality by fully exploring the likelihood profile of intensity measurements, and the second algorithm avoids genotyping by using a likelihood ratio statistic. Conclusion: Genotyping can introduce avoidable false positives in GWAS. PMID:19236714
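The core point (hard genotype calls from poorly separated intensity distributions bias the called proportions no matter how many subjects are typed) is easy to reproduce in a toy simulation; all distributions and proportions below are invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# True genotypes AA/AB/BB with known population proportions.
true_props = np.array([0.25, 0.50, 0.25])
g = rng.choice(3, size=n, p=true_props)

# Summary statistic: cluster means 0, 1, 2, poorly separated (sd = 0.6).
x = g + rng.normal(scale=0.6, size=n)

# Hard-calling by nearest cluster mean.
called = np.clip(np.round(x), 0, 2).astype(int)

print("true  :", true_props)
print("called:", np.bincount(called, minlength=3) / n)
# The heterozygote class loses mass to the homozygote classes, so the
# called proportions do not converge to the truth even as n grows.
```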
Advanced Water Vapor Lidar Detection System
NASA Technical Reports Server (NTRS)
Elsayed-Ali, Hani
1998-01-01
In the present water vapor lidar system, the detected signal is sent over long cables to a waveform digitizer in a CAMAC crate. This has the disadvantage of transmitting analog signals over a relatively long distance, where they are subject to pickup noise, leading to a decrease in the signal-to-noise ratio. Generally, errors in the measurement of water vapor with the DIAL method arise from both random and systematic sources. Systematic errors in DIAL measurements are caused by both atmospheric and instrumentation effects. The selection of the on-line alexandrite laser with a narrow linewidth, suitable intensity and high spectral purity, and its operation at the center of the water vapor lines, ensures minimal influence on the DIAL measurement from the laser spectral distribution and avoids system overloads. Random errors are caused by noise in the detected signal. Variability of the photon statistics in the lidar return signal, noise resulting from detector dark current, and noise in the background signal are the main sources of random error. This type of error can be minimized by maximizing the signal-to-noise ratio, which can be achieved in several ways. One way is to increase the laser pulse energy, by increasing its amplitude or the pulse repetition rate. Another way is to use a detector system with higher quantum efficiency and lower noise; the selection of a narrow-band optical filter that rejects most of the daytime background light while retaining high optical efficiency is also an important issue. Following acquisition of the lidar data, random errors in the DIAL measurement can be minimized by averaging the data, but this reduces the vertical and horizontal resolutions. Thus, a trade-off is necessary to achieve a balance between spatial resolution and measurement precision. Therefore, the main goal of this research effort is to increase the signal-to-noise ratio by a factor of 10 over the current system, using a newly evaluated, very low noise avalanche photodiode detector and constructing a 10 MHz waveform digitizer to replace the current CAMAC system.
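For the shot-noise-limited part of those random errors, averaging N return pulses improves the SNR roughly as sqrt(N), at the cost of temporal (and hence spatial) resolution; a minimal illustration with invented photon-count numbers:

```python
import numpy as np

def averaged_snr(signal_counts, background_counts, n_shots):
    """Shot-noise-limited SNR after averaging n_shots lidar returns:
    SNR = sqrt(n) * S / sqrt(S + B) for Poisson photon counting."""
    return np.sqrt(n_shots) * signal_counts / np.sqrt(
        signal_counts + background_counts)

for n in (1, 10, 100):
    print(n, "shots ->", round(averaged_snr(50.0, 200.0, n), 1))
# Gaining a factor of 10 in SNR costs a factor of 100 in averaging
# time, which is the resolution/precision trade-off described above.
```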
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eisenbach, Markus; Li, Ying Wai
We report a new multicanonical Monte Carlo (MC) algorithm to obtain the density of states (DOS) for physical systems with continuous state variables in statistical mechanics. Our algorithm is able to obtain an analytical form for the DOS expressed in a chosen basis set, instead of a numerical array of finite resolution as in previous variants of this class of MC methods such as the multicanonical (MUCA) sampling and Wang-Landau (WL) sampling. This is enabled by storing the visited states directly in a data set and avoiding the explicit collection of a histogram. This practice also has the advantage of avoiding undesirable artificial errors caused by the discretization and binning of continuous state variables. Our results show that this scheme is capable of obtaining converged results with a much reduced number of Monte Carlo steps, leading to a significant speedup over existing algorithms.
Caetano, Maria Joana D; Lord, Stephen R; Schoene, Daniel; Pelicioni, Paulo H S; Sturnieks, Daina L; Menant, Jasmine C
2016-05-01
A large proportion of falls in older people occur when walking. Limitations in gait adaptability might contribute to tripping; a frequently reported cause of falls in this group. To evaluate age-related changes in gait adaptability in response to obstacles or stepping targets presented at short notice, i.e.: approximately two steps ahead. Fifty older adults (aged 74±7 years; 34 females) and 21 young adults (aged 26±4 years; 12 females) completed 3 usual gait speed (baseline) trials. They then completed the following randomly presented gait adaptability trials: obstacle avoidance, short stepping target, long stepping target and no target/obstacle (3 trials of each). Compared with the young, the older adults slowed significantly in no target/obstacle trials compared with the baseline trials. They took more steps and spent more time in double support while approaching the obstacle and stepping targets, demonstrated poorer stepping accuracy and made more stepping errors (failed to hit the stepping targets/avoid the obstacle). The older adults also reduced velocity of the two preceding steps and shortened the previous step in the long stepping target condition and in the obstacle avoidance condition. Compared with their younger counterparts, the older adults exhibited a more conservative adaptation strategy characterised by slow, short and multiple steps with longer time in double support. Even so, they demonstrated poorer stepping accuracy and made more stepping errors. This reduced gait adaptability may place older adults at increased risk of falling when negotiating unexpected hazards. Copyright © 2016 Elsevier B.V. All rights reserved.
Strategic marketing, part 2: the 4 P's of marketing.
Lexa, Frank James; Berlin, Jonathan
2006-04-01
Marketing and branding are critical business functions that are often ignored or misapplied in the health care sector. Radiology professionals are facing unprecedented competition, turf battles, and other pressures. One tool that can help in meeting this onslaught is to improve your marketing efforts. Some of the most expensive mistakes in marketing (medical and otherwise) are caused by not paying attention to the basics. These "rookie" errors can be avoided by a careful review of the 4 key principles of introductory marketing: product, price, placement, and promotion. This article reviews these concepts as they relate to medical marketing.
NASA Technical Reports Server (NTRS)
Baumann, Eric; Merolla, Anthony
1988-01-01
User controls number of clock pulses to prevent burnout. New digital programmable pulser circuit in three formats: free-running, counted, and single pulse. Operates at frequencies up to 5 MHz, with no special consideration given to layout of components or to terminations. Pulser based on sequential circuit with four states and binary counter with appropriate decoding logic. Number of programmable pulses increased beyond 127 by addition of another counter and decoding logic. For very large pulse counts and/or very high frequencies, use synchronous counters to avoid errors caused by propagation delays. Invaluable tool for initial verification or diagnosis of digital or digitally controlled circuitry.
Wang, Wei; Chen, Xiyuan
2018-02-23
In view of the fact that the accuracy of the third-degree Cubature Kalman Filter (CKF) used for initial alignment under large misalignment angle conditions is insufficient, an improved fifth-degree CKF algorithm is proposed in this paper. In order to make full use of the innovation in filtering, the innovation covariance matrix is calculated recursively from the innovation sequence with an exponential fading factor. A new adaptive error covariance matrix scaling algorithm is then proposed. The Singular Value Decomposition (SVD) method is used to improve the numerical stability of the fifth-degree CKF. In order to avoid the overshoot caused by excessive scaling of the error covariance matrix during the convergence stage, the scaling scheme is terminated when the gradient of the azimuth reaches its maximum. The experimental results show that the improved algorithm has better alignment accuracy with large misalignment angles than the traditional algorithm.
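One common recursive form of a fading-memory innovation covariance estimate weights recent innovations more heavily via a forgetting factor b; this is a generic sketch of the idea, not necessarily the exact recursion used in the paper:

```python
import numpy as np

def update_innovation_cov(C_prev, v_k, k, b=0.96):
    """Recursive innovation covariance with exponential fading factor
    b in (0, 1): the weight d_k decays so recent innovations v_k
    dominate the estimate."""
    d_k = (1.0 - b) / (1.0 - b ** (k + 1))
    return (1.0 - d_k) * C_prev + d_k * np.outer(v_k, v_k)

# Toy usage: track the covariance of a 2-D innovation sequence whose
# true covariance inflates halfway through the run.
rng = np.random.default_rng(2)
C = np.zeros((2, 2))
for k in range(200):
    scale = 1.0 if k < 100 else 3.0
    v = rng.normal(scale=scale, size=2)
    C = update_innovation_cov(C, v, k)
print(np.round(C, 2))  # follows the inflated covariance after the change
```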
Effects of instrument imperfections on quantitative scanning transmission electron microscopy.
Krause, Florian F; Schowalter, Marco; Grieb, Tim; Müller-Caspary, Knut; Mehrtens, Thorsten; Rosenauer, Andreas
2016-02-01
Several instrumental imperfections of transmission electron microscopes are characterized, and their effects on the results of quantitative scanning transmission electron microscopy (STEM) are investigated and quantified using simulations. Methods to either avoid the influence of these imperfections during acquisition or to include them in reference calculations are proposed. In particular, distortions inflicted on the diffraction pattern by an image-aberration corrector can cause severe errors of more than 20% if not accounted for; a procedure for their measurement is proposed here. Furthermore, afterglow phenomena and nonlinear behavior of the detector itself can lead to incorrect normalization of measured intensities. Single electrons accidentally impinging on the detector are another source of error but can also be exploited for threshold-less calibration of STEM images to absolute dose, incident beam current determination and measurement of the detector sensitivity. Copyright © 2015 Elsevier B.V. All rights reserved.
Embarrassing Pronoun Case Errors [and] When Repeating It's Not Necessary To Use Past Tense.
ERIC Educational Resources Information Center
Arnold, George
2002-01-01
Discusses how to help journalism students avoid pronoun case errors. Notes that many students as well as broadcast journalism professionals make the error of using the past tense when referring to a previous expression or situation that remains current in meaning. (RS)
Ketamine-xylazine anesthesia causes hyperopic refractive shift in mice
Tkatchenko, Tatiana V.; Tkatchenko, Andrei V.
2010-01-01
Mice have increasingly been used as a model for studies of myopia. The key to successful use of mice for myopia research is the ability to obtain accurate measurements of the refractive status of their eyes. In order to obtain accurate measurements of refractive errors in mice, the refraction needs to be performed along the optical axis of the eye. This represents a particular challenge, because mice are very difficult to immobilize. Recently, ketamine-xylazine anesthesia has been used to immobilize mice before measuring refractive errors, in combination with tropicamide ophthalmic solution to induce mydriasis. Although these drugs have increasingly been used while refracting mice, their effects on the refractive state of the mouse eye have not yet been investigated. Therefore, we have analyzed the effects of tropicamide eye drops and ketamine-xylazine anesthesia on refraction in P40 C57BL/6J mice. We have also explored two alternative methods to immobilize mice, i.e. the use of a restraining platform and pentobarbital anesthesia. We found that tropicamide caused a very small, but statistically significant, hyperopic shift in refraction. Pentobarbital did not have any substantial effect on refractive status, whereas ketamine-xylazine caused a large and highly significant hyperopic shift in refraction. We also found that the use of a restraining platform represents a good alternative for immobilization of mice prior to refraction. Thus, our data suggest that ketamine-xylazine anesthesia should be avoided in studies of refractive development in mice and underscore the importance of providing appropriate experimental conditions when measuring refractive errors in mice. PMID:20813132
Ferrera-Tourenc, V; Lassale, B; Chiaroni, J; Dettori, I
2015-06-01
This study describes patient identification errors leading to transfusional near-misses in blood issued by the Alps Mediterranean French Blood Establishment (EFSAM) to Marseille Public Hospitals (APHM) over an 18-month period. The EFSAM consolidates 14 blood banks in southeast France. It supplies 149 hospitals and maintains a centralized database on ABO types used at all area hospitals. As an added precaution against incompatible transfusion, the APHM requires ABO testing at each admission regardless of whether the patient has an ABO record. The study goal was to determine if admission testing was warranted. Discrepancies between ABO type determined by admission testing and records in the centralized database were investigated. The root cause for each discrepancy was classified as specimen collection or patient admission error. Causes of patient admission events were further subclassified as namesake (name similarity) or impersonation (identity fraud). The incidence of ABO discrepancies was 1:2334 including a 1:3329 incidence of patient admission events. Impersonation was the main cause of identity events accounting for 90.3% of cases. The APHM's ABO control policy prevented 19 incompatible transfusions. In relation to the 48,593 packed red cell units transfused, this would have corresponded to a risk of 1:2526. Collecting and storing ABO typing results in a centralized database is an essential public health tool. It allows crosschecking of current test results with past records and avoids redundant testing. However, as patient identification remains unreliable, ABO typing at each admission is still warranted to prevent transfusion errors. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Acceptance threshold theory can explain occurrence of homosexual behaviour.
Engel, Katharina C; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra
2015-01-01
Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Chiang, Kai-Wei; Duong, Thanh Trung; Liao, Jhen-Kai
2013-01-01
The integration of an Inertial Navigation System (INS) and the Global Positioning System (GPS) is common in mobile mapping and navigation applications to seamlessly determine the position, velocity, and orientation of the mobile platform. In most INS/GPS integrated architectures, the GPS is considered to be an accurate reference with which to correct for the systematic errors of the inertial sensors, which are composed of biases, scale factors and drift. However, the GPS receiver may produce abnormal pseudo-range errors mainly caused by ionospheric delay, tropospheric delay and the multipath effect. These errors degrade the overall position accuracy of an integrated system that uses conventional INS/GPS integration strategies such as loosely coupled (LC) and tightly coupled (TC) schemes. Conventional tightly coupled INS/GPS integration schemes apply the Klobuchar model and the Hopfield model to reduce pseudo-range delays caused by ionospheric delay and tropospheric delay, respectively, but do not address the multipath problem. However, the multipath effect (from reflected GPS signals) affects the position error far more significantly in a consumer-grade GPS receiver than in an expensive, geodetic-grade GPS receiver. To avoid this problem, a new integrated INS/GPS architecture is proposed. The proposed method is described and applied in a real-time integrated system with two integration strategies, namely, loosely coupled and tightly coupled schemes, respectively. To verify the effectiveness of the proposed method, field tests with various scenarios are conducted and the results are compared with a reliable reference system. PMID:23955434
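The abstract does not spell out how the proposed architecture rejects abnormal pseudo-ranges, but a standard way to screen multipath-corrupted measurements in a tightly coupled filter is a chi-square gate on the Kalman innovation; a generic sketch, our illustration rather than the authors' design:

```python
import numpy as np
from scipy.stats import chi2

def innovation_gate(z, z_pred, H, P, R, p_false=0.01):
    """Accept a measurement only if its normalized innovation squared
    (NIS) passes a chi-square test; likely multipath/outlier
    pseudo-ranges are rejected before the Kalman update."""
    v = z - z_pred                        # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    nis = float(v.T @ np.linalg.solve(S, v))
    return nis < chi2.ppf(1.0 - p_false, df=z.size), nis

# Scalar pseudo-range residual of 15 m against a combined predicted
# 1-sigma of 3 m: NIS = 25 >> chi2 gate (~6.6 at df=1), so reject.
ok, nis = innovation_gate(np.array([15.0]), np.array([0.0]),
                          np.eye(1), np.eye(1) * 4.0, np.eye(1) * 5.0)
print(ok, nis)  # False 25.0
```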
A Transient Dopamine Signal Represents Avoidance Value and Causally Influences the Demand to Avoid
Pultorak, Katherine J.; Schelp, Scott A.; Isaacs, Dominic P.; Krzystyniak, Gregory
2018-01-01
While an extensive literature supports the notion that mesocorticolimbic dopamine plays a role in negative reinforcement, recent evidence suggests that dopamine exclusively encodes the value of positive reinforcement. In the present study, we employed a behavioral economics approach to investigate whether dopamine plays a role in the valuation of negative reinforcement. Using rats as subjects, we first applied fast-scan cyclic voltammetry (FSCV) to determine that dopamine concentration decreases with the number of lever presses required to avoid electrical footshock (i.e., the economic price of avoidance). Analysis of the rate of decay of avoidance demand curves, which depict an inverse relationship between avoidance and increasing price, allows for inference of the worth an animal places on avoidance outcomes. Rapidly decaying demand curves indicate increased price sensitivity, or low worth placed on avoidance outcomes, while slow rates of decay indicate reduced price sensitivity, or greater worth placed on avoidance outcomes. We therefore used optogenetics to assess how inducing dopamine release causally modifies the demand to avoid electrical footshock in an economic setting. Increasing release at an avoidance predictive cue made animals more sensitive to price, consistent with a negative reward prediction error (i.e., the animal perceives they received a worse outcome than expected). Increasing release at avoidance made animals less sensitive to price, consistent with a positive reward prediction error (i.e., the animal perceives they received a better outcome than expected). These data demonstrate that transient dopamine release events represent the value of avoidance outcomes and can predictably modify the demand to avoid. PMID:29766047
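Demand-curve rate of decay is typically quantified by fitting an exponential demand equation, for example the Hursh-Silberberg form log10 Q = log10 Q0 + k(exp(-alpha*Q0*C) - 1), where alpha captures price sensitivity; the abstract does not specify which curve model was fitted, so this is a generic fitting sketch with invented avoidance data:

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential_demand(C, Q0, alpha, k=2.0):
    """Hursh-Silberberg exponential demand: log consumption falls off
    with price C; larger alpha means faster decay (higher price
    sensitivity, i.e. lower valuation of the avoidance outcome)."""
    return np.log10(Q0) + k * (np.exp(-alpha * Q0 * C) - 1.0)

# Hypothetical avoidance responses per session at each lever-press
# price (values invented for illustration only).
price = np.array([1, 2, 4, 8, 16, 32], dtype=float)
log_q = np.array([1.36, 1.25, 1.05, 0.72, 0.24, -0.23])

(Q0, alpha), _ = curve_fit(exponential_demand, price, log_q,
                           p0=[30.0, 0.001])
print(f"Q0 = {Q0:.1f}, alpha = {alpha:.4f}")
```

Comparing fitted alpha values between stimulation conditions is one way to express "more sensitive to price" versus "less sensitive to price" as a single parameter.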
Franceschi, Marilisa; Scarcelli, Carlo; Niro, Valeria; Seripa, Davide; Pazienza, Anna Maria; Pepe, Giovanni; Colusso, Anna Maria; Pacilli, Luigi; Pilotto, Alberto
2008-01-01
Drug use increases with advancing age, and in older patients it is associated with an increase in adverse drug reactions (ADRs). ADRs are a primary cause of morbidity and mortality worldwide. To evaluate the prevalence, clinical characteristics and avoidability of ADR-related hospital admissions in elderly patients. From November 2004 to December 2005, all patients aged ≥65 years consecutively admitted to the Geriatric Unit of the Casa Sollievo della Sofferenza Hospital, San Giovanni Rotondo in Italy, were evaluated for enrolment in the study. ADRs were defined according to the WHO Adverse Reaction Terminology system. Drugs were classified according to the Anatomical Therapeutic Chemical classification system. The Naranjo algorithm was used to evaluate the relationship between drug use and the ADR (definite, probable, possible or doubtful) and Hallas criteria were used to evaluate the avoidability of the ADR (definitely avoidable, possibly avoidable or unavoidable). All cases of a suspected ADR were discussed by a team trained in drug safety, including three geriatricians, one clinical pharmacologist and one pharmacist. Only cases of an ADR with an agreement ≥80% were included. Of the 1756 patients observed, 102 (5.8%; 42 males, 60 females; mean age 76.5 ± 7.4 years, range 65-93 years) showed certain (6.8%) or probable (91.2%) ADR-related hospitalization. Gastrointestinal disorders (48 patients, 47.1%); platelet, bleeding and clotting disorders (20 patients, 19.6%); and cardiovascular disorders (13 patients, 12.7%) were the most frequent ADRs. NSAIDs (23.5%), oral anticoagulants (20.6%), low-dose aspirin (acetylsalicylic acid) [13.7%] and digoxin (12.7%) were the drugs most frequently involved in ADRs. Of the ADRs, 45.1% were defined as definitely avoidable, 31.4% as possibly avoidable, 18.6% as unavoidable and 4.9% as unclassifiable. Of 78 patients with definitely or possibly avoidable ADRs, 17 patients (21.8%) had received an inappropriate prescription, 29 patients (37.2%) had not received a prescription for an effective gastroprotective drug concomitantly with NSAID or low-dose aspirin treatment and 32 patients (41%) were not monitored during drug treatment. In the elderly, almost 6% of hospitalizations are ADR related. Most of these ADRs are potentially avoidable. Strategies that reduce inappropriate prescriptions and monitoring errors, as well as improving active prevention of ADRs, are needed in elderly subjects.
Guidance for Avoiding Incomplete Premanufacture Notices or Bona Fides in the New Chemicals Program
This page contains documents to help you avoid submitting an incomplete Premanufacture Notice or Bona Fide. The documents cover the chemical identity requirements and the common errors that result in incomplete submissions.
Avoiding Substantive Errors in Individualized Education Program Development
ERIC Educational Resources Information Center
Yell, Mitchell L.; Katsiyannis, Antonis; Ennis, Robin Parks; Losinski, Mickey; Christle, Christine A.
2016-01-01
The purpose of this article is to discuss major substantive errors that school personnel may make when developing students' Individualized Education Programs (IEPs). School IEP team members need to understand the importance of the procedural and substantive requirements of the IEP, have an awareness of the five serious substantive errors that IEP…
Fail Better: Toward a Taxonomy of E-Learning Error
ERIC Educational Resources Information Center
Priem, Jason
2010-01-01
The study of student error, important across many fields of educational research, has begun to attract interest in the field of e-learning, particularly in relation to usability. However, it remains unclear when errors should be avoided (as usability failures) or embraced (as learning opportunities). Many domains have benefited from taxonomies of…
Twenty Common Testing Mistakes for EFL Teachers to Avoid
ERIC Educational Resources Information Center
Henning, Grant
2012-01-01
To some extent, good testing procedure, like good language use, can be achieved through avoidance of errors. Almost any language-instruction program requires the preparation and administration of tests, and it is only to the extent that certain common testing mistakes have been avoided that such tests can be said to be worthwhile selection,…
Survey of blindness and low vision in Egbedore, South-Western Nigeria.
Kolawole, O U; Ashaye, A O; Adeoti, C O; Mahmoud, A O
2010-01-01
Developing efficient and cost-effective eye care programmes for communities in Nigeria has been hampered by inadequate and inaccurate data on blindness and low vision. To determine the prevalence and causes of blindness and low vision among adults 50 years and older in South-Western Nigeria in order to develop a viable eye care programme for the community. Twenty clusters of 60 subjects aged 50 years and older were selected by systematic random cluster sampling. Information was collected and ocular examinations were conducted on each consenting subject. Data were recorded in a specially designed questionnaire and analysed using descriptive statistical methods. Of the 1200 subjects enrolled for the study, 1183 (98.6%) were interviewed and examined. Seventy-five (6.3%) of the 1183 subjects were bilaterally blind and 223 (18.9%) had bilateral low vision according to the WHO definitions of blindness and low vision. Blindness was about 1.6 times more common in men than in women. Cataract, glaucoma and posterior segment disorders were the major causes of bilateral blindness. Bilateral low vision was mainly due to cataract, refractive errors and posterior segment disorders. The prevalence of blindness and low vision in this study population was high, and the main causes are avoidable. Elimination of avoidable blindness and low vision calls for attention and commitment from government and eye care workers in South-Western Nigeria.
Drizinsky, Jessica; Zülch, Joachim; Gibbons, Henning; Stahl, Jutta
2016-10-01
Error detection is required in order to correct or avoid imperfect behavior. Although error detection is beneficial for some people, for others it might be disturbing. We investigated Gaudreau and Thompson's (Personality and Individual Differences, 48, 532-537, 2010) model, which combines personal standards perfectionism (PSP) and evaluative concerns perfectionism (ECP). In our electrophysiological study, 43 participants performed a combination of a modified Simon task, an error awareness paradigm, and a masking task with a variation of stimulus onset asynchrony (SOA; 33, 67, and 100 ms). Interestingly, relative to low-ECP participants, high-ECP participants showed a better post-error accuracy (despite a worse classification accuracy) in the high-visibility SOA 100 condition than in the two low-visibility conditions (SOA 33 and SOA 67). Regarding the electrophysiological results, first, we found a positive correlation between ECP and the amplitude of the error positivity (Pe) under conditions of low stimulus visibility. Second, under the condition of high stimulus visibility, we observed a higher Pe amplitude for high-ECP-low-PSP participants than for high-ECP-high-PSP participants. These findings are discussed within the framework of the error-processing avoidance hypothesis of perfectionism (Stahl, Acharki, Kresimon, Völler, & Gibbons, International Journal of Psychophysiology, 97, 153-162, 2015).
Minimizing Artifacts and Biases in Chamber-Based Measurements of Soil Respiration
NASA Astrophysics Data System (ADS)
Davidson, E. A.; Savage, K.
2001-05-01
Soil respiration is one of the largest and most important fluxes of carbon in terrestrial ecosystems. The objectives of this paper are to review concerns about uncertainties of chamber-based measurements of CO2 emissions from soils, to evaluate the direction and magnitude of these potential errors, and to explain procedures that minimize these errors and biases. Disturbance of diffusion gradients causes underestimation of fluxes by less than 15% in most cases, which can be partially corrected for with curve fitting and/or minimized by using brief measurement periods. Under-pressurization or over-pressurization of the chamber caused by flow restrictions in air-circulating designs can cause large errors, but can be avoided with properly sized chamber vents and unrestricted flows. Somewhat larger pressure differentials are observed under windy conditions, and the accuracy of measurements made under such conditions needs more research. Spatial and temporal heterogeneity can be addressed with appropriate chamber sizes, numbers, and frequency of sampling. For example, means of 8 randomly chosen flux measurements from a population of 36 measurements made with 300 cm^2 chambers in tropical forests and pastures were within 25% of the full-population mean 98% of the time and within 10% of the full-population mean 70% of the time. Comparisons of chamber-based measurements with tower-based measurements of total ecosystem respiration require analysis of the scale of variation within the purported tower footprint. In a forest at Howland, Maine, the differences in soil respiration rates among very poorly drained and well drained soils were large, but they largely and fortuitously cancelled when evaluated over purported tower footprints of 600-2100 m length. While all of these potential sources of measurement error and sampling biases must be carefully considered, properly designed and deployed chambers provide a reliable means of accurately measuring soil respiration in terrestrial ecosystems.
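The 8-of-36 subsampling statistic quoted above can be reproduced in outline with a short simulation. A minimal sketch follows, assuming synthetic lognormal flux values in place of the authors' field data; only the subsample size (8) and population size (36) come from the abstract.

```python
# Hedged sketch: Monte Carlo check of how well small random subsamples of
# chamber flux measurements approximate the full-population mean, mirroring
# the 8-of-36 subsampling statistic above. Flux values are synthetic
# placeholders, not the authors' data.
import numpy as np

rng = np.random.default_rng(0)
fluxes = rng.lognormal(mean=1.5, sigma=0.4, size=36)  # hypothetical 36 chamber fluxes
full_mean = fluxes.mean()

n_trials, n_sub = 10_000, 8
sub_means = np.array([rng.choice(fluxes, n_sub, replace=False).mean()
                      for _ in range(n_trials)])

rel_err = np.abs(sub_means - full_mean) / full_mean
print(f"within 25% of full mean: {np.mean(rel_err < 0.25):.1%}")
print(f"within 10% of full mean: {np.mean(rel_err < 0.10):.1%}")
```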
Global Vision Impairment and Blindness Due to Uncorrected Refractive Error, 1990-2010.
Naidoo, Kovin S; Leasher, Janet; Bourne, Rupert R; Flaxman, Seth R; Jonas, Jost B; Keeffe, Jill; Limburg, Hans; Pesudovs, Konrad; Price, Holly; White, Richard A; Wong, Tien Y; Taylor, Hugh R; Resnikoff, Serge
2016-03-01
The purpose of this systematic review was to estimate worldwide the number of people with moderate and severe visual impairment (MSVI; presenting visual acuity <6/18, ≥3/60) or blindness (presenting visual acuity <3/60) due to uncorrected refractive error (URE), to estimate trends in prevalence from 1990 to 2010, and to analyze regional differences. The review focuses on uncorrected refractive error, which is now the most common cause of avoidable visual impairment globally. The systematic review of 14,908 relevant manuscripts from 1990 to 2010 using Medline, Embase, and WHOLIS yielded 243 high-quality, population-based cross-sectional studies which informed a meta-analysis of trends by region. The results showed that in 2010, 6.8 million (95% confidence interval [CI]: 4.7-8.8 million) people were blind (7.9% increase from 1990) and 101.2 million (95% CI: 87.88-125.5 million) vision impaired due to URE (15% increase since 1990), while the global population increased by 30% (1990-2010). The all-age age-standardized prevalence of URE blindness decreased 33% from 0.2% (95% CI: 0.1-0.2%) in 1990 to 0.1% (95% CI: 0.1-0.1%) in 2010, whereas the prevalence of URE MSVI decreased 25% from 2.1% (95% CI: 1.6-2.4%) in 1990 to 1.5% (95% CI: 1.3-1.9%) in 2010. In 2010, URE contributed 20.9% (95% CI: 15.2-25.9%) of all blindness and 52.9% (95% CI: 47.2-57.3%) of all MSVI worldwide. The contribution of URE to all MSVI ranged from 44.2 to 48.1% in all regions except South Asia, which was at 65.4% (95% CI: 62-72%). We conclude that in 2010, uncorrected refractive error continued as the leading cause of vision impairment and the second leading cause of blindness worldwide, affecting a total of 108 million people, or 1 in 90 persons.
Feature Migration in Time: Reflection of Selective Attention on Speech Errors
ERIC Educational Resources Information Center
Nozari, Nazbanou; Dell, Gary S.
2012-01-01
This article describes an initial study of the effect of focused attention on phonological speech errors. In 3 experiments, participants recited 4-word tongue twisters and focused attention on 1 (or none) of the words. The attended word was singled out differently in each experiment; participants were under instructions to avoid errors on the…
Error Patterns in Research Papers by Pacific Rim Students.
ERIC Educational Resources Information Center
Crowe, Chris
By looking for patterns of errors in the research papers of Asian students, educators can uncover pedagogical strategies to help students avoid repeating such errors. While a good deal of research has identified a number of sentence-level problems which are typical of Asian students writing in English, little attempt has been made to consider the…
Handling Errors as They Arise in Whole-Class Interactions
ERIC Educational Resources Information Center
Ingram, Jenni; Pitt, Andrea; Baldry, Fay
2015-01-01
There has been a long history of research into errors and their role in the teaching and learning of mathematics. This research has led to a change to pedagogical recommendations from avoiding errors to explicitly using them in lessons. In this study, 22 mathematics lessons were video-recorded and transcribed. A conversation analytic (CA) approach…
The evolution of Crew Resource Management training in commercial aviation
NASA Technical Reports Server (NTRS)
Helmreich, R. L.; Merritt, A. C.; Wilhelm, J. A.
1999-01-01
In this study, we describe changes in the nature of Crew Resource Management (CRM) training in commercial aviation, including its shift from cockpit to crew resource management. Validation of the impact of CRM is discussed. Limitations of CRM, including a lack of cross-cultural generality, are considered. An overarching framework that stresses error management to increase acceptance of CRM concepts is presented. The error management approach defines behavioral strategies taught in CRM as error countermeasures that are employed to avoid error, to trap errors committed, and to mitigate the consequences of error.
Self-regulation of driving and its relationship to driving ability among older adults.
Baldock, M R J; Mathias, J L; McLean, A J; Berndt, A
2006-09-01
Although it is known that older drivers limit their driving, it is not known whether this self-regulation is related to actual driving ability. A sample of 104 older drivers, aged between 60 and 92, completed a questionnaire about driving habits and attitudes. Ninety of these drivers also completed a structured on-road driving test. A measure of self-regulation was derived from drivers' self-reported avoidance of difficult driving situations. The on-road driving test involved a standard assessment used to determine fitness to drive. Driving test scores for the study were based on the number of errors committed in the driving tests, with weightings given according to the seriousness of the errors. The most commonly avoided difficult driving situations, according to responses on the questionnaire, were parallel parking and driving at night in the rain, while the least avoided situation was driving alone. Poorer performance on the driving test was not related to overall avoidance of difficult driving situations. Stronger relationships were found between driving ability and avoidance of specific difficult driving situations. These specific driving situations were the ones in which the drivers had low confidence and that the drivers were most able to avoid if they wished to.
Interplanetary Trajectories, Encke Method (ITEM)
NASA Technical Reports Server (NTRS)
Whitlock, F. H.; Wolfe, H.; Lefton, L.; Levine, N.
1972-01-01
A modified program has been developed using an improved variation of the Encke method, which avoids the accumulation of round-off errors and the numerical ambiguities arising from near-circular orbits of low inclination. A variety of interplanetary trajectory problems can be computed with maximum accuracy and efficiency.
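The round-off advantage of Encke's formulation comes from integrating only the small deviation from an analytic two-body reference orbit and evaluating the nearly cancelling 1 - (rho/r)^3 term in a numerically stable form. The sketch below illustrates this under simplifying assumptions (normalized units, a circular reference orbit, a made-up constant perturbation, and no rectification step); it is not the ITEM program itself.

```python
# Hedged sketch of an Encke-type integration: propagate only the deviation
# delta from an analytic two-body reference orbit, evaluating the nearly
# cancelling 1 - (rho/r)^3 factor via the stable f(q) form to limit round-off.
# A real implementation would also "rectify" (reset) the reference orbit
# whenever delta grows large; that step is omitted here.
import numpy as np
from scipy.integrate import solve_ivp

MU = 1.0                                  # gravitational parameter (normalized)
A_PERT = np.array([0.0, 1e-6, 0.0])       # hypothetical small perturbing acceleration

def reference(t, radius=1.0):
    """Analytic circular two-body reference orbit (stand-in for the osculating conic)."""
    w = np.sqrt(MU / radius**3)
    return radius * np.array([np.cos(w * t), np.sin(w * t), 0.0])

def encke_rhs(t, y):
    delta, ddelta = y[:3], y[3:]
    rho = reference(t)
    r = rho + delta
    r2 = r @ r
    q = (delta @ delta + 2.0 * (delta @ rho)) / r2          # rho^2/r^2 = 1 - q
    f = q * (3.0 - 3.0 * q + q * q) / (1.0 + (1.0 - q)**1.5)  # = 1 - (rho/r)^3, stable
    rho3 = (rho @ rho)**1.5
    accel = -MU / rho3 * (delta - f * r) + A_PERT
    return np.concatenate([ddelta, accel])

y0 = np.zeros(6)                          # start exactly on the reference orbit
sol = solve_ivp(encke_rhs, (0.0, 50.0), y0, rtol=1e-10, atol=1e-12)
print("final deviation from reference:", sol.y[:3, -1])
```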
NASA Astrophysics Data System (ADS)
Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro
2017-07-01
Numerical analysis of the rotation of an ultrasonically levitated droplet in a centrifugal coordinate system is discussed. A droplet levitated in an acoustic chamber is simulated using the distributed point source method and the moving particle semi-implicit method. The centrifugal coordinate system is adopted to avoid the Laplacian differential error, which causes numerical divergence or inaccuracy in global-coordinate calculations. Consequently, the duration of calculation stability is 30 times longer than that in the previous paper. Moreover, the droplet radius versus rotational acceleration characteristics show a trend similar to the theoretical and experimental values in the literature.
Three-parameter AVO crossplotting in anisotropic media
Hao, Chen; Castagna, J.P.; Brown, R.L.; Ramos, A.C.B.
2001-01-01
Amplitude versus offset (AVO) interpretation can be facilitated by crossplotting AVO intercept (A), gradient (B), and curvature (C) terms. However, anisotropy, which exists in the real world, usually complicates AVO analysis. Recognizing anisotropic behavior on AVO crossplots can help avoid AVO interpretation errors. Using a modification to a three-term (A, B, and C) approximation to the exact anisotropic reflection coefficients for transversely isotropic media, we find that anisotropy has a nonlinear effect on an A versus C crossplot yet causes slope changes and differing intercepts on A versus B or C crossplots. Empirical corrections that result in more accurate crossplot interpretation are introduced for specific circumstances.
Analyzing a Mid-Air Collision Over the Hudson River
NASA Technical Reports Server (NTRS)
Brown, Sean; Holloway, C. Michael
2012-01-01
On August 8, 2009, a private airplane collided with a sightseeing helicopter over the Hudson River near Hoboken, New Jersey. All three people aboard the airplane, the pilot and two passengers, and all six people aboard the helicopter, the pilot and five passengers, were killed. The National Transportation Safety Board report on the accident identified inherent limitations of the see-and-avoid concept, inadequate regulations, and errors by the pilots and an air traffic controller as causing or contributing to the accident. This paper presents the results of analyzing the accident using the Systems-Theoretic Accident Model and Processes (STAMP) approach to determining accident causation.
Dissociation of item and source memory in rhesus monkeys.
Basile, Benjamin M; Hampton, Robert R
2017-09-01
Source memory, or memory for the context in which a memory was formed, is a defining characteristic of human episodic memory and source memory errors are a debilitating symptom of memory dysfunction. Evidence for source memory in nonhuman primates is sparse despite considerable evidence for other types of sophisticated memory and the practical need for good models of episodic memory in nonhuman primates. A previous study showed that rhesus monkeys confused the identity of a monkey they saw with a monkey they heard, but only after an extended memory delay. This suggests that they initially remembered the source - visual or auditory - of the information but forgot the source as time passed. Here, we present a monkey model of source memory that is based on this previous study. In each trial, monkeys studied two images, one that they simply viewed and touched and the other that they classified as a bird, fish, flower, or person. In a subsequent memory test, they were required to select the image from one source but avoid the other. With training, monkeys learned to suppress responding to images from the to-be-avoided source. After longer memory intervals, monkeys continued to show reliable item memory, discriminating studied images from distractors, but made many source memory errors. Monkeys discriminated source based on study method, not study order, providing preliminary evidence that our manipulation of retention interval caused errors due to source forgetting instead of source confusion. Finally, some monkeys learned to select remembered images from either source on cue, showing that they did indeed remember both items and both sources. This paradigm potentially provides a new model to study a critical aspect of episodic memory in nonhuman primates. Copyright © 2017 Elsevier B.V. All rights reserved.
Unmanned aircraft system sense and avoid integrity and continuity
NASA Astrophysics Data System (ADS)
Jamoom, Michael B.
This thesis describes new methods to guarantee safety of sense and avoid (SAA) functions for Unmanned Aircraft Systems (UAS) by evaluating integrity and continuity risks. Previous SAA efforts focused on relative safety metrics, such as risk ratios, comparing the risk of using an SAA system versus not using it. The methods in this thesis evaluate integrity and continuity risks as absolute measures of safety, as is the established practice in commercial aircraft terminal area navigation applications. The main contribution of this thesis is a derivation of a new method, based on a standard intruder relative constant velocity assumption, that uses hazard state estimates and estimate error covariances to establish (1) the integrity risk of the SAA system not detecting imminent loss of "well clear," which is the time and distance required to maintain safe separation from intruder aircraft, and (2) the probability of false alert, the continuity risk. Another contribution is applying these integrity and continuity risk evaluation methods to set quantifiable and certifiable safety requirements on sensors. A sensitivity analysis uses this methodology to evaluate the impact of sensor errors on integrity and continuity risks. The penultimate contribution is an integrity and continuity risk evaluation where the estimation model is refined to address realistic intruder relative linear accelerations, which goes beyond the current constant velocity standard. The final contribution is an integrity and continuity risk evaluation addressing multiple intruders. This evaluation is a new innovation-based method to determine the risk of mis-associating intruder measurements. A mis-association occurs when the SAA system incorrectly associates a measurement to the wrong intruder, causing large errors in the estimated intruder trajectories. The new methods described in this thesis can help ensure safe encounters between aircraft and enable SAA sensor certification for UAS integration into the National Airspace System.
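As a one-dimensional illustration of how hazard state estimates and their error covariances translate into integrity and continuity numbers, the sketch below bounds the probability that the true miss distance violates a well-clear threshold given a Gaussian estimate, and alerts when that bound exceeds a requirement. All thresholds and numbers are invented for illustration; the thesis' multi-dimensional derivation is not reproduced here.

```python
# Hedged 1-D illustration (not the thesis' full method): given a Gaussian
# estimate of predicted miss distance at closest approach, bound the risk
# that the true miss distance violates the well-clear threshold, and alert
# when that risk exceeds an assumed integrity requirement.
from math import erf, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def loss_of_well_clear_risk(d_hat, sigma, r_well_clear):
    """P(true miss distance < well-clear radius), from estimate + 1-sigma error."""
    return norm_cdf((r_well_clear - d_hat) / sigma)

d_hat, sigma = 1500.0, 200.0   # estimated miss distance and 1-sigma error [m] (made up)
r_wc = 1200.0                  # assumed horizontal well-clear radius [m]
requirement = 1e-4             # assumed integrity risk requirement

risk = loss_of_well_clear_risk(d_hat, sigma, r_wc)
print(f"integrity risk bound: {risk:.2e} -> alert: {risk > requirement}")
```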
Aberg, Kristoffer Carl; Doell, Kimberly C; Schwartz, Sophie
2015-10-28
Some individuals are better at learning about rewarding situations, whereas others are inclined to avoid punishments (i.e., enhanced approach or avoidance learning, respectively). In reinforcement learning, action values are increased when outcomes are better than predicted (positive prediction errors [PEs]) and decreased for worse than predicted outcomes (negative PEs). Because actions with high and low values are approached and avoided, respectively, individual differences in the neural encoding of PEs may influence the balance between approach-avoidance learning. Recent correlational approaches also indicate that biases in approach-avoidance learning involve hemispheric asymmetries in dopamine function. However, the computational and neural mechanisms underpinning such learning biases remain unknown. Here we assessed hemispheric reward asymmetry in striatal activity in 34 human participants who performed a task involving rewards and punishments. We show that the relative difference in reward response between hemispheres relates to individual biases in approach-avoidance learning. Moreover, using a computational modeling approach, we demonstrate that better encoding of positive (vs negative) PEs in dopaminergic midbrain regions is associated with better approach (vs avoidance) learning, specifically in participants with larger reward responses in the left (vs right) ventral striatum. Thus, individual dispositions or traits may be determined by neural processes acting to constrain learning about specific aspects of the world. Copyright © 2015 the authors 0270-6474/15/3514491-10$15.00/0.
Screening for visual impairment: Outcome among schoolchildren in a rural area of Delhi
Rustagi, Neeti; Uppal, Yogesh; Taneja, Devender K
2012-01-01
Background: Uncorrected refractive errors are the main cause of vision impairment in school-aged children. The current study focuses on the effectiveness of school eye screening in correcting refractive errors. Objectives: 1. To study the magnitude of visual impairment among school children. 2. To assess the compliance of students with refraction testing, and the procurement and use of spectacles. Materials and Methods: An intervention study was conducted in schools of the north-west district of Delhi, in the rural field practice area of a medical college. Students studying in five government schools in the field practice area were chosen as the study subjects. Results: Of the 1123 students enrolled, 1075 (95.7%) were screened for refractive errors. Low vision (visual acuity < 20/60) in the better eye was observed in 31 (2.9%) children and blindness (visual acuity < 20/200) in 10 (0.9%) children. Compliance with referral for refraction was very low, as only 51 (41.5%) of 123 students could be tested for refraction. Of 48 students, 34 (70.8%) procured spectacles from family resources, but regular use was found among only 10 (29.4%) students. The poor compliance among students stems from various myths and perceptions regarding the use of spectacles prevalent in the community. Conclusion: Refractive error is an important cause of avoidable blindness among rural school children. Behavior change communication among rural masses by spreading awareness about eye health, and conducting operational research at school and community level to involve parent-teacher associations and senior students in motivating students to use spectacles, may improve utilization of existing eye health services in rural areas. PMID:22569381
ERIC Educational Resources Information Center
El-Banna, Adel I.; Naeem, Marwa A.
2016-01-01
This research work aimed at making use of Machine Translation to help students avoid some syntactic, semantic and pragmatic common errors in translation from English into Arabic. Participants were a hundred and five freshmen who studied the "Translation Common Errors Remedial Program" prepared by the researchers. A testing kit that…
An adaptive angle-doppler compensation method for airborne bistatic radar based on PAST
NASA Astrophysics Data System (ADS)
Hang, Xu; Jun, Zhao
2018-05-01
Adaptive angle-Doppler compensation methods extract the requisite information adaptively from the data themselves, thus avoiding the performance degradation caused by inertial system errors. However, such methods require estimation and eigendecomposition of the sample covariance matrix, which has a high computational complexity and limits real-time application. In this paper, an adaptive angle-Doppler compensation method based on projection approximation subspace tracking (PAST) is studied. The method uses cyclic iterative processing to quickly estimate the position of the spectral center of the maximum eigenvector of each range cell, so that the computational burden of matrix estimation and eigendecomposition is avoided; the spectral centers of all range cells are then aligned by two-dimensional compensation. Simulation results show that the proposed method can effectively reduce the non-homogeneity of airborne bistatic radar, and that its performance is similar to that of eigendecomposition algorithms while the computational load is clearly reduced and the method is easy to implement.
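The core PAST recursion that replaces explicit covariance estimation and eigendecomposition can be written in a few lines. The following is a minimal illustration of a Yang-style PAST update tracking a dominant subspace from synthetic snapshots; the radar-specific spectral-center estimation and two-dimensional compensation steps are not shown, and all dimensions and data are assumptions.

```python
# Hedged sketch of the PAST (projection approximation subspace tracking)
# recursion: track the dominant r-dimensional signal subspace of snapshots
# x[k] with O(n*r) work per update, avoiding explicit covariance estimation
# and eigendecomposition.
import numpy as np

def past_update(W, P, x, beta=0.99):
    """One PAST iteration with forgetting factor beta."""
    y = W.conj().T @ x                        # project snapshot onto current subspace
    h = P @ y
    g = h / (beta + np.vdot(y, h).real)       # gain vector
    P = (P - np.outer(g, h.conj())) / beta    # inverse-correlation update
    e = x - W @ y                             # projection residual
    W = W + np.outer(e, g.conj())             # subspace basis update
    return W, P

rng = np.random.default_rng(1)
n, r, n_snap = 16, 2, 500
# synthetic snapshots: two fixed directions plus noise (illustrative data)
S = rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))
W = np.linalg.qr(rng.standard_normal((n, r)) + 0j)[0]
P = np.eye(r, dtype=complex)
for _ in range(n_snap):
    x = S @ (rng.standard_normal(r) + 1j * rng.standard_normal(r)) \
        + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    W, P = past_update(W, P, x)
# W now approximately spans the dominant signal subspace
print("tracked subspace shape:", W.shape)
```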
Avoiding common pitfalls in qualitative data collection and transcription.
Easton, K L; McComish, J F; Greenberg, R
2000-09-01
The subjective nature of qualitative research necessitates scrupulous scientific methods to ensure valid results. Although qualitative methods such as grounded theory, phenomenology, and ethnography yield rich data, consumers of research need to be able to trust the findings reported in such studies. Researchers are responsible for establishing the trustworthiness of qualitative research through a variety of ways. Specific challenges faced in the field can seriously threaten the dependability of the data. However, by minimizing potential errors that can occur when doing fieldwork, researchers can increase the trustworthiness of the study. The purpose of this article is to present three of the pitfalls that can occur in qualitative research during data collection and transcription: equipment failure, environmental hazards, and transcription errors. Specific strategies to minimize the risk for avoidable errors will be discussed.
Lobaugh, Lauren M Y; Martin, Lizabeth D; Schleelein, Laura E; Tyler, Donald C; Litman, Ronald S
2017-09-01
Wake Up Safe is a quality improvement initiative of the Society for Pediatric Anesthesia that contains a deidentified registry of serious adverse events occurring in pediatric anesthesia. The aim of this study was to describe and characterize reported medication errors to find common patterns amenable to preventative strategies. In September 2016, we analyzed approximately 6 years' worth of medication error events reported to Wake Up Safe. Medication errors were classified by: (1) medication category; (2) error type by phase of administration: prescribing, preparation, or administration; (3) bolus or infusion error; (4) provider type and level of training; (5) harm as defined by the National Coordinating Council for Medication Error Reporting and Prevention; and (6) perceived preventability. From 2010 to the time of our data analysis in September 2016, 32 institutions had joined and submitted data on 2087 adverse events during 2,316,635 anesthetics. These reports contained details of 276 medication errors, which comprised the third highest category of events behind cardiac- and respiratory-related events. Medication errors most commonly involved opioids and sedative/hypnotics. When categorized by phase of handling, 30 events occurred during preparation, 67 during prescribing, and 179 during administration. The most common error type was accidental administration of the wrong dose (N = 84), followed by syringe swap (accidental administration of the wrong syringe, N = 49). Fifty-seven (21%) reported medication errors involved medications prepared as infusions as opposed to one-time bolus administrations. Medication errors were committed by all types of anesthesia providers, most commonly by attendings. Over 80% of reported medication errors reached the patient and more than half of these events caused patient harm. Fifteen events (5%) required a life-sustaining intervention. Nearly all cases (97%) were judged to be either likely or certainly preventable. Our findings characterize the most common types of medication errors in pediatric anesthesia practice and provide guidance on future preventative strategies. Many of these errors will be almost entirely preventable with the use of prefilled medication syringes to avoid accidental ampule swap, bar-coding at the point of medication administration to prevent syringe swap and to confirm the proper dose, and 2-person checking of medication infusions for accuracy.
Analysis of Error Sources in STEP Astrometry
NASA Astrophysics Data System (ADS)
Liu, S. Y.; Liu, J. C.; Zhu, Z.
2017-11-01
The space telescope Search for Terrestrial Exo-Planets (STEP) employs sub-pixel technology that ensures an astrometric accuracy on the focal plane of the order of 1 μas. This level of astrometric precision is promising for detecting Earth-like planets beyond the solar system. In this paper, we analyze the influence of some key factors, including errors in the stellar proper motions, parallax, the optical center of the system, and the velocities and positions of the satellite, on the detection of exo-planets. We propose a relative angular distance method to evaluate the non-linear terms in stellar distance caused by possibly existing exo-planets. This method avoids the direct influence of measurement errors in the positions and proper motions of the reference stars. Supposing that there are eight reference stars in the same field of view and a star with a planetary system, we simulate five years of observational data and use the least squares method to obtain the parameters of the planet's orbit. Our results show that the method is robust for detecting terrestrial planets at the 1 μas precision of STEP.
Won, Jongsung; Cheng, Jack C P; Lee, Ghang
2016-03-01
Waste generated in construction and demolition processes comprised around 50% of the solid waste in South Korea in 2013. Many cases show that design validation based on building information modeling (BIM) is an effective means to reduce the amount of construction waste since construction waste is mainly generated due to improper design and unexpected changes in the design and construction phases. However, the amount of construction waste that could be avoided by adopting BIM-based design validation has been unknown. This paper aims to estimate the amount of construction waste prevented by a BIM-based design validation process based on the amount of construction waste that might be generated due to design errors. Two project cases in South Korea were studied in this paper, with 381 and 136 design errors detected, respectively during the BIM-based design validation. Each design error was categorized according to its cause and the likelihood of detection before construction. The case studies show that BIM-based design validation could prevent 4.3-15.2% of construction waste that might have been generated without using BIM. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaeger, Ryan T.; Wollaber, Allan B.; Urbatsch, Todd J.
2016-02-23
Here, the non-linear thermal radiative-transfer equations can be solved in various ways. One popular way is the Fleck and Cummings Implicit Monte Carlo (IMC) method. The IMC method was originally formulated with piecewise-constant material properties. For domains with a coarse spatial grid and large temperature gradients, an error known as numerical teleportation may cause artificially non-causal energy propagation and consequently an inaccurate material temperature. Source tilting is a technique to reduce teleportation error by constructing sub-spatial-cell (or sub-cell) emission profiles from which IMC particles are sampled. Several source tilting schemes exist, but some allow teleportation error to persist. We examine the effect of source tilting in problems with a temperature-dependent opacity. Within each cell, the opacity is evaluated continuously from a temperature profile implied by the source tilt. For IMC, this is a new approach to modeling the opacity. We find that applying source tilting along with a source tilt-dependent opacity can introduce another dominant error that overly inhibits thermal wavefronts. We show that we can mitigate both teleportation and under-propagation errors if we discretize the temperature equation with a linear discontinuous (LD) trial space. Our method is for opacities ~ 1/T^3, but we formulate and test a slight extension for opacities ~ 1/T^3.5, where T is temperature. We find our method avoids errors that can be incurred by IMC with continuous source tilt constructions and piecewise-constant material temperature updates.
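To make the sub-cell emission profile idea concrete, the sketch below samples particle emission positions within a single cell from a linear (tilted) profile by inverting its CDF. The tilt coefficients are illustrative stand-ins for values a temperature reconstruction would supply; the paper's LD trial space and opacity coupling are not reproduced.

```python
# Hedged sketch of sub-cell "source tilting": sample IMC particle emission
# positions within one spatial cell from a linear emission profile
# p(x) proportional to a + b*x on [0, 1], via inverse-CDF sampling.
# The tilt coefficients a, b are hypothetical.
import numpy as np

def sample_tilted(a, b, u):
    """Invert the CDF of p(x) = (a + b*x)/(a + b/2) on [0,1]; requires p >= 0."""
    if abs(b) < 1e-12:                     # flat profile: uniform sampling
        return u
    norm = a + 0.5 * b
    # solve a*x + b*x^2/2 = u*norm for x in [0, 1] (positive root)
    disc = a * a + 2.0 * b * norm * u
    return (-a + np.sqrt(disc)) / b

rng = np.random.default_rng(2)
a, b = 1.0, 0.8                            # hypothetical tilt: hotter right edge
xs = sample_tilted(a, b, rng.random(100_000))
hist, _ = np.histogram(xs, bins=4, range=(0.0, 1.0), density=True)
print("sampled density per quarter-cell:", np.round(hist, 2))
```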
Cracked shaft detection on large vertical nuclear reactor coolant pump
NASA Technical Reports Server (NTRS)
Jenkins, L. S.
1985-01-01
Due to the difficulty and radiation exposure associated with examination of the internals of large commercial nuclear reactor coolant pumps, it is necessary to be able to diagnose the cause of an excessive vibration problem quickly without resorting to extensive trial-and-error efforts. Consequently, it is necessary to make maximum use of all available data to develop a consistent theory which locates the problem area in the machine. This type of approach was taken at Three Mile Island, Unit #1, in February 1984 to identify and locate the cause of a continuously climbing vibration level of the pump shaft. The data gathered necessitated some in-depth knowledge of the pump internals to provide proper interpretation and avoid misleading conclusions. Therefore, the raw data included more than just the vibration characteristics. Pertinent details of the data gathered are shown and are necessary and sufficient to show that the cause of the observed vibration problem could logically only be a cracked pump shaft in the shaft overhang below the pump bearing.
Human dignity and the future of the voluntary active euthanasia debate in South Africa.
Jordaan, Donrich W
2017-04-25
The issue of voluntary active euthanasia was thrust into the public policy arena by the Stransham-Ford lawsuit. The High Court legalised voluntary active euthanasia - however, ostensibly only in the specific case of Mr Stransham-Ford. The Supreme Court of Appeal overturned the High Court judgment on technical grounds, not on the merits. This means that in future the courts can be approached again to consider the legalisation of voluntary active euthanasia. As such, Stransham-Ford presents a learning opportunity for both sides of the legalisation divide. In particular, conceptual errors pertaining to human dignity were made in Stransham-Ford, and can be avoided in future. In this article, I identify these errors and propose the following three corrective principles to inform future debate on the subject: (i) human dignity is violable; (ii) human suffering violates human dignity; and (iii) the 'natural' causes of suffering due to terminal illness do not exclude the application of human dignity.
Tiwari, Ruchi; Tsapepas, Demetra S; Powell, Jaclyn T; Martin, Spencer T
2013-01-01
Healthcare organizations continue to adopt information technologies with clinical decision support (CDS) to prevent potential medication-related adverse drug events. End-users who are unfamiliar with certain high-risk patient populations are at an increased risk of unknowingly causing medication errors. The following case describes a heart transplant recipient exposed to supra-therapeutic concentrations of tacrolimus during co-administration of ritonavir as a result of vendor supplied CDS tools that omitted an interaction alert. After review of 4692 potential tacrolimus-based DDIs between 329 different drug pairs supplied by vendor CDS, the severity of 20 DDIs were downgraded and the severity of 62 were upgraded. The need for institution-specific customization of vendor-provided CDS is paramount to ensure avoidance of medication errors. Individualized care will become more important as patient populations and institutions become more specialized. In the future, vendors providing integrated CDS tools must be proactive in developing institution-specific and easily customizable CDS tools.
Hsu, Nina S.; Novick, Jared M.
2016-01-01
Speech unfolds swiftly, yet listeners keep pace by rapidly assigning meaning to what they hear. Sometimes, though, initial interpretations turn out wrong. How do listeners revise misinterpretations of language input moment-by-moment, to avoid comprehension errors? Cognitive control may play a role by detecting when processing has gone awry and then initiating behavioral adjustments accordingly. However, no research has investigated a cause-and-effect interplay between cognitive control engagement and overriding erroneous interpretations in real time. Using a novel cross-task paradigm, we show that Stroop-conflict detection, which mobilizes cognitive control procedures, subsequently facilitates listeners' incremental processing of temporarily ambiguous spoken instructions that induce brief misinterpretation. When instructions followed Stroop-incongruent versus Stroop-congruent items, listeners' eye movements to objects in a scene reflected more transient consideration of the false interpretation and earlier recovery of the correct one. Comprehension errors also decreased. Cognitive control engagement therefore accelerates sentence re-interpretation processes, even as linguistic input is still unfolding. PMID:26957521
Wang, Wei; Chen, Xiyuan
2018-01-01
In view of the fact that the accuracy of the third-degree Cubature Kalman Filter (CKF) used for initial alignment under large misalignment angle conditions is insufficient, an improved fifth-degree CKF algorithm is proposed in this paper. In order to make full use of the innovation in filtering, the innovation covariance matrix is calculated recursively from the innovation sequence with an exponential fading factor. A new adaptive error covariance matrix scaling algorithm is then proposed. The Singular Value Decomposition (SVD) method is used to improve the numerical stability of the fifth-degree CKF. In order to avoid the overshoot caused by excessive scaling of the error covariance matrix during the convergence stage, the scaling scheme is terminated when the gradient of the azimuth reaches its maximum. The experimental results show that the improved algorithm has better alignment accuracy with large misalignment angles than the traditional algorithm. PMID:29473912
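One place where SVD improves numerical stability is in forming the covariance square root used to spread cubature points, since a Cholesky factorization fails when the error covariance loses positive definiteness. A minimal sketch follows, using the standard third-degree point set for brevity; the paper's fifth-degree rule, fading factor, and scaling-termination logic are not reproduced, and all numbers are illustrative.

```python
# Hedged sketch: factor the error covariance with an SVD instead of a
# Cholesky decomposition when generating cubature points, keeping the
# filter usable if P becomes nearly singular or slightly indefinite.
import numpy as np

def svd_sqrt(P):
    """Covariance square root via SVD; tolerant of near-singular P."""
    U, s, _ = np.linalg.svd((P + P.T) / 2.0)    # symmetrize first
    return U @ np.diag(np.sqrt(np.maximum(s, 0.0)))

def cubature_points(x, P):
    """Third-degree spherical-radial cubature points around mean x."""
    n = x.size
    S = svd_sqrt(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # 2n unit directions
    return x[:, None] + S @ xi                             # shape (n, 2n)

x = np.array([0.0, 1.0, 0.5])
P = np.array([[1.0, 0.2, 0.0],
              [0.2, 0.5, 0.1],
              [0.0, 0.1, 1e-12]])       # nearly singular: Cholesky may fail here
print("cubature points:\n", np.round(cubature_points(x, P), 3))
```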
Punched belt hole position deviation analysis of float type water level gauge
NASA Astrophysics Data System (ADS)
Mao, Chunlei; Wang, Tao; Fu, Weijie; Li, Lianhui
2018-03-01
The key part of the float-type water level gauge is the perforated (punched) belt. The size and tolerance requirements of its hole positions are: (1) alternation of 100+0.2 and 100-0.2, (2) 200±0.1, (3) 1000±0.15, (4) 10000±0.2. For a single hole the position requirement alternates between 100+0.2 and 100-0.2; for a double hole it is 200±0.1. It is best to ensure that the hole position error tends in a single direction: when the punched belt moves linearly with the rotating water wheel, the hole position error should increase or decrease in one direction only; otherwise the water level nail gradually approaches the edge of the hole, contacts the edge, and finally lifts the punched belt. This paper collects and analyses data from the laser drilling process for the steel belt. It is found that this process cannot meet the tolerance requirements, whereas a double-stamping process with an adjustable cylindrical pin is feasible.
Evaluation of alignment error due to a speed artifact in stereotactic ultrasound image guidance.
Salter, Bill J; Wang, Brian; Szegedi, Martin W; Rassiah-Szegedi, Prema; Shrieve, Dennis C; Cheng, Roger; Fuss, Martin
2008-12-07
Ultrasound (US) image guidance systems used in radiotherapy are typically calibrated for soft tissue applications, thus introducing errors in depth-from-transducer representation when used in media with a different speed of sound propagation (e.g. fat). This error is commonly referred to as the speed artifact. In this study we utilized a standard US phantom to demonstrate the existence of the speed artifact when using a commercial US image guidance system to image through layers of simulated body fat, and we compared the results with calculated/predicted values. A general purpose US phantom (speed of sound (SOS) = 1540 m s^-1) was imaged on a multi-slice CT scanner at a 0.625 mm slice thickness and 0.5 mm x 0.5 mm axial pixel size. Target-simulating wires inside the phantom were contoured and later transferred to the US guidance system. Layers of various thickness (1-8 cm) of commercially manufactured fat-simulating material (SOS = 1435 m s^-1) were placed on top of the phantom to study the depth-related alignment error. In order to demonstrate that the speed artifact is not caused by adding additional layers on top of the phantom, we repeated these measurements in an identical setup using commercially manufactured tissue-simulating material (SOS = 1540 m s^-1) for the top layers. For the fat-simulating material used in this study, we observed the magnitude of the depth-related alignment errors resulting from the speed artifact to be 0.7 mm cm^-1 of fat imaged through. The measured alignment errors caused by the speed artifact agreed with the calculated values within one standard deviation for all of the different thicknesses of fat-simulating material studied here. We demonstrated the depth-related alignment error due to the speed artifact when using US image guidance for radiation treatment alignment and note that the presence of fat causes the target to be aliased to a depth greater than it actually is. For typical US guidance systems in use today, this will lead to delivery of the high dose region at a position slightly posterior to the intended region for a supine patient. When possible, care should be taken to avoid imaging through a thick layer of fat for larger patients in US alignments or, if unavoidable, the spatial inaccuracies introduced by the artifact should be considered by the physician during the formulation of the treatment plan.
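The roughly 0.7 mm per cm figure follows directly from the mismatch between the calibration speed and the speed of sound in fat: the system converts echo time to depth with the soft-tissue speed, so a slower layer makes targets appear deeper. A small sketch of the arithmetic, using only the speeds quoted in the abstract:

```python
# Hedged sketch of the speed artifact: an ultrasound system calibrated for
# soft tissue (1540 m/s) maps echo time to depth, so a fat layer with a
# slower speed of sound (1435 m/s above) makes targets appear deeper.
V_ASSUMED = 1540.0   # m/s, soft-tissue calibration speed
V_FAT = 1435.0       # m/s, fat-simulating material

def apparent_depth_error_mm(fat_thickness_cm):
    """Extra apparent depth (mm) from imaging through a fat layer."""
    d_mm = 10.0 * fat_thickness_cm
    # round-trip time through fat is d/v_fat (each way);
    # the system converts that time back to depth using v_assumed
    return d_mm * (V_ASSUMED / V_FAT - 1.0)

for cm in (1, 4, 8):
    print(f"{cm} cm of fat -> ~{apparent_depth_error_mm(cm):.1f} mm deeper")
# ~0.73 mm per cm, consistent with the measured 0.7 mm/cm figure
```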
NASA Technical Reports Server (NTRS)
Mishchenko, M. I.; Lacis, A. A.; Travis, L. D.
1994-01-01
Although neglecting polarization and replacing the rigorous vector radiative transfer equation by its approximate scalar counterpart has no physical justification, it is a widely used simplification when the incident light is unpolarized and only the intensity of the reflected light is to be computed. We employ accurate vector and scalar multiple-scattering calculations to perform a systematic study of the errors induced by the neglect of polarization in radiance calculations for a homogeneous, plane-parallel Rayleigh-scattering atmosphere (with and without depolarization) above a Lambertian surface. Specifically, we calculate percent errors in the reflected intensity for various directions of light incidence and reflection, optical thicknesses of the atmosphere, single-scattering albedos, depolarization factors, and surface albedos. The numerical data displayed can be used to decide whether or not the scalar approximation may be employed depending on the parameters of the problem. We show that the errors decrease with increasing depolarization factor and/or increasing surface albedo. For conservative or nearly conservative scattering and small surface albedos, the errors are maximum at optical thicknesses of about 1. The calculated errors may be too large for some practical applications, and, therefore, rigorous vector calculations should be employed whenever possible. However, if approximate scalar calculations are used, we recommend avoiding geometries involving phase angles equal or close to 0 deg and 90 deg, where the errors are especially significant. We propose a theoretical explanation of the large vector/scalar differences in the case of Rayleigh scattering. According to this explanation, the differences are caused by the particular structure of the Rayleigh scattering matrix and come from lower-order (except first-order) light scattering paths involving right scattering angles and right-angle rotations of the scattering plane.
Prevalence of refractive errors in children in India: a systematic review.
Sheeladevi, Sethu; Seelam, Bharani; Nukella, Phanindra B; Modi, Aditi; Ali, Rahul; Keay, Lisa
2018-04-22
Uncorrected refractive error is an avoidable cause of visual impairment which affects children in India. The objective of this review is to estimate the prevalence of refractive errors in children ≤ 15 years of age. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines were followed in this review. A detailed literature search was performed to include all population and school-based studies published from India between January 1990 and January 2017, using the Cochrane Library, Medline and Embase. The quality of the included studies was assessed based on a critical appraisal tool developed for systematic reviews of prevalence studies. Four population-based studies and eight school-based studies were included. The overall prevalence of refractive error per 100 children was 8.0 (CI: 7.4-8.1) and in schools it was 10.8 (CI: 10.5-11.2). The population-based prevalence of myopia, hyperopia (≥ +2.00 D) and astigmatism was 5.3 per cent, 4.0 per cent and 5.4 per cent, respectively. Combined refractive error and myopia alone were higher in urban areas compared to rural areas (odds ratio [OR]: 2.27 [CI: 2.09-2.45]) and (OR: 2.12 [CI: 1.79-2.50]), respectively. The prevalence of combined refractive errors and myopia alone in schools was higher among girls than boys (OR: 1.2 [CI: 1.1-1.3] and OR: 1.1 [CI: 1.1-1.2]), respectively. However, hyperopia was more prevalent among boys than girls in schools (OR: 2.1 [CI: 1.8-2.4]). Refractive error in children in India is a major public health problem and requires concerted efforts from various stakeholders including the health care workforce, education professionals and parents, to manage this issue. © 2018 Optometry Australia.
[The quality of medication orders--can it be improved?].
Vaknin, Ofra; Wingart-Emerel, Efrat; Stern, Zvi
2003-07-01
Medication errors are a common cause of morbidity and mortality among patients. Medication administration in hospitals is a complicated procedure with the possibility of error at each step. Errors are most commonly found at the prescription and transcription stages, although it is known that most errors can easily be avoided through strict adherence to standardized procedure guidelines. In an examination of medication errors reported in the hospital in the year 2000, we found that 38% were reported to have resulted from transcription errors. In 2001, the hospital initiated a program designed to identify faults in the processing of orders, in an effort to improve the quality and effectiveness of the medication administration process. As part of this program, it was decided to check and evaluate the quality of written doctors' orders and the transcription of those orders by the nursing staff, in various hospital units. The study was conducted using a questionnaire which checked compliance with hospital standards for the medication administration process, as applied to 6 units over the course of 8 weeks. Results of the survey showed poor compliance with guidelines on the part of doctors and nurses. Only 18% of doctors' orders in the study and 37% of the nurses' transcriptions were written according to standards. The Emergency Department showed even lower compliance, with only 3% of doctors' orders and 25% of nurses' transcriptions complying with standards. As a result of this study, it was decided to initiate an intensive in-service teaching course to refresh the staff's knowledge of medication administration guidelines. In the future, it is recommended that hand-written orders be replaced by computerized orders in an effort to limit the chance of error.
NASA Astrophysics Data System (ADS)
Byun, Do-Seong; Hart, Deirdre E.
2017-04-01
Regional and/or coastal ocean models can use tidal current harmonic forcing, together with tidal harmonic forcing along open boundaries, in order to successfully simulate tides and tidal currents. These inputs can be freely generated using online open-access data, but the data produced are not always at the resolution required for regional or coastal models. Subsequent interpolation procedures can produce tidal current forcing data errors for parts of the world's coastal ocean where tidal ellipse inclinations and phases move across the invisible mathematical "boundaries" between 359° and 0° (or 179° and 0°). In nature, such "boundaries" are in fact smooth transitions, but if these mathematical "boundaries" are not treated correctly during interpolation, they can produce inaccurate input data and hamper the accurate simulation of tidal currents in regional and coastal ocean models. These avoidable errors arise due to procedural shortcomings involving vector embodiment problems (i.e., how a vector is represented mathematically, for example as velocities or as coordinates). Automated solutions for producing correct tidal ellipse parameter input data are possible if a series of steps are followed correctly, including the use of Cartesian coordinates during interpolation. This note comprises the first published description of scenarios where tidal ellipse parameter interpolation errors can arise, and of a procedure to successfully avoid these errors when generating tidal inputs for regional and/or coastal ocean numerical models. We explain how a straightforward sequence of data production, format conversion, interpolation, and format reconversion steps may be used to check for the potential occurrence and avoidance of tidal ellipse interpolation and phase errors. This sequence is demonstrated via a case study of the M2 tidal constituent in the seas around Korea but is designed to be universally applicable. We also recommend employing tidal ellipse parameter calculation methods that avoid the use of Foreman's (1978) "northern semi-major axis convention" since, as revealed in our analysis, this commonly used conversion can result in inclination interpolation errors even when Cartesian coordinate-based "vector embodiment" solutions are employed.
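The wraparound-safe interpolation recommended here amounts to interpolating unit-vector components rather than raw angles. A minimal sketch, assuming made-up grid values: phase is 360°-periodic, while ellipse inclination is 180°-periodic and is handled by doubling the angle before conversion.

```python
# Hedged sketch of wraparound-safe interpolation of tidal ellipse angles:
# convert each angle to a unit vector, interpolate the Cartesian components,
# then recover the angle with atan2, so 359 -> 0 crossings stay smooth.
import numpy as np

def interp_angle(a_deg, b_deg, w, period=360.0):
    """Interpolate between two angles (weight w toward b) without a jump."""
    k = 360.0 / period                       # map the period onto a full circle
    ang = np.radians(k * np.array([a_deg, b_deg]))
    x = (1 - w) * np.cos(ang[0]) + w * np.cos(ang[1])
    y = (1 - w) * np.sin(ang[0]) + w * np.sin(ang[1])
    return (np.degrees(np.arctan2(y, x)) / k) % period

# phase crossing the 359/0 "boundary": naive linear interpolation gives
# ~180.5, while the component-based result is the physically smooth ~0.5
print(interp_angle(359.0, 2.0, 0.5))                 # -> 0.5
# inclination (180-degree periodic) crossing 179/1
print(interp_angle(179.0, 1.0, 0.5, period=180.0))   # -> 0.0
```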
The Language of Scholarship: How to Rapidly Locate and Avoid Common APA Errors.
Freysteinson, Wyona M; Krepper, Rebecca; Mellott, Susan
2015-10-01
This article is relevant for nurses and nursing students who are writing scholarly documents for work, school, or publication and who have a basic understanding of American Psychological Association (APA) style. Common APA errors on the reference list and in citations within the text are reviewed. Methods to quickly find and reduce those errors are shared. Copyright 2015, SLACK Incorporated.
Rejman, Marek
2013-01-01
The aim of this study was to analyze the error structure in propulsive movements with regard to its influence on monofin swimming speed. The random cycles performed by six swimmers were filmed during a progressive test (900 m). An objective method to estimate errors committed in the area of angular displacement of the feet and monofin segments was employed. The parameters were compared with a previously described model. Mutual dependences between the level of errors, stroke frequency, stroke length and amplitude in relation to swimming velocity were analyzed. The results showed that proper foot movements and the avoidance of errors arising at the distal part of the fin ensure the progression of swimming speed. An individual stroke parameter distribution which consists of optimally increasing stroke frequency to the maximal possible level that enables the stabilization of stroke length leads to the minimization of errors. Identification of key elements in the stroke structure based on the analysis of errors committed should aid in improving monofin swimming technique. Key points: The monofin swimming technique was evaluated through the prism of objectively defined errors committed by the swimmers. The dependences between the level of errors, stroke rate, stroke length and amplitude in relation to swimming velocity were analyzed. Optimally increasing stroke rate to the maximal possible level that enables the stabilization of stroke length leads to the minimization of errors. Proper foot movement and the avoidance of errors arising at the distal part of the fin provide for the progression of swimming speed. The key elements for improving monofin swimming technique, based on the analysis of errors committed, were identified. PMID:24149742
On the Confounding Effect of Temperature on Chemical Shift-Encoded Fat Quantification
Hernando, Diego; Sharma, Samir D.; Kramer, Harald; Reeder, Scott B.
2014-01-01
Purpose: To characterize the confounding effect of temperature on chemical shift-encoded (CSE) fat quantification. Methods: The proton resonance frequency of water, unlike that of triglycerides, depends on temperature. This leads to a temperature dependence of the spectral models of fat (relative to water) that are commonly used by CSE-MRI methods. Simulation analysis was performed for 1.5 Tesla CSE fat–water signals at various temperatures and echo time combinations. Oil–water phantoms were constructed and scanned at temperatures between 0 and 40°C using spectroscopy and CSE imaging at three echo time combinations. An explanted human liver, rejected for transplantation due to steatosis, was scanned using spectroscopy and CSE imaging. Fat–water reconstructions were performed using four different techniques: magnitude and complex fitting, with standard or temperature-corrected signal modeling. Results: In all experiments, magnitude fitting with standard signal modeling resulted in large fat quantification errors. Errors were largest for echo time combinations near TEinit ≈ 1.3 ms, ΔTE ≈ 2.2 ms. Errors in fat quantification caused by temperature-related frequency shifts were smaller with complex fitting, and were avoided using a temperature-corrected signal model. Conclusion: Temperature is a confounding factor for fat quantification. If not accounted for, it can result in large errors in fat quantification in phantom and ex vivo acquisitions. PMID:24123362
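The temperature dependence enters through the water resonance: the proton resonance frequency of water shifts by roughly -0.01 ppm/°C, while triglyceride peaks are essentially temperature-insensitive. A minimal sketch of the resulting fat-water frequency difference, assuming a simplified single-peak 3.4 ppm fat model referenced at 37°C (the multi-peak fat spectrum used in practice is not modeled here):

```python
# Hedged sketch: shift of the water resonance with temperature (the PRF
# effect, approx. -0.01 ppm/degC) relative to temperature-insensitive fat
# peaks, which is the confounder described above.
GAMMA_MHZ_PER_T = 42.577          # 1H gyromagnetic ratio / 2*pi
PRF_PPM_PER_C = -0.01             # water PRF temperature coefficient (approx.)
FAT_WATER_PPM_37C = 3.4           # assumed single-peak fat-water shift at 37 C

def fat_water_shift_hz(b0_tesla, temp_c):
    """Fat-water frequency difference (Hz) at a given temperature."""
    ppm = FAT_WATER_PPM_37C + PRF_PPM_PER_C * (temp_c - 37.0)
    return ppm * GAMMA_MHZ_PER_T * b0_tesla   # ppm * MHz = Hz

for t in (0, 20, 37, 40):
    print(f"{t:3d} C: {fat_water_shift_hz(1.5, t):7.1f} Hz at 1.5 T")
# the shift grows by ~24 Hz at 1.5 T between 37 C and 0 C, which a fixed
# 37 C spectral model cannot absorb, biasing the fat fraction estimate
```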
Virtual design and construction of plumbing systems
NASA Astrophysics Data System (ADS)
Filho, João Bosco P. Dantas; Angelim, Bruno Maciel; Guedes, Joana Pimentel; de Castro, Marcelo Augusto Farias; Neto, José de Paula Barros
2016-12-01
Traditionally, the design coordination process is carried out by overlaying and comparing 2D drawings made by different project participants. Detecting information errors from a composite drawing is especially challenging and error prone. This procedure usually leaves many design errors undetected until construction begins, which typically leads to rework. Correcting conflict issues that were not identified during the design and coordination phase reduces the overall productivity of everyone involved in the construction process. The identification of construction issues in the field generates Requests for Information (RFIs), which are one of the causes of delays. The application of Virtual Design and Construction (VDC) tools to the coordination process can bring significant value to architecture, structure, and mechanical, electrical, and plumbing (MEP) designs in terms of a reduced number of undetected errors and requests for information. This paper is focused on evaluating requests for information (RFIs) associated with the water/sanitary facilities of a BIM model. Thus, it is expected to yield improvements to water/sanitary facility designs, as well as to assist the virtual construction team in noticing and identifying design problems. This is exploratory and descriptive research using a qualitative methodology. The study classifies RFIs into six categories: correction, omission, validation of information, modification, divergence of information and verification. The results demonstrate VDC's contribution to improving plumbing system designs. Recommendations are suggested to identify and avoid these RFI types in the plumbing system design process or during virtual construction.
Real-time and accurate rail wear measurement method and experimental analysis.
Liu, Zhen; Li, Fengjiao; Huang, Bangkui; Zhang, Guangjun
2014-08-01
When a train is running on uneven or curved rails, it generates violent vibrations on the rails. As a result, the light plane of the single-line structured light vision sensor is not vertical, causing errors in rail wear measurements (referred to as vibration errors in this paper). To avoid vibration errors, a novel rail wear measurement method is introduced in this paper, which involves three main steps. First, a multi-line structured light vision sensor (which has at least two linear laser projectors) projects a stripe-shaped light onto the inside of the rail. Second, the central points of the light stripes in the image are extracted quickly, and the three-dimensional profile of the rail is obtained based on the mathematical model of the structured light vision sensor. The obtained rail profile is then transformed from the measurement coordinate frame (MCF) to the standard rail coordinate frame (RCF) by taking the three-dimensional profile of the measured rail waist as the datum. Finally, rail wear constraint points are adopted to simplify the location of the rail wear points, and the profile composed of the rail wear points is compared with the standard rail profile in RCF to determine the rail wear. Both real-data experiments and simulation experiments show that the vibration errors can be eliminated when the proposed method is used.
Most Common Formal Grammatical Errors Committed by Authors
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.
2017-01-01
Empirical evidence has been provided about the importance of avoiding American Psychological Association (APA) errors in the abstract, body, reference list, and table sections of empirical research articles. Specifically, authors are significantly more likely to have their manuscripts rejected for publication if they commit numerous APA…
... can also affect your breath. Common examples of foods and beverages that may cause bad breath include onions, garlic, ... and vegetables every day. Eat less meat. Avoid foods that cause you to have bad breath. Also try to avoid alcoholic beverages, which often cause bad breath. Avoid using tobacco ...
Aging and the intrusion superiority effect in visuo-spatial working memory.
Cornoldi, Cesare; Bassani, Chiara; Berto, Rita; Mammarella, Nicola
2007-01-01
This study investigated the active component of visuo-spatial working memory (VSWM) in younger and older adults, testing the hypotheses that elderly individuals have a poorer performance than younger ones and that errors in active VSWM tasks depend, at least partially, on difficulties in avoiding intrusions (i.e., avoiding already activated information). In two experiments, participants were presented with sequences of matrices on which three positions were pointed out sequentially: their task was to process all the positions but indicate only the final position of each sequence. Results showed a poorer performance in the elderly compared to the younger group and a higher number of intrusion errors (errors due to activated but irrelevant positions) than invention errors (errors consisting of pointing out a position never indicated by the experimenter). The number of errors increased when a concurrent task was introduced (Experiment 1) and was affected by different patterns of matrices (Experiment 2). In general, the results show that elderly people have an impaired VSWM and produce a large number of errors due to inhibition failures. However, both the younger and the older adults' visuo-spatial working memory was affected by the presence of activated irrelevant information, the reduction of the available resources, and task constraints.
Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J
2014-06-01
We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with cost per error avoided at £79 (US$131). We aimed to estimate the cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches 59% probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
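The dominance logic reported above reduces to simple arithmetic on the cost and QALY differences. A sketch with the rounded per-practice figures from the abstract (the published ICER of -£3,037 comes from unrounded trial values, so the rounded inputs reproduce it only approximately):

    # Incremental cost-effectiveness ratio from per-practice differences.
    delta_cost = -2679.0   # GBP: PINCER cost minus comparator cost
    delta_qaly = 0.81      # extra QALYs per practice (deterministic analysis)

    icer = delta_cost / delta_qaly          # about -3307 GBP/QALY here
    dominant = delta_cost < 0 < delta_qaly  # cheaper AND more effective
    print(f"ICER = {icer:.0f} GBP/QALY, dominant = {dominant}")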
A Review of Models and Procedures for Synthetic Validation for Entry-Level Army Jobs
1988-12-01
…demonstrated that training judges to avoid various types of rating errors is successful. Ivancevich (1979), for example, found that extensive discussion of… [search-result snippet; the cited reference is Ivancevich, J. M. (1979), Longitudinal study of the effects of rater training on psychometric error in…, Personnel Psychology, 39, 337-344.]
Ten common errors beginning substance abuse workers make in group treatment.
Greif, G L
1996-01-01
Beginning therapists sometimes make mistakes when working with substance abusers in groups. This article discusses ten common errors that the author has observed. Five center on the therapist's approach and five center on the nuts and bolts of group leadership. Suggestions are offered for how to avoid them.
Marketing Across Cultures: Learning from U.S. Corporate Blunders.
ERIC Educational Resources Information Center
Raffield, Barney T., III
Errors in judgment made in international marketing as a result of American corporate provincialism or cultural ignorance are chronicled, and ways to avoid similar problems in the future are discussed. The marketing blunders, which have been both expensive and embarrassing, include errors in language use and mistaken assumptions about cultural…
Report on Automated Semantic Analysis of Scientific and Engineering Codes
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.; Follen, Greg (Technical Monitor)
2001-01-01
The loss of the Mars Climate Orbiter due to a software error reveals what insiders know: software development is difficult and risky because, in part, current practices do not readily handle the complex details of software. Yet for scientific software development the MCO mishap represents the tip of the iceberg; few errors are so public, and many errors are avoided only through a combination of expertise, care, and testing during development and modification. Further, this effort consumes valuable time and resources even as hardware costs and execution times continually decrease. Software development could use better tools! This lack of tools has motivated the semantic analysis work explained in this report. However, this work has a distinguishing emphasis: the tool focuses on automated recognition of the fundamental mathematical and physical meaning of scientific code. Further, its comprehension is measured by quantitatively evaluating overall recognition with practical codes. This emphasis is necessary if software errors, like the MCO error, are to be quickly and inexpensively avoided in the future. This report evaluates the progress made with this problem. It presents recommendations and describes the approach, the tool's status, the challenges, related research, and a development strategy.
Neural Mechanisms for Adaptive Learned Avoidance of Mental Effort.
Mitsuto Nagase, Asako; Onoda, Keiichi; Clifford Foo, Jerome; Haji, Tomoki; Akaishi, Rei; Yamaguchi, Shuhei; Sakai, Katsuyuki; Morita, Kenji
2018-02-05
Humans tend to avoid mental effort. Previous studies have demonstrated this tendency using various demand-selection tasks; participants generally avoid options associated with higher cognitive demand. However, it remains unclear whether humans avoid mental effort adaptively in uncertain and non-stationary environments, and if so, what neural mechanisms underlie this learned avoidance and whether they remain the same irrespective of cognitive-demand type. We addressed these issues by developing novel demand-selection tasks in which the associations between choice options and cognitive-demand levels change over time, with two variations using mental arithmetic and spatial reasoning problems (29:4 and 18:2 males:females). Most participants showed avoidance, and their choices depended on the demand experienced on multiple preceding trials. We assumed that participants updated the expected cost of mental effort through experience, and fitted their choices with reinforcement learning models, comparing several possibilities. Model-based fMRI analyses revealed that activity in the dorsomedial and lateral frontal cortices was positively correlated with the trial-by-trial expected cost for the chosen option, commonly across the different types of cognitive demand, and also revealed a trend of negative correlation in the ventromedial prefrontal cortex. We further identified correlates of cost-prediction-error at the time of problem presentation or of answering the problem, the latter of which partially overlapped with, or were proximal to, the correlates of expected cost at the time of the choice cue in the dorsomedial frontal cortex. These results suggest that humans adaptively learn to avoid mental effort, with neural mechanisms that represent expected cost and cost-prediction-error, and that the same mechanisms operate for various types of cognitive demand. SIGNIFICANCE STATEMENT In daily life, humans encounter various cognitive demands and tend to avoid high-demand options. However, it remains unclear whether humans avoid mental effort adaptively in dynamically changing environments, and if so, what the underlying neural mechanisms are and whether they operate irrespective of cognitive-demand type. To address these issues, we developed novel tasks in which participants could learn to avoid high-demand options in uncertain and non-stationary environments. Through model-based fMRI analyses, we found regions whose activity was correlated with the expected mental-effort cost, or with cost-prediction-error, regardless of demand type, with overlap or adjacency in the dorsomedial frontal cortex. This finding contributes to clarifying the mechanisms of cognitive-demand avoidance and provides empirical building blocks for the emerging computational theory of mental effort. Copyright © 2018 the authors.
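The reinforcement-learning account sketched in the abstract, an expected effort cost updated by a cost-prediction-error with choices biased toward the cheaper option, can be written as a delta-rule learner with softmax choice. All parameter values below are illustrative assumptions, not the authors' fitted model:

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, beta = 0.3, 5.0    # learning rate and inverse temperature (assumed)
    cost = np.zeros(2)        # expected effort cost of the two options

    for trial in range(200):
        p = np.exp(-beta * cost)
        p /= p.sum()                          # softmax: prefer the cheaper option
        choice = rng.choice(2, p=p)
        # Experienced demand; in the task the option-demand mapping drifts,
        # making the environment uncertain and non-stationary.
        high_demand = rng.random() < (0.8 if choice == 0 else 0.2)
        delta = float(high_demand) - cost[choice]   # cost-prediction-error
        cost[choice] += alpha * delta               # update the chosen option only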
Lindström, Björn R; Mattsson-Mårn, Isak Berglund; Golkar, Armita; Olsson, Andreas
2013-01-01
Cognitive control is needed when mistakes have consequences, especially when such consequences are potentially harmful. However, little is known about how the aversive consequences of deficient control affect behavior. To address this issue, participants performed a two-choice response time task where error commissions were expected to be punished by electric shocks during certain blocks. By manipulating (1) the perceived punishment risk (no, low, high) associated with error commissions, and (2) response conflict (low, high), we showed that motivation to avoid punishment enhanced performance during high response conflict. As a novel index of the processes enabling successful cognitive control under threat, we explored electromyographic activity in the corrugator supercilii (cEMG) muscle of the upper face. The corrugator supercilii is partially controlled by the anterior midcingulate cortex (aMCC) which is sensitive to negative affect, pain and cognitive control. As hypothesized, the cEMG exhibited several key similarities with the core temporal and functional characteristics of the Error-Related Negativity (ERN) ERP component, the hallmark index of cognitive control elicited by performance errors, and which has been linked to the aMCC. The cEMG was amplified within 100 ms of error commissions (the same time-window as the ERN), particularly during the high punishment risk condition where errors would be most aversive. Furthermore, similar to the ERN, the magnitude of error cEMG predicted post-error response time slowing. Our results suggest that cEMG activity can serve as an index of avoidance motivated control, which is instrumental to adaptive cognitive control when consequences are potentially harmful.
A Flexible Latent Class Approach to Estimating Test-Score Reliability
ERIC Educational Resources Information Center
van der Palm, Daniël W.; van der Ark, L. Andries; Sijtsma, Klaas
2014-01-01
The latent class reliability coefficient (LCRC) is improved by using the divisive latent class model instead of the unrestricted latent class model. This results in the divisive latent class reliability coefficient (DLCRC), which unlike LCRC avoids making subjective decisions about the best solution and thus avoids judgment error. A computational…
Communication Vagueness in the Literature Review Section of Journal Article Submissions
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.
2018-01-01
Evidence has been provided about the importance of avoiding American Psychological Association (APA) errors in the abstract, body, reference list, and table sections of empirical research articles. Specifically, authors are significantly more likely to have their manuscripts rejected for publication if they fail to avoid APA violations--and, thus,…
Nimon, Kim; Zientek, Linda Reichwein; Henson, Robin K.
2012-01-01
The purpose of this article is to help researchers avoid common pitfalls associated with reliability including incorrectly assuming that (a) measurement error always attenuates observed score correlations, (b) different sources of measurement error originate from the same source, and (c) reliability is a function of instrumentation. To accomplish our purpose, we first describe what reliability is and why researchers should care about it with focus on its impact on effect sizes. Second, we review how reliability is assessed with comment on the consequences of cumulative measurement error. Third, we consider how researchers can use reliability generalization as a prescriptive method when designing their research studies to form hypotheses about whether or not reliability estimates will be acceptable given their sample and testing conditions. Finally, we discuss options that researchers may consider when faced with analyzing unreliable data. PMID:22518107
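The first pitfall, assuming measurement error always attenuates observed correlations, is usually discussed against the classical attenuation relation, in which the observed correlation is the true-score correlation shrunk by the square root of the two reliabilities. A worked one-liner using textbook notation and invented numbers:

    import math

    rho_true = 0.60          # hypothetical true-score correlation
    rxx, ryy = 0.80, 0.70    # hypothetical reliabilities of the two measures

    r_observed = rho_true * math.sqrt(rxx * ryy)     # ~0.449: attenuated
    # Disattenuation inverts the relation, but with noisy reliability
    # estimates the corrected value can exceed 1 and mislead effect sizes.
    rho_corrected = r_observed / math.sqrt(rxx * ryy)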
Limitations of the paraxial Debye approximation.
Sheppard, Colin J R
2013-04-01
In the paraxial form of the Debye integral for focusing, higher order defocus terms are ignored, which can result in errors in dealing with aberrations, even for low numerical aperture. These errors can be avoided by using a different integration variable. The aberrations of a glass slab, such as a coverslip, are expanded in terms of the new variable, and expressed in terms of Zernike polynomials to assist with aberration balancing. Tube length error is also discussed.
Ziemann, Christian; Stille, Maik; Cremers, Florian; Buzug, Thorsten M; Rades, Dirk
2018-04-17
Metal artifacts caused by high-density implants lead to incorrectly reconstructed Hounsfield units in computed tomography images. This can result in a loss of accuracy in dose calculation in radiation therapy. This study investigates the potential of two metal artifact reduction algorithms, Augmented Likelihood Image Reconstruction and linear interpolation, to improve dose calculation in the presence of metal artifacts. In order to simulate a pelvis with a double-sided total endoprosthesis, a polymethylmethacrylate phantom was equipped with two steel bars. Artifacts were reduced by applying the Augmented Likelihood Image Reconstruction, a linear interpolation, and a manual correction approach. Using the treatment planning system Eclipse™, identical planning target volumes for an idealized prostate, as well as structures for bladder and rectum, were defined in corrected and noncorrected images. Volumetric modulated arc therapy plans were created with double arc rotations, with and without avoidance sectors that mask out the prosthesis. The irradiation plans were analyzed for variations in the dose distribution and its homogeneity. Dosimetric measurements were performed using isocentrically positioned ionization chambers. Irradiation plans based on images containing artifacts led to a dose error in the isocenter of up to 8.4%. Corrections with the Augmented Likelihood Image Reconstruction reduced this dose error to 2.7%, corrections with linear interpolation to 3.2%, and manual artifact correction to 4.1%. When artifact correction was applied, dose homogeneity was slightly improved for all investigated methods. Furthermore, the calculated mean doses for rectum and bladder were higher when avoidance sectors were applied. Streaking artifacts cause imprecise dose calculation within irradiation plans. Using a metal artifact correction algorithm, planning accuracy can be significantly improved. Best results were accomplished using the Augmented Likelihood Image Reconstruction algorithm. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Use of failure mode effect analysis (FMEA) to improve medication management process.
Jain, Khushboo
2017-03-13
Purpose Medication management is a complex process, at high risk of error with life-threatening consequences. The focus should be on devising strategies to avoid errors and make the process self-reliable by ensuring prevention of errors and/or error detection at subsequent stages. The purpose of this paper is to use failure mode effect analysis (FMEA), a systematic proactive tool, to identify the likelihood of, and the causes for, the process failing at various steps, and to prioritise them in order to devise risk-reduction strategies that improve patient safety. Design/methodology/approach The study was designed as an observational analytical study of the medication management process in the inpatient area of a multi-speciality hospital in Gurgaon, Haryana, India. A team was formed to study the complex process of medication management in the hospital, and the FMEA tool was used. Corrective actions were developed based on the prioritised failure modes, then implemented and monitored. Findings The percentage distribution of medication errors observed by the team showed a maximum of transcription errors (37 per cent) followed by administration errors (29 per cent), indicating the need to identify the causes and effects of their occurrence. In all, 11 failure modes were identified, of which the five most critical were prioritised based on the risk priority number (RPN). The process was repeated after corrective actions were taken, which resulted in reductions of about 40 per cent on average, and up to around 60 per cent, in the RPN of the prioritised failure modes. Research limitations/implications FMEA is a time-consuming process and requires a multidisciplinary team with a good understanding of the process being analysed. FMEA only helps in identifying the ways a process can fail; it does not eliminate them, and additional effort is required to develop action plans and implement them. Frank discussion and agreement among the team members are required, not only for successfully conducting the FMEA but also for implementing the corrective actions. Practical implications FMEA is an effective proactive risk-assessment tool and a continuous process that can be carried out in phases. The corrective actions taken resulted in a reduction in RPN, subject to further evaluation and use by others depending on the facility type. Originality/value The application of the tool helped the hospital identify failures in the medication management process, prioritising and correcting them and thereby leading to improvement.
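The prioritisation step in an FMEA is simple arithmetic: each failure mode's risk priority number (RPN) is the product of its severity, occurrence, and detectability ratings. A minimal sketch with invented modes and ratings (the paper's own eleven failure modes are not reproduced here):

    # (failure mode, severity, occurrence, detectability), each rated 1-10;
    # the entries below are illustrative placeholders.
    modes = [
        ("transcription error",  8, 6, 5),
        ("administration error", 9, 5, 4),
        ("dispensing error",     7, 4, 3),
    ]

    # RPN = S * O * D; higher RPN means higher priority for corrective action.
    for name, s, o, d in sorted(modes, key=lambda m: m[1]*m[2]*m[3], reverse=True):
        print(f"{name:22s} RPN = {s * o * d}")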
Recursive Construction of Noiseless Subsystem for Qudits
NASA Astrophysics Data System (ADS)
Güngördü, Utkan; Li, Chi-Kwong; Nakahara, Mikio; Poon, Yiu-Tung; Sze, Nung-Sing
2014-03-01
When the environmental noise acting on a system has certain symmetries, a subsystem of the total system can avoid errors. Encoding information into such a subsystem is advantageous since it does not require any error syndrome measurements, which may introduce further errors into the system. However, utilizing such a subsystem becomes impractical as the number of qudits increases. A recursive scheme offers a solution to this problem. Here, we review a previously introduced recursive construction, which can asymptotically protect 1/d of the qudits in the system against collective errors.
A logic programming approach to medical errors in imaging.
Rodrigues, Susana; Brandão, Paulo; Nelas, Luís; Neves, José; Alves, Victor
2011-09-01
In 2000, the Institute of Medicine reported disturbing numbers on the scope and impact of medical error in the process of health delivery. Nevertheless, a solution to this problem may lie in the adoption of adverse event reporting and learning systems that can help to identify hazards and risks. It is crucial to apply models that identify the root causes of adverse events and enhance the sharing of knowledge and experience. Progress in efforts to improve patient safety has been frustratingly slow. Some of this lack of progress may be attributed to the absence of systems that take into account the characteristics of information about the real world. In our daily lives, we formulate most of our decisions based on incomplete, uncertain and even forbidden or contradictory information; one's knowledge is based less on exact facts and more on hypotheses, perceptions or indications. From the data collected in our adverse event treatment and learning system for medical imaging, and through the use of Extended Logic Programming for knowledge representation and reasoning, and the exploitation of new methodologies for problem solving, namely those based on the notion of agents and/or multi-agent systems, we intend to generate reports that identify the most relevant causes of error and define improvement strategies, drawing conclusions about the impact, place of occurrence, and form or type of event recorded in the healthcare institutions. The Eindhoven Classification Model was extended and adapted to the medical imaging field and used to classify the root causes of adverse events. Extended Logic Programming was used for knowledge representation with defective information, allowing the universe of discourse to be modelled in terms of default data and knowledge. A systematization of the evolution of the body of knowledge about Quality of Information embedded in Root Cause Analysis was accomplished. An adverse event reporting and learning system was developed based on the presented approach to medical errors in imaging. This system was deployed in two Portuguese healthcare institutions, with an encouraging outcome. The system made it possible to verify that the majority of occurrences were concentrated in a few events that could be avoided. The developed system allowed automatic knowledge extraction, enabling report generation with strategies for the improvement of quality of care. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Chong, Cheefoong; Dai, Shuan
2013-08-02
To provide information on, and comparison of, visual impairment of Maori children with other children in New Zealand, in particular: prevalence of blindness, causes of visual impairment, and avoidable causes of visual impairment. Retrospective data collection utilising the WHO/PBL eye examination record for children with blindness and low vision at the Blind and Low Vision Education Network New Zealand (BLENNZ), Homai. Individuals not of Maori ethnicity or over the age of 16 were excluded from the study. 106 blind and 64 low-vision Maori children were studied. The main cause of blindness in Maori children is cortical visual impairment. Twenty-eight percent of causes of blindness in this population are potentially avoidable, with non-accidental injury as the main cause. The prevalence of blindness and low vision in children amounts to 0.05% and 0.03%, respectively. The prevalence and causes of childhood blindness are comparable to those of the other ethnic groups in New Zealand. The main difference lies in avoidable causes of blindness, which appear to be much higher in the Maori population. The leading avoidable cause of blindness in Maori children is non-accidental injury.
Modeling Error Distributions of Growth Curve Models through Bayesian Methods
ERIC Educational Resources Information Center
Zhang, Zhiyong
2016-01-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is…
Phantom feet on digital radionuclide images and other scary computer tales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freitas, J.E.; Dworkin, H.J.; Dees, S.M.
1989-09-01
Malfunction of a computer-assisted digital gamma camera is reported. Despite what appeared to be adequate acceptance testing, an error in the system gave rise to switching of images and identification text. A suggestion is made for using a hot marker, which would avoid the potential error of misinterpretation of patient images.
Flynn, Fran; Evanish, Julie Q; Fernald, Josephine M; Hutchinson, Dawn E; Lefaiver, Cheryl
2016-08-01
Because of the high frequency of interruptions during medication administration, the effectiveness of strategies to limit interruptions during medication administration has been evaluated in numerous quality improvement initiatives in an effort to reduce medication administration errors. To evaluate the effectiveness of evidence-based strategies to limit interruptions during scheduled, peak medication administration times in 3 progressive cardiac care units (PCCUs). A secondary aim of the project was to evaluate the impact of limiting interruptions on medication errors. The percentages of interruptions and medication errors before and after implementation of evidence-based strategies to limit interruptions were measured by using direct observations of nurses on 2 PCCUs. Nurses in a third PCCU served as a comparison group. Interruptions (P < .001) and medication errors (P = .02) decreased significantly in 1 PCCU after implementation of evidence-based strategies to limit interruptions. Avoidable interruptions decreased 83% in PCCU1 and 53% in PCCU2 after implementation of the evidence-based strategies. Implementation of evidence-based strategies to limit interruptions in PCCUs decreases avoidable interruptions and promotes patient safety. ©2016 American Association of Critical-Care Nurses.
Diagnostic decision-making and strategies to improve diagnosis.
Thammasitboon, Satid; Cutrer, William B
2013-10-01
A significant portion of diagnostic errors arises through cognitive errors resulting from inadequate knowledge, faulty data gathering, and/or faulty verification. Experts estimate that 75% of diagnostic failures can be attributed to failures in clinicians' diagnostic thinking. The cognitive processes that underlie clinicians' diagnostic thinking are complex and intriguing, and it is imperative that clinicians acquire an explicit appreciation and application of different cognitive approaches in order to make better decisions. A dual-process model that unifies many theories of decision-making has emerged as a promising template for understanding how clinicians think and judge efficiently in the diagnostic reasoning process. The identification and implementation of strategies for decreasing or preventing such diagnostic errors has become a growing area of interest and research. Suggested strategies to decrease the incidence of diagnostic error include increasing clinicians' clinical expertise and avoiding inherent cognitive errors. Implementing interventions focused solely on avoiding errors may work effectively for patient safety issues such as medication errors. Addressing cognitive errors, however, requires equal effort in expanding the individual clinician's expertise. Providing cognitive support to clinicians for robust diagnostic decision-making serves as the final strategic target for decreasing diagnostic errors. Clinical guidelines and algorithms offer another method for streamlining decision-making and decreasing the likelihood of cognitive diagnostic errors. Addressing cognitive processing errors is undeniably the most challenging task in reducing diagnostic errors. While many suggested approaches exist, they are mostly based on theories and science in cognitive psychology, decision-making, and education. The proposed interventions are primarily suggestions, and very few of them have been tested in actual practice settings. A collaborative research effort is required to effectively address cognitive processing errors. Researchers in various areas, including patient safety/quality improvement, decision-making, and problem solving, must work together to make medical diagnosis more reliable. © 2013 Mosby, Inc. All rights reserved.
[Discussion on six errors of formulas corresponding to syndromes in using the classic formulas].
Bao, Yan-ju; Hua, Bao-jin
2012-12-01
The theory of formulas corresponding to syndromes is one of the characteristics of the Treatise on Cold Damage and Miscellaneous Diseases (Shanghan Zabing Lun) and one of the main principles in applying classic prescriptions. Following the principle of formulas corresponding to syndromes is important for achieving effect. However, some medical practitioners find that the actual clinical effect is far less than expected. Six errors in the use of classic prescriptions under the theory of formulas corresponding to syndromes are the most important causes to consider: paying attention only to the local syndromes while neglecting the whole; only to formulas corresponding to syndromes while neglecting the pathogenesis; only to syndromes while neglecting pulse diagnosis; only to unilateral prescriptions while neglecting combined prescriptions; only to classic prescriptions while neglecting modern formulas; and only to the formulas while neglecting drug dosage. Therefore, in the clinical application of classic prescriptions and the theory of formulas corresponding to syndromes, it is necessary to consider not only the patients' clinical syndromes but also the combination of the main syndrome and the pathogenesis. In addition, comprehensive syndrome differentiation, modern formulas, current prescriptions, combined prescriptions, and drug dosage all contribute to avoiding clinical errors and improving clinical effects.
Albert, X; Bayo, A; Alfonso, J L; Cortina, P; Corella, D
1996-01-01
OBJECTIVES: To measure variations in the Holland and Charlton classifications of avoidable death causes and to estimate the effect of the Spanish national health system on avoidable mortality. DESIGN: Mortality in the Valencian Community was assessed between 1975 and 1990. The classifications of Holland and Charlton, used to assess avoidable causes of death, were compared. Holland's classification was then used to divide avoidable mortality into two groups--medical care indicators (MCI), which show the effectiveness of health care, and national health policy indicators (NHPI), which show the status of primary prevention. Comparisons were made with rates, group rates, and population rates. Trends and indices were also studied. SETTING: Valencia, Spain, 1975-90. RESULTS: During the study period, avoidable mortality (assessed only by MCI) fell 63%, whereas the remainder of the mortality (non-MCI causes, that is, all the non-avoidable causes together with the NHPI group) fell by 17%. If it is assumed that the mortality due to non-MCI causes indicates the overall effect of environmental, social, nutritional, and genetic influences, then the difference between this and the MCI group would take us nearer the actual effect of the intervention of the health system. CONCLUSIONS: It is concluded that in this community, the health system has been responsible for approximately 47% of the total reduction in mortality from avoidable causes in the period studied. PMID:8935465
Analytical Assessment of Simultaneous Parallel Approach Feasibility from Total System Error
NASA Technical Reports Server (NTRS)
Madden, Michael M.
2014-01-01
In a simultaneous paired approach to closely-spaced parallel runways, a pair of aircraft flies in close proximity on parallel approach paths. The aircraft pair must maintain a longitudinal separation within a range that avoids wake encounters and, if one of the aircraft blunders, avoids collision. Wake avoidance defines the rear gate of the longitudinal separation. The lead aircraft generates a wake vortex that, with the aid of crosswinds, can travel laterally onto the path of the trail aircraft. As runway separation decreases, the wake has less distance to traverse to reach the path of the trail aircraft. The total system error of each aircraft further reduces this distance. The total system error is often modeled as a probability distribution function. Therefore, Monte-Carlo simulations are a favored tool for assessing a "safe" rear-gate. However, safety for paired approaches typically requires that a catastrophic wake encounter be a rare one-in-a-billion event during normal operation. Using a Monte-Carlo simulation to assert this event rarity with confidence requires a massive number of runs. Such large runs do not lend themselves to rapid turn-around during the early stages of investigation when the goal is to eliminate the infeasible regions of the solution space and to perform trades among the independent variables in the operational concept. One can employ statistical analysis using simplified models more efficiently to narrow the solution space and identify promising trades for more in-depth investigation using Monte-Carlo simulations. These simple, analytical models not only have to address the uncertainty of the total system error but also the uncertainty in navigation sources used to alert an abort of the procedure. This paper presents a method for integrating total system error, procedure abort rates, avionics failures, and surveillance errors into a statistical analysis that identifies the likely feasible runway separations for simultaneous paired approaches.
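One way such a simplified analytical model can fold total system error into the wake-encounter question is to treat each aircraft's lateral TSE as a zero-mean Gaussian and evaluate the Gaussian tail beyond the remaining lateral gap. The sigmas, separation, and wake-drift distance below are illustrative assumptions, not values from the study:

    import math

    sigma_lead, sigma_trail = 30.0, 30.0   # lateral TSE standard deviations, m
    runway_sep = 230.0                     # runway centreline separation, m
    wake_drift = 150.0                     # assumed lateral wake travel, m

    # Relative lateral error of the pair combines the two independent TSEs.
    sigma_pair = math.hypot(sigma_lead, sigma_trail)

    # Probability the residual gap is consumed by lateral error (Gaussian tail).
    gap = runway_sep - wake_drift
    p_encounter = 0.5 * math.erfc(gap / (sigma_pair * math.sqrt(2.0)))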
Prevalence of visual impairment and outcomes of cataract surgery in Chaonan, South China
Zhang, Xiujuan; Li, Emmy Y.; Leung, Christopher Kai-Shun; Musch, David C.; Tang, Xin; Zheng, Chongren; He, Mingguang; Chang, David F.
2017-01-01
Purpose: To estimate the prevalence and causes of blindness and visual impairment (VI), and report the outcomes of cataract surgery, in Chaonan Region, Guangdong Province, southern China. Design: Cross-sectional population-based survey. Participants: A total of 3484 participants, including 1397 men (40.1%) and 2087 women (59.9%) aged ≥50 years, were examined (94.2% response rate). Method: A two-stage cluster sampling procedure was used to select 3700 participants aged ≥50 years from 74 clusters of Chaonan Region. Participants were examined according to the Rapid Assessment of Avoidable Blindness (RAAB) method. Blindness and visual impairment (VI) were defined by World Health Organization criteria. Participants with visual acuity (VA) < 6/18 in either eye were examined by ophthalmologists. The primary causes of blindness and VI were reported with reference to the participant's better eye. Main outcome measures: Prevalence and main causes of blindness, severe visual impairment (SVI), and VI, and the outcomes of cataract surgery. Results: The standardized prevalence rates of blindness, SVI, and VI were 2.4% (95% confidence interval [CI], 1.9-2.9%), 1.0% (95% CI, 0.7-1.4%), and 6.4% (95% CI, 5.6-7.1%), respectively. The principal cause of blindness and SVI was cataract, accounting for 67.1% and 67.6% respectively, and the principal cause of VI was refractive error (46.9%). One hundred and fifty-five of 3484 (4.4%) people (211 eyes) had undergone cataract surgery. Of the 211 operated eyes, 96.7% were pseudophakic, and 67.2% had a presenting visual acuity (PVA) of 6/18 or better. Conclusions: The prevalence of blindness, SVI, and VI was high among rural residents in Chaonan. Cataract remained the leading cause of avoidable blindness. Outcomes of cataract surgery performed in rural private clinics were suboptimal. Quality-control initiatives such as hands-on training programs should be introduced to improve cataract surgery outcomes. PMID:28797099
The first rapid assessment of avoidable blindness (RAAB) in Thailand.
Isipradit, Saichin; Sirimaharaj, Maytinee; Charukamnoetkanok, Puwat; Thonginnetra, Oraorn; Wongsawad, Warapat; Sathornsumetee, Busaba; Somboonthanakij, Sudawadee; Soomsawasdi, Piriya; Jitawatanarat, Umapond; Taweebanjongsin, Wongsiri; Arayangkoon, Eakkachai; Arame, Punyawee; Kobkoonthon, Chinsuchee; Pangputhipong, Pannet
2014-01-01
The majority of vision loss is preventable or treatable. Population surveys are crucial for planning, implementing, and monitoring policies and interventions to eliminate avoidable blindness and visual impairment. This is the first rapid assessment of avoidable blindness (RAAB) study in Thailand. A cross-sectional study of a population in Thailand aged 50 years or over aimed to assess the prevalence and causes of blindness and visual impairment. Using the Thailand National Census 2010 as the sampling frame, a stratified four-stage cluster sampling based on probability proportional to size was conducted in 176 enumeration areas from 11 provinces. Participants received a comprehensive eye examination by ophthalmologists. The age- and sex-adjusted prevalences of blindness (presenting visual acuity (VA) <20/400), severe visual impairment (VA <20/200 but ≥20/400), and moderate visual impairment (VA <20/70 but ≥20/200) were 0.6% (95% CI: 0.5-0.8), 1.3% (95% CI: 1.0-1.6), and 12.6% (95% CI: 10.8-14.5). There was no significant difference among the four regions of Thailand. Cataract was the main cause of vision loss, accounting for 69.7% of blindness. Cataract surgical coverage in persons was 95.1% for a cut-off VA of 20/400. Refractive errors, diabetic retinopathy, glaucoma, and corneal opacities were responsible for 6.0%, 5.1%, 4.0%, and 2.0% of blindness, respectively. Thailand is on track to achieve the goal of VISION 2020. However, there is still much room for improvement. Policy refinements and innovative interventions are recommended to alleviate blindness and visual impairment, especially regarding the backlog of blinding cataract, management of non-communicable, chronic, age-related eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy, prevention of childhood blindness, and establishment of a robust eye health information system.
On the unity of children’s phonological error patterns: Distinguishing symptoms from the problem
Dinnsen, Daniel A.
2012-01-01
This article compares the claims of rule- and constraint-based accounts of three seemingly distinct error patterns, namely, Deaffrication, Consonant Harmony and Assibilation, in the sound system of a child with a phonological delay. It is argued that these error patterns are not separate problems, but rather are symptoms of a larger conspiracy to avoid word-initial coronal stops. The clinical implications of these findings are also considered. PMID:21787147
Classification based upon gene expression data: bias and precision of error rates.
Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L
2007-06-01
Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
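The two safeguards recommended above, two-level (nested) external cross-validation and a permutation check, are easy to express. A sketch in Python with scikit-learn on a deliberately non-informative dataset (the study itself used R with the PAMR package):

    import numpy as np
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 500))      # non-informative "expression" matrix
    y = rng.integers(0, 2, size=60)     # two arbitrary classes

    # Nested CV: the inner loop tunes C; the outer loop estimates error on
    # folds never touched during tuning, avoiding optimization bias.
    inner = GridSearchCV(LinearSVC(dual=False), {"C": [0.01, 0.1, 1.0]}, cv=3)
    nested_acc = cross_val_score(inner, X, y, cv=5).mean()

    # Permutation check: accuracy on shuffled labels should sit at the 0.5
    # baseline; a notable gap would flag residual selection bias.
    perm_acc = cross_val_score(inner, X, rng.permutation(y), cv=5).mean()
    print(nested_acc, perm_acc)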
Design and study of water supply system for supercritical unit boiler in thermal power station
NASA Astrophysics Data System (ADS)
Du, Zenghui
2018-04-01
In order to design and optimize the boiler feed-water system of a supercritical unit, the establishment of a highly accurate model of the controlled object and knowledge of its dynamic characteristics are prerequisites for developing a sound thermal control system. Mechanism-based modelling often leads to large systematic errors. Drawing on the information contained in historical operating data of typical boiler thermal systems, a modern intelligent identification method is therefore used to establish a high-precision quantitative model. This method avoids the difficulties caused by disturbance-experiment modelling on the actual system in the field, and provides a strong reference for the design and optimization of thermal automation control systems in thermal power plants.
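As an illustration of identification from historical operating data rather than mechanism modelling, the simplest data-driven route is a discrete-time ARX model fitted by least squares. The model orders, signal names, and toy data below are assumptions for the sketch, not the paper's method:

    import numpy as np

    def fit_arx(u, y, na=2, nb=2):
        # Least-squares fit of y[k] = -a1*y[k-1]-...-a_na*y[k-na]
        #                            + b1*u[k-1]+...+b_nb*u[k-nb].
        n = max(na, nb)
        rows = []
        for k in range(n, len(y)):
            rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
        Phi = np.array(rows)
        theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
        return theta  # [a1..a_na, b1..b_nb]

    # Hypothetical historical records: feed-water valve command u, drum level y.
    rng = np.random.default_rng(1)
    u = rng.normal(size=1000)
    y = np.zeros(1000)
    for k in range(2, 1000):
        y[k] = 1.2 * y[k-1] - 0.4 * y[k-2] + 0.5 * u[k-1] + 0.02 * rng.normal()
    theta = fit_arx(u, y)   # recovers roughly [-1.2, 0.4, 0.5, 0.0]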
VLBI-derived troposphere parameters during CONT08
NASA Astrophysics Data System (ADS)
Heinkelmann, R.; Böhm, J.; Bolotin, S.; Engelhardt, G.; Haas, R.; Lanotte, R.; MacMillan, D. S.; Negusini, M.; Skurikhina, E.; Titov, O.; Schuh, H.
2011-07-01
Time-series of zenith wet and total troposphere delays as well as north and east gradients are compared, and zenith total delays (ZTD) are combined on the level of parameter estimates. Input data sets are provided by ten Analysis Centers (ACs) of the International VLBI Service for Geodesy and Astrometry (IVS) for the CONT08 campaign (12-26 August 2008). The inconsistent usage of meteorological data and models, such as mapping functions, causes systematics among the ACs, and differing parameterizations and constraints add noise to the troposphere parameter estimates. The empirical standard deviation of ZTD among the ACs with regard to an unweighted mean is 4.6 mm. The ratio of the analysis noise to the observation noise assessed by the operator/software impact (OSI) model is about 2.5. These and other effects have to be accounted for to improve the intra-technique combination of VLBI-derived troposphere parameters. While the largest systematics caused by inconsistent usage of meteorological data can be avoided and the application of different mapping functions can be considered by applying empirical corrections, the noise has to be modeled in the stochastic model of intra-technique combination. The application of different stochastic models shows no significant effects on the combined parameters but results in different mean formal errors: the mean formal errors of the combined ZTD are 2.3 mm (unweighted), 4.4 mm (diagonal), 8.6 mm [variance component (VC) estimation], and 8.6 mm (operator/software impact, OSI). On the one hand, the OSI model, i.e. the inclusion of off-diagonal elements in the cofactor-matrix, considers the reapplication of observations, yielding a factor of about two for mean formal errors as compared to the diagonal approach. On the other hand, the combination based on VC estimation shows large differences among the VCs and exhibits a comparable scaling of formal errors. Thus, for the combination of troposphere parameters a combination of the two extensions of the stochastic model is recommended.
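In its simplest "diagonal" form, combining the ACs' ZTD estimates reduces to an inverse-variance weighted mean; the values below are made up for illustration. The OSI and VC variants generalize this by filling the off-diagonal elements of the cofactor matrix, which is one reason their formal errors come out larger:

    import numpy as np

    ztd   = np.array([2400.3, 2402.1, 2399.8, 2401.0])  # AC estimates, mm
    sigma = np.array([3.0, 5.0, 4.0, 6.0])              # formal errors, mm

    w = 1.0 / sigma**2
    ztd_weighted = (w * ztd).sum() / w.sum()    # diagonal stochastic model
    sigma_comb = np.sqrt(1.0 / w.sum())         # combined formal error
    ztd_unweighted = ztd.mean()                 # the unweighted alternative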
Learning processes underlying avoidance of negative outcomes.
Andreatta, Marta; Michelmann, Sebastian; Pauli, Paul; Hewig, Johannes
2017-04-01
Successful avoidance of a threatening event may negatively reinforce the behavior due to activation of brain structures involved in reward processing. Here, we further investigated the learning-related properties of avoidance using feedback-related negativity (FRN). The FRN is modulated by violations of an intended outcome (prediction error, PE), that is, the bigger the difference between intended and actual outcome, the larger the FRN amplitude is. Twenty-eight participants underwent an operant conditioning paradigm, in which a behavior (button press) allowed them to avoid a painful electric shock. During two learning blocks, participants could avoid an electric shock in 80% of the trials by pressing one button (avoidance button), or by not pressing another button (punishment button). After learning, participants underwent two test blocks, which were identical to the learning ones except that no shocks were delivered. Participants pressed the avoidance button more often than the punishment button. Importantly, response frequency increased throughout the learning blocks but it did not decrease during the test blocks, indicating impaired extinction and/or habit formation. In line with a PE account, FRN amplitude to negative feedback after correct responses (i.e., unexpected punishment) was significantly larger than to positive feedback (i.e., expected omission of punishment), and it increased throughout the blocks. Highly anxious individuals showed equal FRN amplitudes to negative and positive feedback, suggesting impaired discrimination. These results confirm the role of negative reinforcement in motivating behavior and learning, and reveal important differences between high and low anxious individuals in the processing of prediction errors. © 2017 Society for Psychophysiological Research.
Computations of Aerodynamic Performance Databases Using Output-Based Refinement
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2009-01-01
Objectives: handle complex geometry problems; control discretization errors via solution-adaptive mesh refinement; and focus on the aerodynamic databases of parametric and optimization studies, requiring (1) accuracy, satisfying prescribed error bounds, (2) robustness and speed, since such studies may require over 10^5 mesh generations, and (3) automation, avoiding user supervision, so that "expert meshes" are obtained independent of user skill and every case is run adaptively in production settings.
Hsu, Nina S; Novick, Jared M
2016-04-01
Speech unfolds swiftly, yet listeners keep pace by rapidly assigning meaning to what they hear. Sometimes, though, initial interpretations turn out to be wrong. How do listeners revise misinterpretations of language input moment by moment to avoid comprehension errors? Cognitive control may play a role by detecting when processing has gone awry and then initiating behavioral adjustments accordingly. However, no research to date has investigated a cause-and-effect interplay between cognitive-control engagement and the overriding of erroneous interpretations in real time. Using a novel cross-task paradigm, we showed that Stroop-conflict detection, which mobilizes cognitive-control procedures, subsequently facilitates listeners' incremental processing of temporarily ambiguous spoken instructions that induce brief misinterpretation. When instructions followed incongruent Stroop items, compared with congruent Stroop items, listeners' eye movements to objects in a scene reflected more transient consideration of the false interpretation and earlier recovery of the correct one. Comprehension errors also decreased. Cognitive-control engagement therefore accelerates sentence-reinterpretation processes, even as linguistic input is still unfolding. © The Author(s) 2016.
Predictable and reliable ECG monitoring over IEEE 802.11 WLANs within a hospital.
Park, Juyoung; Kang, Kyungtae
2014-09-01
Telecardiology provides mobility for patients who require constant electrocardiogram (ECG) monitoring. However, its safety depends on the predictability and robustness of data delivery, which must overcome errors in the wireless channel through which the ECG data are transmitted. We report here a framework that can be used to gauge the applicability of IEEE 802.11 wireless local area network (WLAN) technology to ECG monitoring systems in terms of delay constraints and transmission reliability. For this purpose, a medical-grade WLAN architecture achieves predictable delay through the combination of a medium access control mechanism based on the point coordination function provided by IEEE 802.11 and an error control scheme based on Reed-Solomon coding and block interleaving. The jitter buffer size needed to avoid service dropout caused by buffer underrun was determined for this architecture through analysis of variations in transmission delay. Finally, we assessed the architecture in terms of service latency and reliability by modeling the transmission of uncompressed two-lead electrocardiogram data from the MIT-BIH Arrhythmia Database, and we highlight the applicability of this wireless technology to telecardiology.
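Of the error-control chain described, the block-interleaving stage is the easiest to sketch: symbols are written row-by-row and transmitted column-by-column, so a burst of consecutive channel errors is spread across many Reed-Solomon codewords. The dimensions are illustrative and the RS stage itself is omitted:

    def interleave(data: bytes, rows: int, cols: int) -> bytes:
        # Write row-by-row, read column-by-column; if each row is one RS
        # codeword, a burst of up to `rows` consecutive channel errors then
        # hits each codeword at most once.
        assert len(data) == rows * cols
        return bytes(data[r * cols + c] for c in range(cols) for r in range(rows))

    def deinterleave(data: bytes, rows: int, cols: int) -> bytes:
        return bytes(data[c * rows + r] for r in range(rows) for c in range(cols))

    block = bytes(range(24))                 # e.g. 4 RS codewords of 6 bytes each
    sent = interleave(block, rows=4, cols=6)
    assert deinterleave(sent, rows=4, cols=6) == block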
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1981-01-01
Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.
NASA Astrophysics Data System (ADS)
Park, George; Yang, Xiang; Moin, Parviz
2017-11-01
Log-layer mismatch (LLM) refers to the erroneous shift of the mean velocity profile in the log-law region when wall models are coupled to the LES solution at the first off-wall grid points. It is often believed that the discretization and subgrid-scale modeling errors in the highly under-resolved near-wall region contaminate the first off-wall LES solution, thereby providing inaccurate input to wall models and resulting in inaccurate wall shear stress. Placing the LES/wall-model interface a couple of cells away from the wall has been recommended to avoid LLM. However, its non-local nature renders this method impractical for flows involving complex geometry, incurring significant overhead in LES mesh preparation and wall-model implementation. We propose an alternative remedy for LLM that removes the mismatch while still utilizing the first off-wall LES data. The method is based on filtering the wall-model input either in space or in time. It is simple, easy to implement, and would be particularly well suited for unstructured-grid LES involving complex geometries. We also demonstrate that LLM is caused by excessive correlation between the wall-model input and its wall shear stress output. This research is sponsored by NASA (NNX15AU93A) and ONR (FA9550-16-1-0319).
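The temporal variant of the proposed filtering remedy amounts to feeding the wall model a low-pass-filtered LES velocity rather than the instantaneous first-off-wall sample, which breaks the excessive input-output correlation. A minimal exponential-filter sketch; the filter time scale is an assumption, not the paper's prescription:

    def filter_wall_model_input(u_samples, dt, t_filter):
        # Exponential (low-pass) time filter applied to the LES velocity
        # sampled at the first off-wall point each time step.
        eps = dt / t_filter          # assumes dt << t_filter
        u_f = u_samples[0]
        filtered = []
        for u in u_samples:
            u_f = (1.0 - eps) * u_f + eps * u
            filtered.append(u_f)     # pass u_f, not u, to the wall model
        return filtered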
Prediction of human errors by maladaptive changes in event-related brain networks.
Eichele, Tom; Debener, Stefan; Calhoun, Vince D; Specht, Karsten; Engel, Andreas K; Hugdahl, Kenneth; von Cramon, D Yves; Ullsperger, Markus
2008-04-22
Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error-preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve approximately 30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations.
[Can the scattering of differences from the target refraction be avoided?].
Janknecht, P
2008-10-01
We wanted to check how the stochastic error is affected by two lens formulae. The power of the intraocular lens was calculated using the SRK-II formula and the Haigis formula after eye-length measurement with ultrasound and the IOL Master. Both lens formulae were partially differentiated, and Gauss error analysis was used to examine the propagated error. 61 patients with a mean age of 73.8 years were analysed. The postoperative refraction differed from the calculated refraction after ultrasound biometry using the SRK-II formula by 0.05 D (-1.56 to +1.31, S.D.: 0.59 D; 92% within +/- 1.0 D), after IOL Master biometry using the SRK-II formula by -0.15 D (-1.18 to +1.25, S.D.: 0.52 D; 97% within +/- 1.0 D), and after IOL Master biometry using the Haigis formula by -0.11 D (-1.14 to +1.14, S.D.: 0.48 D; 95% within +/- 1.0 D). The results did not differ from one another. The propagated error of the Haigis formula can be calculated as $\Delta P = \sqrt{(\Delta L \cdot (-4.206))^2 + (\Delta VK \cdot 0.9496)^2 + (\Delta DC \cdot (-1.4950))^2}$ ($\Delta L$: error in measuring axial length; $\Delta VK$: error in measuring anterior chamber depth; $\Delta DC$: error in measuring corneal power), and the propagated error of the SRK-II formula as $\Delta P = \sqrt{(\Delta L \cdot (-2.5))^2 + (\Delta DC \cdot (-0.9))^2}$. The propagated error of the Haigis formula is always larger than that of the SRK-II formula. Scattering of the postoperative difference from the expected refraction cannot be avoided completely. It is possible to limit the systematic error by developing complicated formulae like the Haigis formula. However, increasing the number of parameters that must be measured increases the dispersion of the calculated postoperative refraction. A compromise has to be found; the SRK-II formula is therefore not outdated.
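The two propagated-error expressions quoted above translate directly into code; the measurement uncertainties in the example calls are assumed for illustration, not taken from the study:

    import math

    def propagated_error_haigis(dL, dVK, dDC):
        # Gauss error propagation for the Haigis formula, with the
        # partial-derivative coefficients quoted in the abstract.
        return math.sqrt((dL * -4.206)**2 + (dVK * 0.9496)**2
                         + (dDC * -1.4950)**2)

    def propagated_error_srk2(dL, dDC):
        # Gauss error propagation for the SRK-II formula.
        return math.sqrt((dL * -2.5)**2 + (dDC * -0.9)**2)

    # Assumed uncertainties: 0.1 mm axial length, 0.1 mm anterior chamber
    # depth, 0.2 D corneal power.
    print(propagated_error_haigis(0.1, 0.1, 0.2))  # ~0.52 D
    print(propagated_error_srk2(0.1, 0.2))         # ~0.31 D, always smaller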
Koay, C L; Patel, D K; Tajunisah, I; Subrayan, V; Lansingh, V C
2015-04-01
To determine the avoidable causes of childhood blindness in Malaysia and to compare these with other middle-income countries, low-income countries, and high-income countries. Data were obtained from a school-of-the-blind study by Patel et al. and analysed for avoidable causes of childhood blindness. Six other studies with previously published data on childhood blindness in Bangladesh, Ethiopia, Nigeria, Indonesia, China, and the United Kingdom were reviewed for avoidable causes. Comparisons of the data and limitations of the studies are described. The prevalence of avoidable causes of childhood blindness in Malaysia is 50.5% of all cases of childhood blindness, whilst in low-income countries such as Bangladesh, Ethiopia, Nigeria, and Indonesia the prevalence was in excess of 60%. China had a low prevalence, but this is largely because most schools sampled were urban and thus did not represent the situation of the country. High-income countries had the lowest prevalence of avoidable childhood blindness. In middle-income countries such as Malaysia, cataract and retinopathy of prematurity are the main causes of avoidable childhood blindness. Low-income countries continue to struggle with infections such as measles and with nutritional deficiencies, such as vitamin A deficiency, both of which are main contributors to childhood blindness. In high-income countries, such as the United Kingdom, these problems are almost non-existent.
Measurement errors in voice-key naming latency for Hiragana.
Yamada, Jun; Tamaoka, Katsuo
2003-12-01
This study makes explicit the limitations and possibilities of voice-key naming latency research on single hiragana symbols (a Japanese syllabic script) by examining three sets of voice-key naming data against Sakuma, Fushimi, and Tatsumi's 1997 speech-analyzer voice-waveform data. Analysis showed that voice-key measurement errors can be substantial in standard procedures, as they may conceal the true effects of significant variables involved in hiragana-naming behavior. While one can avoid voice-key measurement errors to some extent by applying Sakuma et al.'s deltas and by excluding initial phonemes that induce measurement errors, such errors may be safely ignored when test items are words or other higher-level linguistic materials.
GIZMO: Multi-method magneto-hydrodynamics+gravity code
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.
2014-10-01
GIZMO is a flexible, multi-method magneto-hydrodynamics+gravity code that solves the hydrodynamic equations using a variety of different methods. It introduces new Lagrangian Godunov-type methods that allow solving the fluid equations with a moving particle distribution that is automatically adaptive in resolution and avoids the advection errors, angular momentum conservation errors, and excessive diffusion problems that seriously limit the applicability of adaptive mesh refinement (AMR) codes, while simultaneously avoiding the low-order errors inherent to simpler methods like smoothed-particle hydrodynamics (SPH). GIZMO also allows the use of SPH either in “traditional” form or “modern” (more accurate) forms, or the use of a mesh. Self-gravity is solved quickly with a BH-Tree (optionally a hybrid PM-Tree for periodic boundaries) and on-the-fly adaptive gravitational softenings. The code is descended from P-GADGET, itself descended from GADGET-2 (ascl:0003.001), and many of the naming conventions remain (for the sake of compatibility with the large library of GADGET work and analysis software).
Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)
NASA Technical Reports Server (NTRS)
Schmalz, Tyler; Ryan, Jack
2011-01-01
The Automatic Ground Collision Avoidance System (Auto-GCAS) utilizes Digital Terrain Elevation Data (DTED) stored onboard an aircraft to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two, as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique, resulting in better accuracy. This, in turn, would minimize unnecessary activation of Auto-GCAS during flight and maximize its contribution to fighter safety.
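The error analysis described here reduces, at its core, to comparing two elevation grids. A minimal sketch of that comparison is given below (Python/NumPy); it is not the binary-tree tip-tilt codec itself, and the tile data and choice of statistics are illustrative assumptions:

```python
import numpy as np

def compression_error_stats(original, compressed):
    """Summarize the error between an original DTED tile and its
    lossy-compressed reconstruction (both 2-D elevation arrays, metres)."""
    err = compressed.astype(float) - original.astype(float)
    return {
        "max_abs": float(np.abs(err).max()),
        "bias": float(err.mean()),           # negative bias under-reports terrain
        "rmse": float(np.sqrt((err ** 2).mean())),
        "p95_abs": float(np.percentile(np.abs(err), 95)),
    }

# Hypothetical tile: rough terrain plus a stand-in for compression error
rng = np.random.default_rng(0)
tile = rng.normal(1500.0, 200.0, (64, 64))
print(compression_error_stats(tile, tile + rng.normal(0.0, 3.0, tile.shape)))
```

Grouping such statistics by terrain type or sampling density would expose the factor dependence the study describes.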
Avoidable errors in dealing with anaphylactoid reactions to iodinated contrast media.
Segal, Arthur J; Bush, William H
2011-03-01
Contrast reactions are much less common today than in the past, principally because of the current and predominant use of low- and iso-osmolar contrast media compared with the prior use of high osmolality contrast media. As a result of the significantly diminished frequency, there are now fewer opportunities for physicians to recognize and appropriately treat such adverse reactions. In a review of the literature combined with our own clinical and legal experience, 12 potential errors were identified, and these are reviewed in detail so that they can be avoided by the physician-in-charge. Basic treatment considerations are presented along with a plan to systematize an approach to contrast reactions, simplify treatment options and plans, and schedule periodic drills.
Monte Carlo Analysis as a Trajectory Design Driver for the TESS Mission
NASA Technical Reports Server (NTRS)
Nickel, Craig; Lebois, Ryan; Lutz, Stephen; Dichmann, Donald; Parker, Joel
2016-01-01
The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.
NASA Astrophysics Data System (ADS)
Hernández, Mario R.; Francés, Félix
2015-04-01
One phase of the hydrological model implementation process that contributes significantly to the uncertainty of hydrological predictions is the calibration phase, in which values of the unknown model parameters are tuned by optimizing an objective function. An unsuitable error model (e.g. Standard Least Squares, SLS) introduces noise into the estimation of the parameters. The main sources of this noise are input errors and structural deficiencies of the hydrological model. The biased calibrated parameters thus cause the model divergence phenomenon, in which the error variance of the (spatially and temporally) forecasted flows far exceeds the error variance in the fitting period, and provoke the loss of part or all of the physical meaning of the modeled processes. In other words, they yield a calibrated hydrological model that works well, but not for the right reasons. In addition, an unsuitable error model yields an unreliable predictive uncertainty assessment. Hence, with the aim of preventing all these undesirable effects, this research focuses on the Bayesian joint inference (BJI) of both the hydrological and the error model parameters, considering a general additive (GA) error model that allows for correlation, non-stationarity (in variance and bias) and non-normality of model residuals. A conceptual distributed model called TETIS, with a particular split structure of the effective model parameters, was used as the hydrological model. Bayesian inference was performed with the aid of a Markov Chain Monte Carlo (MCMC) algorithm called DREAM-ZS, which quantifies the uncertainty of the hydrological and error model parameters by obtaining their joint posterior probability distribution, conditioned on the observed flows. The BJI methodology is a very powerful and reliable tool, but it must be used correctly; that is, if non-stationarity in error variance and bias is modeled, the Total Laws must be taken into account. The results of this research show that the application of BJI with a GA error model improves the robustness of the hydrological parameters (diminishing the model divergence phenomenon) and improves the reliability of the streamflow predictive distribution, compared with the results of an unsuitable error model such as SLS. Finally, the most likely prediction in a validation period shows a similar performance for both the BJI+GA and the SLS error models.
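To make the error-model idea concrete, the sketch below writes down the log-likelihood of one simple instantiation of a general additive error model, with bias, variance growing with the simulated flow, and AR(1) residual correlation (Python/NumPy; the exact parameterization used with TETIS and DREAM-ZS is not given in the abstract, so this form and its parameter names are assumptions):

```python
import numpy as np

def ga_error_loglik(obs, sim, params):
    """Conditional log-likelihood (given the first residual) of a simple
    general additive error model: residuals r_t = obs_t - sim_t have bias mu,
    heteroscedastic std sigma_t = s0 + s1*sim_t, and AR(1) correlation phi.
    Assumes s0 > 0 and |phi| < 1."""
    mu, s0, s1, phi = params
    sigma = s0 + s1 * sim                    # error std grows with predicted flow
    eta = (obs - sim - mu) / sigma           # standardized residuals
    innov = eta[1:] - phi * eta[:-1]         # AR(1) innovations, variance 1 - phi^2
    var = 1.0 - phi ** 2
    ll = -0.5 * np.sum(innov ** 2 / var + np.log(2.0 * np.pi * var))
    return ll - np.sum(np.log(sigma[1:]))    # Jacobian of the standardization
```

In a joint inference, this likelihood would be evaluated together with the hydrological parameters inside the MCMC sampler, so that mu, s0, s1 and phi are inferred alongside them.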
NASA Astrophysics Data System (ADS)
Wang, Guochao; Xie, Xuedong; Yan, Shuhua
2010-10-01
The principle of a dual-wavelength single-grating nanometer displacement measuring system, with long range, high precision and good stability, is presented. For nano-level high-precision displacement measurement, the error caused by a variety of adverse factors must be taken into account. In this paper, errors due to the non-ideal performance of the dual-frequency laser, including the linear error caused by wavelength instability and the non-linear error caused by elliptic polarization of the laser, are discussed and analyzed. On the basis of theoretical modeling, the corresponding error formulas are derived. Through simulation, the limit value of the linear error caused by wavelength instability is 2 nm, and on the assumption that Tx = 0.85 and Ty = 1 for the polarizing beam splitter (PBS), the limit values of the non-linear error caused by elliptic polarization are 1.49 nm, 2.99 nm and 4.49 nm when the non-orthogonal angle is 1°, 2° and 3°, respectively. The law of the error variation is analyzed based on different values of Tx and Ty.
A Framework for Human Microbiome Research
2012-06-14
determined that many components of data production and processing can contribute errors and artefacts. We investigated methods that avoid these errors and...protocol that ensured consistency in the high-throughput production. To maximize accuracy and consistency, protocols were evaluated primarily using a...future benefits, this resource may promote the development of novel prophylactic strategies such as the application of prebiotics and probiotics to
Thinly disguised contempt: a barrier to excellence.
Brown-Stewart, P
1987-04-01
Many elements in contemporary leadership and management convey contempt for employees. "Thinly disguised contempt," a concept introduced by Peters and Austin in A Passion For Excellence, explains many barriers to the achievement of excellence in corporations across disciplines. Health care executives and managers can learn from the errors of corporate management and avoid replicating these errors in the health care industry.
On P values and effect modification.
Mayer, Martin
2017-12-01
A crucial element of evidence-based healthcare is the sound understanding and use of statistics. As part of instilling sound statistical knowledge and practice, it seems useful to highlight instances of unsound statistical reasoning or practice, not merely in captious or vitriolic spirit, but rather, to use such error as a springboard for edification by giving tangibility to the concepts at hand and highlighting the importance of avoiding such error. This article aims to provide an instructive overview of two key statistical concepts: effect modification and P values. A recent article published in the Journal of the American College of Cardiology on side effects related to statin therapy offers a notable example of errors in understanding effect modification and P values, and although not so critical as to entirely invalidate the article, the errors still demand considerable scrutiny and correction. In doing so, this article serves as an instructive overview of the statistical concepts of effect modification and P values. Judicious handling of statistics is imperative to avoid muddying their utility. This article contributes to the body of literature aiming to improve the use of statistics, which in turn will help facilitate evidence appraisal, synthesis, translation, and application.
Hypothesis Testing Using Factor Score Regression
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2015-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
NASA Astrophysics Data System (ADS)
Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin
2016-12-01
This paper presents an online method for estimating the cutting error by analyzing internal sensor readings. The internal sensors of a numerical control (NC) machine tool are used to avoid installation problems. A mathematical model for estimating the cutting error was proposed to compute the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on the cutting theory of gears. To verify the effectiveness of the proposed model, it was tested in simulations and experiments on a gear generating grinding process. The cutting error of the gear was estimated, and the factors that induce cutting error were analyzed. The simulations and experiments verify that the proposed approach is an efficient way to estimate the cutting error of a work-piece during the machining process.
Rapid assessment of avoidable blindness in Papua New Guinea: a nationwide survey.
Lee, Ling; D'Esposito, Fabrizio; Garap, Jambi; Wabulembo, Geoffrey; Koim, Samuel Peter; Keys, Drew; Cama, Anaseini T; Limburg, Hans; Burnett, Anthea
2018-05-23
To estimate the prevalence and main causes of blindness and vision impairment in people aged 50 years and older in Papua New Guinea (PNG). National cross-sectional population-based survey in National Capital District (NCD), Highlands, Coastal and Islands regions. Adults aged 50 years and above were recruited from 100 randomly selected clusters. Each participant underwent monocular presenting and pinhole visual acuity (VA) assessment and lens examination. Those with pinhole VA <6/12 in either eye had a dilated fundus examination to determine the primary cause of reduced vision. Those with obvious lens opacity were interviewed on barriers to cataract surgery. A total of 4818 adults were examined. The age- and sex-adjusted prevalence of blindness (VA <3/60), severe vision impairment (SVI, VA <6/60 but ≥3/60), moderate vision impairment (MVI, VA <6/18 but ≥6/60) and early vision impairment (EVI, VA <6/12 but ≥6/18) was 5.6% (95% CI 4.9% to 6.3%), 2.9% (95% CI 2.5% to 3.4%), 10.9% (95% CI 9.9% to 11.9%) and 7.3% (95% CI 6.6% to 8.0%), respectively. The main cause of blindness, SVI and MVI was cataract, while uncorrected refractive error was the main cause of EVI. A significantly higher prevalence of blindness, SVI and MVI occurred in the Highlands compared with NCD. Across all regions, women had lower cataract surgical coverage and spectacle coverage than men. PNG has one of the highest reported prevalences of blindness globally. Cataract and uncorrected refractive error are the main causes, suggesting a need for increased accessible services with improved resources and advocacy for enhancing eye health literacy. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
The role of model dynamics in ensemble Kalman filter performance for chaotic systems
Ng, G.-H.C.; McLaughlin, D.; Entekhabi, D.; Ahanin, A.
2011-01-01
The ensemble Kalman filter (EnKF) is susceptible to losing track of observations, or 'diverging', when applied to large chaotic systems such as atmospheric and ocean models. Past studies have demonstrated the adverse impact of sampling error during the filter's update step. We examine how system dynamics affect EnKF performance, and whether the absence of certain dynamic features in the ensemble may lead to divergence. The EnKF is applied to a simple chaotic model, and ensembles are checked against singular vectors of the tangent linear model, corresponding to short-term growth, and Lyapunov vectors, corresponding to long-term growth. Results show that the ensemble strongly aligns itself with the subspace spanned by unstable Lyapunov vectors. Furthermore, the filter avoids divergence only if the full linearized long-term unstable subspace is spanned. However, short-term dynamics also become important as non-linearity in the system increases. Non-linear movement prevents errors in the long-term stable subspace from decaying indefinitely. If these errors then undergo linear intermittent growth, a small ensemble may fail to properly represent all important modes, causing filter divergence. A combination of long- and short-term growth dynamics is thus critical to EnKF performance. These findings can help in developing practical robust filters based on model dynamics. © 2011 The Authors. Tellus A © 2011 John Wiley & Sons A/S.
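For readers unfamiliar with the filter itself, the sketch below shows the stochastic (perturbed-observation) EnKF analysis step in which the sampling error discussed above arises (Python/NumPy; this is a generic textbook form, not the exact configuration of this study):

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_err_std, rng):
    """Stochastic EnKF analysis step.
    ensemble: (n_state, n_members); obs: (n_obs,); H: (n_obs, n_state)."""
    n_obs, n_mem = len(obs), ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)
    # Sample covariances: with few members these are noisy, and that noise
    # is one driver of filter divergence in large chaotic systems
    Pxy = X @ HXp.T / (n_mem - 1)
    Pyy = HXp @ HXp.T / (n_mem - 1) + (obs_err_std ** 2) * np.eye(n_obs)
    K = Pxy @ np.linalg.inv(Pyy)             # Kalman gain
    perturbed = obs[:, None] + rng.normal(0.0, obs_err_std, (n_obs, n_mem))
    return ensemble + K @ (perturbed - HX)
```

The study's point is that even with a correct update step, performance hinges on whether the ensemble spans the unstable subspace of the model dynamics between updates.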
Differential Kinematics Of Contemporary Industrial Robots
NASA Astrophysics Data System (ADS)
Szkodny, T.
2014-08-01
The paper presents a simple method of avoiding singular configurations of contemporary industrial robot manipulators of such renowned companies as ABB, Fanuc, Mitsubishi, Adept, Kawasaki, COMAU and KUKA. To determine the singular configurations of these manipulators, a global form of description of the end-effector kinematics was prepared, relative to the other links. On the basis of this description, the formula for the Jacobian was defined in the end-effector coordinates. Next, a closed form of the determinant of the Jacobian was derived. From this formula, singular configurations, where the determinant's value equals zero, were determined. Additionally, geometric interpretations of these configurations were given and illustrated. For an exemplary manipulator, small corrections of the joint variables preventing the reduction of the Jacobian order were suggested, and an analysis of the positional errors caused by these corrections was presented.
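The singularity test itself is simple: evaluate the determinant of the Jacobian and flag configurations where it vanishes. The sketch below does this for a planar 2-link arm, a deliberately reduced stand-in for the 6-axis manipulators treated in the paper (Python/NumPy; the link lengths and tolerance are illustrative assumptions):

```python
import numpy as np

def planar_2link_jacobian(theta1, theta2, l1=0.5, l2=0.4):
    """Geometric Jacobian mapping joint rates to end-effector (x, y) velocity."""
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def is_singular(theta1, theta2, tol=1e-6):
    # Here det(J) = l1*l2*sin(theta2): the arm is singular when fully
    # stretched (theta2 = 0) or fully folded (theta2 = pi)
    return abs(np.linalg.det(planar_2link_jacobian(theta1, theta2))) < tol

print(is_singular(0.3, 0.0))   # True: stretched-out configuration
print(is_singular(0.3, 0.7))   # False
```

The paper's small joint-variable corrections amount to nudging the joint angles away from the zero set of det(J) while bounding the resulting positional error.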
Trends in restorative composites research: what is in the future?
Maas, Mariel Soeiro; Alania, Yvette; Natale, Livia Camargo; Rodrigues, Marcela Charantola; Watts, David Christopher; Braga, Roberto Ruggiero
2017-08-28
Clinical trials have identified secondary caries and bulk fracture as the main causes of composite restoration failure. As a measure to avoid frequent reinterventions for restoration replacement, composites with some sort of defense mechanism against biofilm formation and demineralization, as well as materials with lower susceptibility to crack propagation, are necessary. Also, restorative procedures with composites are very time-consuming and technically demanding, particularly concerning the application of the adhesive system. Therefore, together with bulk-fill composites, self-adhesive restorative composites could reduce operator error and chairside time. This literature review describes the current stage of development of remineralizing, antibacterial and self-healing composites. Also, an overview of the research on fiber-reinforced composites and self-adhesive composites, both introduced for clinical use in recent years, is presented.
[Classifications in forensic medicine and their logical basis].
Kovalev, A V; Shmarov, L A; Ten'kov, A A
2014-01-01
The objective of the present study was to characterize the main requirements for the correct construction of classifications used in forensic medicine, with special reference to the errors that occur in the relevant text-books, guidelines, and manuals and the ways to avoid them. This publication continues the series of thematic articles of the authors devoted to the logical errors in the expert conclusions. The preparation of further publications is underway to report the results of the in-depth analysis of the logical errors encountered in expert conclusions, text-books, guidelines, and manuals.
Human error and the search for blame
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1989-01-01
Human error is a frequent topic in discussions about risks in using computer systems. A rational analysis of human error leads through the consideration of mistakes to standards that designers use to avoid mistakes that lead to known breakdowns. The irrational side, however, is more interesting. It conditions people to think that breakdowns are inherently wrong and that there is ultimately someone who is responsible. This leads to a search for someone to blame which diverts attention from: learning from the mistakes; seeing the limitations of current engineering methodology; and improving the discourse of design.
The finer points of writing and refereeing scientific articles.
Bain, Barbara J; Littlewood, Tim J; Szydlo, Richard M
2016-02-01
Writing scientific papers is a skill required by all haematologists. Many also need to be able to referee papers submitted to journals. These skills are not often formally taught and as a result may not be done well. We have reviewed published evidence of errors in these processes. Such errors may be ethical, scientific or linguistic, or may result from a lack of understanding of the processes. The objective of the review is, by highlighting errors, to help writers and referees to avoid them. © 2016 John Wiley & Sons Ltd.
42 CFR 431.992 - Corrective action plan.
Code of Federal Regulations, 2010 CFR
2010-10-01
... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...
42 CFR 431.992 - Corrective action plan.
Code of Federal Regulations, 2011 CFR
2011-10-01
... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...
Threat interferes with response inhibition.
Hartikainen, Kaisa M; Siiskonen, Anna R; Ogawa, Keith H
2012-05-09
A potential threat, such as a spider, captures attention and engages executive functions to adjust ongoing behavior and avoid danger. We and many others have reported slowed responses to neutral targets in the context of emotional distractors. This behavioral slowing has been explained in the framework of attentional competition for limited resources, with emotional stimuli prioritized. Alternatively, slowed performance could reflect the activation of avoidance/freezing-type motor behaviors associated with threat. Although the interaction of attention and emotion has been widely studied, little is known about the interaction between emotion and executive functions. We studied how threat-related stimuli (spiders) interact with executive performance and whether the interaction profile fits a resource competition model or avoidance/freezing-type motor behaviors. Twenty-one young healthy individuals performed a Go-NoGo visual discrimination reaction time (RT) task engaging several executive functions, with threat-related and emotionally neutral distractors. The threat-related distractors had no effect on the RT or the error rate in the Go trials. The NoGo error rate, reflecting failure in response inhibition, increased significantly with threat-related distractors in contrast to neutral distractors (P < 0.05). Thus, threat-related distractors temporarily impaired response inhibition. That threat-related distractors increased commission errors with no effect on RT does not suggest engagement of avoidance/freezing-type motor behaviors. The results fit in the framework of the resource competition model. A potential threat calls for evaluation of affective significance as well as inhibition of undue emotional reactivity. We suggest that these functions tax executive resources and may render other executive functions, such as response inhibition, temporarily compromised when the demands for resources exceed availability.
Recovery from unusual attitudes: HUD vs. back-up display in a static F/A-18 simulator.
Huber, Samuel W
2006-04-01
Spatial disorientation (SD) remains one of the most important causes of fatal fighter aircraft accidents. The aim of this study was to give a recommendation for the use of the head-up display (HUD) or back-up attitude directional indicator (ADI) in a state of spatial disorientation based on the respective performance in an unusual attitude recovery task. Seven fighter pilots joining a conversion course to the F/A-18 participated in this study. Flight time will be presented as range (and mean in parentheses). Total military flight experience of the subjects was 835-1759 h (1412 h). Flight time on the F/A-18 was 41-123 h (70 h). The study was performed in a fixed base F/A-18D Weapons Tactics Trainer. We tested the recovery from 11 unusual attitudes and analyzed decision time (DT), total recovery time (TRT), and error rates for the HUD or the back-up ADI. We found no differences regarding either reaction times or error rates. For the HUD we found a DT (mean +/- SD) of 1.3 +/- 0.4 s, a TRT of 9.1 +/- 4.1 s, and an error rate of 29%. For the ADI the respective values were a DT of 1.4 +/- 0.4 s, a TRT of 8.3 +/- 3.8 s, and an error rate of 27%. Unusual attitude recoveries are performed equally well using the HUD or the back-up ADI. Switching from one instrument to the other during recovery should be avoided since it would probably result in a loss of time without benefit.
Niemann, Dorothee; Bertsche, Astrid; Meyrath, David; Koepf, Ellen D; Traiser, Carolin; Seebald, Katja; Schmitt, Claus P; Hoffmann, Georg F; Haefeli, Walter E; Bertsche, Thilo
2015-01-01
To prevent medication errors in drug handling in a paediatric ward. One in five preventable adverse drug events in hospitalised children is caused by medication errors. Errors in drug prescription have been studied frequently, but data regarding drug handling, including drug preparation and administration, are scarce. A three-step intervention study including a monitoring procedure was used to detect and prevent medication errors in drug handling. After approval by the ethics committee, pharmacists monitored drug handling by nurses on an 18-bed paediatric ward in a university hospital prior to and following each intervention step. They also conducted a questionnaire survey aimed at identifying knowledge deficits. Each intervention step targeted different causes of errors. The handout mainly addressed knowledge deficits, the training course addressed errors caused by rule violations and slips, and the reference book addressed knowledge-, memory- and rule-based errors. The number of patients who were subjected to at least one medication error in drug handling decreased from 38/43 (88%) to 25/51 (49%) following the third intervention, and the overall frequency of errors decreased from 527 errors in 581 processes (91%) to 116/441 (26%). The issue of the handout reduced medication errors caused by knowledge deficits regarding, for instance, the correct 'volume of solvent for IV drugs' from 49% to 25%. Paediatric drug handling is prone to errors. A three-step intervention effectively decreased the high frequency of medication errors by addressing the diversity of their causes. Worldwide, nurses are in charge of drug handling, which constitutes an error-prone but often-neglected step in drug therapy. Detection and prevention of errors in daily routine are necessary for a safe and effective drug therapy. Our three-step intervention reduced errors and is suitable to be tested in other wards and settings. © 2014 John Wiley & Sons Ltd.
Economic and health risk trade-offs of swim closures at a Lake Michigan beach
Rabinovici, Sharyl M.; Bernknopf, Richard L.; Wein, Anne M.; Coursey, Don L.; Whitman, Richard L.
2004-01-01
This paper presents a framework for analyzing the economic, health, and recreation implications of swim closures related to high fecal indicator bacteria (FIB) levels. The framework utilizes benefit transfer policy analysis to provide a practical procedure for estimating the effectiveness of recreational water quality policies. Evaluation criteria include the rates of intended and unintended management outcomes, whether the chosen protocols generate closures with positive net economic benefits to swimmers, and the number of predicted illnesses the policy is able to prevent. We demonstrate the framework through a case study of a Lake Michigan freshwater beach using existing water quality and visitor data from 1998 to 2001. We find that a typical closure causes a net economic loss among would-be swimmers totaling $1274–$37,030 per day, depending on the value assumptions used. Unnecessary closures, caused by high indicator variability and a 24-h time delay between when samples are taken and when the management decision can be made, occurred on 14 (12%) of 118 monitored summer days. Days with high FIB levels when the swim area is open are also common but do relatively little economic harm in comparison. Also, even if the closure policy could be implemented daily and perfectly without error, only about 42% of predicted illnesses would be avoided. These conclusions were sensitive to the relative values and risk preferences that swimmers have for recreation access and for avoiding health effects, suggesting a need for further study of the impacts of recreational water quality policies on individuals.
Blindness and cataract surgical services in Atsinanana region, Madagascar.
Randrianaivo, Jean-Baptiste; Anholt, R Michele; Tendrisoa, Diarimirindra Lazaharivony; Margiano, Nestor Jean; Courtright, Paul; Lewallen, Susan
2014-01-01
To assess the prevalence and causes of avoidable blindness in Atsinanana Region, Madagascar, with the Rapid Assessment of Avoidable Blindness (RAAB) survey. We analyzed hospital records to supplement the findings for public health care planning. Only villages within a two-hour walk of a road, covering about half of the population of Atsinanana, were included. Seventy-two villages were selected by population-proportional-to-size sampling. In each village, compact segment sampling was used to select 50 people over age 50 for eye examination using standard RAAB methods. Records at the two hospitals providing cataract surgery in the region were analyzed for information on patients who underwent cataract surgery in 2010. The cataract incidence rate and target cataract surgery rate (CSR) were modeled from the age-specific prevalence of cataract. The participation rate was 87% and the sample prevalence of blindness was 1.96%. Cataract was responsible for 64% and 85.7% of blindness and severe visual impairment, respectively. Visual impairment was due to cataract (69.4%) and refractive error (14.1%). There was a strong positive correlation between the cataract surgical rate by district and the proportion of people living within two hours of a road. There were marked differences in the profiles of the cataract patients at the two facilities. The estimated incidence of cataract at the 6/18 level was 2.4 eyes per 100 people over age 50 per year. Although the survey included only people with reasonable access, the main cause of visual impairment was still cataract. The incidence of cataract is such that it ought to be possible to eliminate it as a cause of visual impairment, but changes in service delivery at hospitals and strategies to improve access will be necessary for this change.
Dai, Yanyan; Kim, YoonGu; Wee, SungGil; Lee, DongHa; Lee, SukGyu
2015-05-01
This paper describes a switching formation strategy for multi-robot systems with velocity constraints to avoid and cross obstacles. In the strategy, a leader robot plans a safe path using the geometric obstacle avoidance control method (GOACM). By calculating new desired distances and bearing angles with respect to the leader robot, the follower robots switch into a safe formation. To handle collision avoidance, a novel robot priority model, based on the desired distance and bearing angle between the leader and follower robots, is designed for the obstacle avoidance process. The adaptive tracking control algorithm guarantees that the trajectory and velocity tracking errors converge to zero. Simulation and experimental results demonstrate the validity of the proposed methods: the multi-robot system effectively forms and switches formation while avoiding obstacles without collisions. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Using warnings to reduce categorical false memories in younger and older adults.
Carmichael, Anna M; Gutchess, Angela H
2016-07-01
Warnings about memory errors can reduce their incidence, although past work has largely focused on associative memory errors. The current study sought to explore whether warnings could be tailored to specifically reduce false recall of categorical information in both younger and older populations. Before encoding word pairs designed to induce categorical false memories, half of the younger and older participants were warned to avoid committing these types of memory errors. Older adults who received a warning committed fewer categorical memory errors, as well as other types of semantic memory errors, than those who did not receive a warning. In contrast, young adults' memory errors did not differ for the warning versus no-warning groups. Our findings provide evidence for the effectiveness of warnings at reducing categorical memory errors in older adults, perhaps by supporting source monitoring, reduction in reliance on gist traces, or through effective metacognitive strategies.
What do reviewers look for in an original research article?
Shankar, P R
2012-01-01
This article describes common errors committed by authors, especially those whose first language is not English, while writing an original research article, and how to avoid these errors and improve the chances of publication. The article may resemble instructions to authors, but the tips are given from a reviewer's perspective. The abstract is the section of the paper most commonly read, and care should be taken while writing it. Keywords are used to retrieve articles in searches, and use of words from the MeSH database is recommended. The introduction describes work already conducted in the particular area and briefly mentions how the manuscript will add to the existing knowledge. The methods section describes how the study was conducted; it is written in the past tense and is often the first part of the paper to be written. The results describe what was found in the study and are usually written after the methods section. The discussion compares the study with the literature and helps to put the study findings in context. The conclusions should be based on the results of the study. The references should be written strictly according to the journal format. Language should be simple, the active voice should be used, and jargon avoided. Avoid quoting directly from reference articles; paraphrase these in your own words to avoid plagiarism.
Recovering from execution errors in SIPE
NASA Technical Reports Server (NTRS)
Wilkins, D. E.
1987-01-01
In real-world domains (e.g., a mobile robot environment), things do not always proceed as planned, so it is important to develop better execution-monitoring techniques and replanning capabilities. These capabilities in the SIPE planning system are described. The motivation behind SIPE is to place enough limitations on the representation so that planning can be done efficiently, while retaining sufficient power to still be useful. This work assumes that new information given to the execution monitor is in the form of predicates, thus avoiding the difficult problem of how to generate these predicates from information provided by sensors. The replanning module presented here takes advantage of the rich structure of SIPE plans and is intimately connected with the planner, which can be called as a subroutine. This allows the use of SIPE's capabilities to determine efficiently how unexpected events affect the plan being executed and, in many cases, to retain most of the original plan by making changes in it to avoid problems caused by these unexpected events. SIPE is also capable of shortening the original plan when serendipitous events occur. A general set of replanning actions is presented along with a general replanning capability that has been implemented by using these actions.
Optical flow estimation on image sequences with differently exposed frames
NASA Astrophysics Data System (ADS)
Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin
2015-09-01
Optical flow (OF) methods are used to estimate dense motion information between consecutive frames in image sequences. In addition to the specific OF estimation method itself, the quality of the input image sequence is of crucial importance to the quality of the resulting flow estimates. For instance, lack of texture in image frames caused by saturation of the camera sensor during exposure can significantly deteriorate performance. An approach to avoid this negative effect is to use different camera settings when capturing the individual frames. We provide a framework for OF estimation on such sequences that contain differently exposed frames. Information from multiple frames is combined into a total cost functional such that the lack of an active data term for saturated image areas is avoided. Experimental results demonstrate that using alternate camera settings to capture the full dynamic range of an underlying scene can clearly improve the quality of flow estimates. When saturation of image data is significant, the proposed methods show superior performance in terms of lower endpoint errors of the flow vectors compared with a set of baseline methods. Furthermore, we provide some qualitative examples of how and when our method should be used.
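One simple way to realize "no active data term for saturated areas" is a per-pixel confidence weight that multiplies the brightness-constancy term of each frame pair. The sketch below shows such a weight (Python/NumPy; the thresholds and ramp width are illustrative assumptions, not values from the paper):

```python
import numpy as np

def saturation_weight(frame, low=0.02, high=0.98, ramp=0.02):
    """Per-pixel data-term confidence for a frame normalized to [0, 1]:
    0 where intensity is clipped at either end of the sensor range,
    1 in well-exposed areas, with a soft ramp in between."""
    w = np.clip((frame - low) / ramp, 0.0, 1.0)       # under-exposed -> 0
    w *= np.clip((high - frame) / ramp, 0.0, 1.0)     # over-exposed  -> 0
    return w
```

In a total cost functional over differently exposed frames, each pair's data term is scaled by such weights, so saturated regions borrow information from the better-exposed frames and from the smoothness term.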
Chiolero, Arnaud; Paccaud, Fred; Aujesky, Drahomir; Santschi, Valérie; Rodondi, Nicolas
2015-01-01
Overdiagnosis is the diagnosis of an abnormality that is not associated with a substantial health hazard and that patients derive no benefit from being aware of. It is neither a misdiagnosis (diagnostic error), nor a false positive result (positive test in the absence of a real abnormality). It mainly results from screening, use of increasingly sensitive diagnostic tests, incidental findings on routine examinations, and widening diagnostic criteria to define a condition requiring an intervention. The blurring boundaries between risk and disease, physicians' fear of missing a diagnosis and patients' need for reassurance are further causes of overdiagnosis. Overdiagnosis often implies procedures to confirm or exclude the presence of the condition and is by definition associated with useless treatments and interventions, generating harm and costs without any benefit. Overdiagnosis also diverts healthcare professionals from caring about other health issues. Preventing overdiagnosis requires increasing the awareness of healthcare professionals and patients about its occurrence, the avoidance of unnecessary and untargeted diagnostic tests, and the avoidance of screening without demonstrated benefits. Furthermore, systematically accounting for the harms and benefits of screening and diagnostic tests and determining risk factor thresholds based on the expected absolute risk reduction would also help prevent overdiagnosis.
Sessler, Daniel I; Shafer, Steven
2018-01-01
Clear writing makes manuscripts easier to understand. Clear writing enhances research reports, increasing clinical adoption and scientific impact. We discuss styles and organization to help junior investigators present their findings and avoid common errors.
NASA Astrophysics Data System (ADS)
Yoshino, R.; Nakamura, Y.; Neyatani, Y.
1997-08-01
In JT-60U a vertical displacement event (VDE) is observed during slow plasma current quench (Ip quench) for a vertically elongated divertor plasma with a single null. The VDE is generated by an error in the feedback control of the vertical position of the plasma current centre (ZJ). It has been completely avoided by improving the accuracy of the ZJ measurement in real time. Furthermore, plasma-wall interaction has been avoided successfully during slow Ip quench owing to the good performance of the plasma equilibrium control system.
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1980-01-01
Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct the underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing the propagation of human error.
The influence of an ITER-like wall on disruptions at JET
NASA Astrophysics Data System (ADS)
de Vries, Peter
2013-10-01
Disruptions are a key issue for tokamaks such as ITER because the fast release of the high thermal and magnetic energies will result in large forces and heat loads. Hence, finding methods to avoid them or mitigate their impact is vital. The recent replacement of carbon tiles with a metallic ITER-like wall (ILW) has greatly increased the significance of disruptions for JET operations. This paper summarizes how the metallic wall influenced the disruption physics itself and the causes of disruptions. Tolerable heat loads on the ILW are reduced compared with the carbon wall because of the potential for melting. This is exacerbated by the fact that with the ILW significantly less energy is radiated during the disruption, so more energy is conducted to the wall. The lower radiation and thus higher temperatures also slow down the current decay, yielding larger vessel forces. Mitigation by massive gas injection had to be applied routinely in order to operate JET safely with the new wall. The start of operations with the ILW showed a marked rise in the average disruption rate from 3.4% to 10%, although in the last two weeks H-mode operation with a disruption rate of only 3.3% was achieved. The increased disruption rate can be attributed to the influence of the new wall on plasma operation and control, requiring adjustments of the established carbon-wall scenarios. A detailed survey of disruption causes is presented, showing the improvements made to avoid various disruption classes, but also indicating the disruption types responsible for the enhanced disruption rate. The latter can be mainly attributed to disruptions due to too high core radiation, but also to density control issues and error-field locked modes. Detailed technical and physics understanding of disruption causes is essential for devising optimum strategies to avoid or mitigate these events. This research was funded partly by the European Communities under the contract of Association between EURATOM and FOM, and was carried out within the framework of EFDA. The views and opinions expressed herein do not necessarily reflect those of the European Commission.
Potential effects of reward and loss avoidance in overweight adolescents.
Reyes, Sussanne; Peirano, Patricio; Luna, Beatriz; Lozoff, Betsy; Algarín, Cecilia
2015-08-01
Reward system and inhibitory control are brain functions that exert an influence on eating behavior regulation. We studied the differences in inhibitory control and sensitivity to reward and loss avoidance between overweight/obese and normal-weight adolescents. We assessed 51 overweight/obese and 52 normal-weight 15-y-old Chilean adolescents. The groups were similar regarding sex and intelligence quotient. Using Antisaccade and Incentive tasks, we evaluated inhibitory control and the effect of incentive trials (neutral, loss avoidance, and reward) on generating correct and incorrect responses (latency and error rate). Compared to normal-weight group participants, overweight/obese adolescents showed shorter latency for incorrect antisaccade responses (186.0 (95% CI: 176.8-195.2) vs. 201.3 ms (95% CI: 191.2-211.5), P < 0.05) and better performance reflected by lower error rate in incentive trials (43.6 (95% CI: 37.8-49.4) vs. 53.4% (95% CI: 46.8-60.0), P < 0.05). Overweight/obese adolescents were more accurate on loss avoidance (40.9 (95% CI: 33.5-47.7) vs. 49.8% (95% CI: 43.0-55.1), P < 0.05) and reward (41.0 (95% CI: 34.5-47.5) vs. 49.8% (95% CI: 43.0-55.1), P < 0.05) compared to neutral trials. Overweight/obese adolescents showed shorter latency for incorrect responses and greater accuracy in reward and loss avoidance trials. These findings could suggest that an imbalance of inhibition and reward systems influence their eating behavior.
Zainudin, Suhaila; Arif, Shereena M.
2017-01-01
Gene regulatory network (GRN) reconstruction is the process of identifying regulatory gene interactions from experimental data through computational analysis. One of the main reasons for the reduced performance of previous GRN methods has been inaccurate prediction of cascade motifs. Cascade error is defined as the wrong prediction of cascade motifs, where an indirect interaction is misinterpreted as a direct interaction. Despite the active research on various GRN prediction methods, discussion of specific methods to solve problems related to cascade errors is still lacking. In fact, the experiments conducted by past studies were not specifically geared towards proving the ability of GRN prediction methods to avoid the occurrence of cascade errors. Hence, this research aims to propose Multiple Linear Regression (MLR) to infer GRNs from gene expression data and to avoid wrongly inferring an indirect interaction (A → B → C) as a direct interaction (A → C). Since the number of observations in the real experimental datasets was far less than the number of predictors, some predictors were eliminated by extracting random subnetworks from global interaction networks via an established extraction method. In addition, the experiment was extended to assess the effectiveness of MLR in dealing with cascade error, using a novel experimental procedure proposed in this work. The experiment revealed that the number of cascade errors was very small. Apart from that, the Belsley collinearity test proved that multicollinearity greatly affected the datasets used in this experiment. All the tested subnetworks obtained satisfactory results, with AUROC values above 0.5. PMID:28250767
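A minimal sketch of the MLR inference step is given below (Python with scikit-learn): each gene is regressed jointly on all other genes, so the partial coefficient of A on C shrinks when B already explains C, which is the mechanism for avoiding cascade errors. The coefficient threshold and variable names are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def infer_grn_mlr(expr, gene_names, coef_threshold=0.1):
    """Infer directed edges by multiple linear regression.
    expr: (n_samples, n_genes) expression matrix. Because each target is
    regressed on all candidate regulators jointly, an indirect regulator
    (A -> B -> C) tends to receive a small partial coefficient for C."""
    edges = []
    for j in range(expr.shape[1]):
        predictors = np.delete(expr, j, axis=1)       # all genes except target j
        model = LinearRegression().fit(predictors, expr[:, j])
        regulators = [g for k, g in enumerate(gene_names) if k != j]
        for reg, w in zip(regulators, model.coef_):
            if abs(w) > coef_threshold:
                edges.append((reg, gene_names[j], w))
    return edges
```

Restricting the predictor set to an extracted subnetwork, as the study does, keeps the number of predictors below the number of observations so the regression stays well posed.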
Years of Life Lost (YLL) in Colombia 1998-2011: Overall and Avoidable Causes of Death Analysis
Castillo-Rodríguez, Liliana; Díaz-Jiménez, Diana; Castañeda-Orjuela, Carlos; De la Hoz-Restrepo, Fernando
2015-01-01
Objective: To estimate the Years of Life Lost (YLL) for overall and avoidable causes of death (CoD) in Colombia for the period 1998-2011. Methods: From the deaths reported to the Colombian mortality database during 1998-2011, we classified deaths from avoidable causes. With the reference life table of the Global Burden of Disease (GBD) 2010 study, we estimated the overall YLL and the YLL due to avoidable causes. Calculations were performed using the difference between life expectancy and the age at death. Results are reported by cause-of-death group, event, sex, year and department. A comparative analysis between the number of deaths and YLL was carried out. Results: A total of 83,856,080 YLL were calculated in Colombia during the period 1998-2011, 75.9% of them due to avoidable CoD. The year 2000 reported the highest number of YLL from both overall and avoidable CoD. The departments with the highest YLL rates were Caquetá, Guaviare, Arauca, Meta, and Risaralda. In men, intentional injuries and cardiovascular and circulatory diseases caused the largest losses, while in women YLL were mainly due to cardiovascular and circulatory diseases. Conclusions: Public health priorities should focus on preventing YLL due to premature death, with interventions differentiated by sex. PMID:25942009
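The YLL computation described in the Methods reduces to a table lookup and a sum. A minimal sketch, assuming death records carry a cause and an age at death and that the reference table maps age to remaining life expectancy (Python; all names and values are illustrative):

```python
from collections import defaultdict

def total_yll(deaths, remaining_life_expectancy):
    """Sum YLL by cause. deaths: iterable of (cause, age_at_death);
    remaining_life_expectancy: dict age -> expected remaining years,
    e.g. taken from the GBD 2010 reference life table."""
    yll = defaultdict(float)
    for cause, age in deaths:
        yll[cause] += max(remaining_life_expectancy.get(age, 0.0), 0.0)
    return dict(yll)

records = [("intentional injuries", 25), ("cardiovascular", 60)]
table = {25: 61.2, 60: 27.8}      # hypothetical excerpt of a reference table
print(total_yll(records, table))  # {'intentional injuries': 61.2, 'cardiovascular': 27.8}
```

Stratifying the records by sex, year and department before summing reproduces the breakdowns reported in the Results.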
Dukić, Lora; Kopčinović, Lara Milevoj; Dorotić, Adrijana; Baršić, Ivana
2016-01-01
Blood gas analysis (BGA) is exposed to risks of errors caused by improper sampling, transport and storage conditions. The Clinical and Laboratory Standards Institute (CLSI) generated documents with recommendations for avoidance of potential errors caused by sample mishandling. Two main documents related to BGA issued by the CLSI are GP43-A4 (former H11-A4) Procedures for the collection of arterial blood specimens; approved standard – fourth edition, and C46-A2 Blood gas and pH analysis and related measurements; approved guideline – second edition. Practices related to processing of blood gas samples are not standardized in the Republic of Croatia. Each institution has its own protocol for ordering, collection and analysis of blood gases. Although many laboratories use state of the art analyzers, still many preanalytical procedures remain unchanged. The objective of the Croatian Society of Medical Biochemistry and Laboratory Medicine (CSMBLM) is to standardize the procedures for BGA based on CLSI recommendations. The Working Group for Blood Gas Testing as part of the Committee for the Scientific Professional Development of the CSMBLM prepared a set of recommended protocols for sampling, transport, storage and processing of blood gas samples based on relevant CLSI documents, relevant literature search and on the results of Croatian survey study on practices and policies in acid-base testing. Recommendations are intended for laboratory professionals and all healthcare workers involved in blood gas processing. PMID:27812301
Economic turmoil, new administration to affect revenue cycle in 2009.
2009-01-01
Healthcare revenue cycle leaders will face some pressing issues in 2009, including continuing economic turmoil, increasing numbers of underinsured patients, avoiding unreimbursable medical errors, and implementation of ICD-10.
Narayanan, Neethu; Gupta, Suman; Gajbhiye, V T; Manjaiah, K M
2017-04-01
A carboxymethyl cellulose-nano organoclay composite (nano montmorillonite modified with 35-45 wt% dimethyl dialkyl (C14-C18) amine (DMDA)) was prepared by the solution intercalation method. The prepared composite was characterized by infrared spectroscopy (FTIR), X-ray diffraction (XRD) and scanning electron microscopy (SEM). The composite was evaluated for its sorption efficiency for the pesticides atrazine, imidacloprid and thiamethoxam. The sorption data were fitted to Langmuir and Freundlich isotherms using linear and nonlinear methods. The linear regression method suggested the best fit of the sorption data to the Type II Langmuir and Freundlich isotherms. In order to avoid the bias resulting from linearization, seven different error parameters were also analyzed by the nonlinear regression method. The nonlinear error analysis suggested that the sorption data fitted the Langmuir model better than the Freundlich model. The maximum sorption capacity, Q0 (μg/g), was highest for imidacloprid (2000), followed by thiamethoxam (1667) and atrazine (1429). The study suggests that the coefficient of determination of linear regression alone cannot be used for comparing the fit of the Langmuir and Freundlich models, and nonlinear error analysis needs to be done to avoid inaccurate results. Copyright © 2017 Elsevier Ltd. All rights reserved.
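The modeling choice discussed here, nonlinear fits judged by several error functions rather than linearized fits judged by the coefficient of determination alone, can be sketched as follows (Python with SciPy; the data points and starting values are hypothetical, and only the Q0 magnitudes echo the abstract):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, Q0, b):
    """Langmuir isotherm: q = Q0*b*C / (1 + b*C)."""
    return Q0 * b * C / (1.0 + b * C)

def freundlich(C, Kf, n):
    """Freundlich isotherm: q = Kf * C**(1/n)."""
    return Kf * C ** (1.0 / n)

# Hypothetical equilibrium data (C: solution concentration, q: amount sorbed, ug/g)
C = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
q = np.array([120.0, 230.0, 480.0, 800.0, 1150.0, 1600.0])

for model, p0 in ((langmuir, (2000.0, 0.05)), (freundlich, (100.0, 2.0))):
    popt, _ = curve_fit(model, C, q, p0=p0, maxfev=10000)
    resid = q - model(C, *popt)
    sse = float(np.sum(resid ** 2))   # one of several candidate error functions
    print(model.__name__, popt, sse)
```

Comparing SSE (or other error functions) across models fitted in their native nonlinear form avoids the bias that linearization introduces into the error structure.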
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gladstone, D. J.; Li, S.; Jarvis, L. A.
2011-07-15
Purpose: The authors hereby notify the Radiation Oncology community of a potentially lethal error due to improper implementation of linear units of measure in a treatment planning system. The authors report an incident in which a patient was nearly mistreated during a stereotactic radiotherapy procedure because the radiation therapy treatment planning system reported stereotactic coordinates in units of centimeters rather than millimeters. The authors suggest a method to detect such errors during treatment planning so they are caught and corrected before the patient is positioned for treatment on the treatment machine. Methods: Using pretreatment imaging, the authors found that stereotactic coordinates are reported with improper linear units by a treatment planning system. The authors have implemented a redundant, independent method of stereotactic coordinate calculation. Results: Implementation of a double check of stereotactic coordinates via redundant, independent calculation is simple and accurate. Use of this technique will avoid any future error in stereotactic treatment coordinates due to improper linear units, transcription, or other similar errors. Conclusions: The authors recommend an independent double check of stereotactic treatment coordinates during the treatment planning process in order to avoid potential mistreatment of patients.
Medical errors; causes, consequences, emotional response and resulting behavioral change
Bari, Attia; Khan, Rehan Ahmed; Rathore, Ahsan Waheed
2016-01-01
Objective: To determine the causes of medical errors, the emotional and behavioral responses of pediatric medicine residents to their medical errors, and the resulting behavior changes affecting their future training. Methods: One hundred thirty postgraduate residents were included in the study. Residents were asked to complete a questionnaire about their errors and responses to their errors in three domains: emotional response, learning behavior and disclosure of the error. The names of the participants were kept confidential. Data were analyzed using SPSS version 20. Results: A total of 130 residents were included. The majority, 128 (98.5%), described some form of error: 24 (19%) were serious errors, 63 (48%) minor errors, and 24 (19%) near misses; 2 (2%) had never encountered an error, and 17 (12%) did not mention the type of error but mentioned causes and consequences. Only 73 (57%) residents disclosed medical errors to their senior physician, and disclosure to the patient's family was negligible at 15 (11%). Fatigue due to long duty hours (85, 65%), inadequate experience (66, 52%), inadequate supervision (58, 48%) and complex cases (58, 45%) were common causes of medical errors. Negative emotions were common and were significantly associated with lack of knowledge (p=0.001), missing warning signs (p<0.001), not seeking advice (p=0.003) and procedural complications (p=0.001). Medical errors had a significant impact on residents' behavior: 119 (93%) residents became more careful, 109 (86%) increasingly sought advice from seniors, and 109 (86%) started paying more attention to details. Intrinsic causes of errors were significantly associated with increased information-seeking behavior and vigilance (p=0.003 and p=0.01, respectively). Conclusion: Medical errors committed by residents are inadequately disclosed to senior physicians and result in negative emotions, but there was a positive change in the residents' behavior, which resulted in improvement in their future training and patient care. PMID:27375682
Trajectory specification for high capacity air traffic control
NASA Technical Reports Server (NTRS)
Paielli, Russell A. (Inventor)
2010-01-01
Method and system for analyzing and processing information on one or more aircraft flight paths, using a four-dimensional coordinate system including three Cartesian or equivalent coordinates (x, y, z) and a fourth coordinate δ that corresponds to a distance estimated along a reference flight path to the nearest reference path location corresponding to the present location of the aircraft. Use of the coordinate δ, rather than elapsed time t, avoids coupling of along-track error into aircraft altitude and reduces the effects of errors on an aircraft landing site. Along-track, cross-track and/or altitude errors are estimated and compared with a permitted error bounding space surrounding the reference flight path.
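The sketch below illustrates how such a fourth coordinate can be computed for a simplified planar polyline reference path: δ is the arc length to the nearest point on the path, and the cross-track error is the perpendicular distance to that point. This is a hedged illustration of the idea, not the patented method.

```python
# Illustrative computation of the delta coordinate for a planar polyline path.
import numpy as np

def delta_and_cross_track(path_xy, pos_xy):
    """Return (delta, cross_track) for a polyline reference path."""
    best = (float("inf"), 0.0)   # (cross-track distance, arc length)
    s0 = 0.0                     # arc length at start of current segment
    for a, b in zip(path_xy[:-1], path_xy[1:]):
        seg = b - a
        seg_len = np.linalg.norm(seg)
        # Projection of the position onto the segment, clamped to [0, 1].
        t = np.clip(np.dot(pos_xy - a, seg) / seg_len**2, 0.0, 1.0)
        nearest = a + t * seg
        d = np.linalg.norm(pos_xy - nearest)
        if d < best[0]:
            best = (d, s0 + t * seg_len)
        s0 += seg_len
    cross_track, delta = best
    return delta, cross_track

path = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0]])
print(delta_and_cross_track(path, np.array([4.0, 1.5])))  # -> (4.0, 1.5)
```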
Intrusion errors in visuospatial working memory performance.
Cornoldi, Cesare; Mammarella, Nicola
2006-02-01
This study tested the hypothesis that failure in active visuospatial working memory tasks involves a difficulty in avoiding intrusions due to information that is already activated. Two experiments are described, in which participants were required to process several series of locations on a 4 x 4 matrix and then to produce only the final location of each series. Results revealed a higher number of errors due to already activated locations (intrusions) compared with errors due to new locations (inventions). Moreover, when participants were required to pay extra attention to some irrelevant (non-final) locations by tapping on the table, intrusion errors increased. Results are discussed in terms of current models of working memory functioning.
Errors in imaging of traumatic injuries.
Scaglione, Mariano; Iaselli, Francesco; Sica, Giacomo; Feragalli, Beatrice; Nicola, Refky
2015-10-01
The advent of multi-detector computed tomography (MDCT) has drastically improved the outcomes of patients with multiple traumatic injuries. However, there are still diagnostic challenges to be considered. A missed or delayed diagnosis in trauma patients can sometimes be related to perceptual or other non-visual factors, while other errors are due to poor technique or poor image quality. In order to avoid serious complications, it is important for the practicing radiologist to be cognizant of the most common types of errors. The objective of this article is to review the various types of errors in the evaluation of patients with multiple trauma injuries or polytrauma with MDCT.
Adverse Drug Events caused by Serious Medication Administration Errors
Sawarkar, Abhivyakti; Keohane, Carol A.; Maviglia, Saverio; Gandhi, Tejal K; Poon, Eric G
2013-01-01
OBJECTIVE To determine how often serious or life-threatening medication administration errors with the potential to cause patient harm (potential adverse drug events) result in actual patient harm (adverse drug events (ADEs)) in the hospital setting. DESIGN Retrospective chart review of clinical events that transpired following observed medication administration errors. BACKGROUND Medication errors are common at the medication administration stage for hospitalized patients. While many of these errors are considered capable of causing patient harm, it is not clear how often patients are actually harmed by these errors. METHODS In a previous study in which 14,041 medication administrations in an acute-care hospital were directly observed, investigators discovered 1271 medication administration errors, of which 133 had the potential to cause serious or life-threatening harm to patients and were considered serious or life-threatening potential ADEs. In the current study, clinical reviewers conducted detailed chart reviews of cases where a serious or life-threatening potential ADE occurred to determine whether an actual ADE developed following the potential ADE. Reviewers further assessed the severity of the ADE and its attribution to the administration error. RESULTS Ten (7.5% [95% C.I. 6.98, 8.01]) actual adverse drug events (ADEs) resulted from the 133 serious and life-threatening potential ADEs, of which six resulted in significant injury, three in serious injury, and one in life-threatening injury. Therefore, 4 (3% [95% C.I. 2.12, 3.6]) serious and life-threatening potential ADEs led to serious or life-threatening ADEs. Half of the ten actual ADEs were caused by dosage or monitoring errors for anti-hypertensives. The life-threatening ADE was caused by an error that was both a transcription and a timing error. CONCLUSION Potential ADEs at the medication administration stage can cause serious patient harm. Given previous estimates of 1.33 serious or life-threatening potential ADEs per 100 medication doses administered, in a hospital where 6 million doses are administered per year, about 4000 preventable ADEs would be attributable to medication administration errors annually. PMID:22791691
An Alternative Time Metric to Modified Tau for Unmanned Aircraft System Detect And Avoid
NASA Technical Reports Server (NTRS)
Wu, Minghong G.; Bageshwar, Vibhor L.; Euteneuer, Eric A.
2017-01-01
A new horizontal time metric, Time to Protected Zone, is proposed for use in the Detect and Avoid (DAA) systems carried by unmanned aircraft systems (UAS). This time metric has three advantages over the currently adopted time metric, modified tau: it corresponds to a physical event, it is linear with time, and it can be directly used to prioritize intruding aircraft. The protected zone defines an area around the UAS that can be a function of each intruding aircraft's surveillance measurement errors. Even with its advantages, the Time to Protected Zone depends explicitly on encounter geometry and may be more sensitive to surveillance sensor errors than modified tau. To quantify its sensitivity, a simulation of 972 encounters using realistic sensor models and a proprietary fusion tracker is performed. Two sensitivity metrics, the probability of time reversal and the average absolute time error, are computed for both the Time to Protected Zone and modified tau. Results show that the sensitivity of the Time to Protected Zone is comparable to that of modified tau if the dimensions of the protected zone are adequately defined.
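For a constant-velocity encounter and a circular protected zone, a time to the protected zone reduces to the smaller root of a quadratic. The sketch below assumes this simplified geometry; the paper's metric additionally lets the zone dimensions depend on sensor errors.

```python
# Illustrative "time to protected zone" for a constant-velocity encounter with
# a circular protected zone of radius R around the ownship. Geometry and names
# are assumptions for illustration, not the paper's exact metric.
import math

def time_to_protected_zone(rel_pos, rel_vel, radius):
    """Smallest t >= 0 with |rel_pos + t * rel_vel| = radius, else None.

    rel_pos, rel_vel: intruder position/velocity relative to ownship (x, y).
    Solves the quadratic |p + v t|^2 = R^2.
    """
    px, py = rel_pos
    vx, vy = rel_vel
    a = vx * vx + vy * vy
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py - radius * radius
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return None                         # zone is never entered
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # earlier of the two crossings
    return t if t >= 0.0 else None

# Intruder 5 NM ahead, closing at 0.05 NM/s, 1 NM protected radius -> 80 s.
print(time_to_protected_zone((5.0, 0.0), (-0.05, 0.0), 1.0))
```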
Locked-mode avoidance and recovery without momentum input
NASA Astrophysics Data System (ADS)
Delgado-Aparicio, L.; Rice, J. E.; Wolfe, S.; Cziegler, I.; Gao, C.; Granetz, R.; Wukitch, S.; Terry, J.; Greenwald, M.; Sugiyama, L.; Hubbard, A.; Hugges, J.; Marmar, E.; Phillips, P.; Rowan, W.
2015-11-01
Error-field-induced locked modes (LMs) have been studied in Alcator C-Mod at ITER-Bϕ, without NBI fueling and momentum input. Delay of the mode onset and locked-mode recovery have been successfully obtained without external momentum input using Ion Cyclotron Resonance Heating (ICRH). The use of external heating in sync with the error-field ramp-up resulted in a successful delay of the mode onset when PICRH > 1 MW, which demonstrates the existence of a power threshold to ``unlock'' the mode; in the presence of an error field, the L-mode discharge can transition into H-mode only when PICRH > 2 MW and at high densities, while also avoiding the density pump-out. The effects of ion heating observed on unlocking the core plasma may be due to ICRH-induced flows in the plasma boundary, or to modifications of plasma profiles that changed the underlying turbulence. This work was performed under US DoE contracts including DE-FC02-99ER54512 and others at MIT, DE-FG03-96ER-54373 at University of Texas at Austin, and DE-AC02-09CH11466 at PPPL.
Verstappen, Wim; Gaal, Sander; Bowie, Paul; Parker, Diane; Lainer, Miriam; Valderas, Jose M; Wensing, Michel; Esmail, Aneez
2015-09-01
Healthcare can cause avoidable serious harm to patients. Primary care is not an exception, and the relative lack of research in this area lends urgency to a better understanding of patient safety, the future research agenda and the development of primary care oriented safety programmes. To outline a research agenda for patient safety improvement in primary care in Europe and beyond. The LINNEAUS collaboration partners analysed existing research on epidemiology and classification of errors, diagnostic and medication errors, safety culture, and learning for and improving patient safety. We discussed ideas for future research in several meetings, workshops and congresses with LINNEAUS collaboration partners, practising GPs, researchers in this field, and policy makers. This paper summarizes and integrates the outcomes of the LINNEAUS collaboration on patient safety in primary care. It proposes a research agenda on improvement strategies for patient safety in primary care. In addition, it provides background information to help to connect research in this field with practising GPs and other healthcare workers in primary care. Future research studies should target specific primary care domains, using prospective methods and innovative methods such as patient involvement. PMID:26339841
1983-02-23
We propose to amend the 1978 Medicaid regulations on intermediate care facility services for the mentally retarded and persons with related conditions to correct the definition of "persons with related conditions". This definition, because of an inadvertent error in 1978, is currently tied to the definition of developmental disability in the Developmental Disabilities Assistance and Bill of Rights Act (DDABRA) as amended in 1978. The DDABRA, as amended, covers the mentally ill. The 1978 regulations intended to make "no substantive change" to prior Medicaid regulations which did not cover the mentally ill. The cross-reference to the DDABRA produced the unintended result of incorporating into Medicaid regulations the revision to the definition of the developmentally disabled created by the 1978 amendments to the DDABRA and may tend to cause confusion about the kind of care that is covered by the Medicaid program. Therefore, a correction of this drafting error is necessary. To avoid results of this kind in the future this proposal would establish a Medicaid definition of conditions related to mental retardation that would meet specific needs of the Medicaid program and would be independent of the definition of developmental disability in the DDABRA.
An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network.
Cheng, Jing; Xia, Linyuan
2016-08-31
Localization is an essential requirement in the increasing prevalence of wireless sensor network (WSN) applications. Reducing the computational complexity and communication overhead of WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on a modification of the step size, this approach enables the population to approach the global optimal solution rapidly, and the fitness of each solution is employed to build a mutation probability for avoiding local convergence. Further, the approach restricts the population to a certain range so as to prevent the energy consumption caused by insignificant searches. Extensive experiments were conducted to study the effects of parameters such as anchor density, node density and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted to realize the same localization task using the same network deployment. Experimental results prove that the proposed CS algorithm can not only increase the convergence rate but also reduce the average localization error compared with the standard CS algorithm and the Particle Swarm Optimization (PSO) algorithm. PMID:27589756
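A minimal sketch of the underlying approach is given below: standard Cuckoo Search (Lévy flights plus nest abandonment) applied to a 2-D range-based localization problem. The paper's specific step-size and mutation-probability modifications are not reproduced here, so the parameters and structure are illustrative assumptions.

```python
# Minimal Cuckoo Search sketch for 2-D node localization: estimate a node's
# position from noisy distances to anchors. Follows the standard CS of Yang &
# Deb with Levy flights; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
true_pos = np.array([18.0, 31.0])
meas = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.5, 4)

def fitness(p):  # squared range-error residual
    return np.sum((np.linalg.norm(anchors - p, axis=1) - meas) ** 2)

def levy(shape, beta=1.5):  # Mantegna's algorithm for Levy-stable steps
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, shape)
    v = rng.normal(0, 1, shape)
    return u / np.abs(v) ** (1 / beta)

n, pa, lo, hi = 15, 0.25, 0.0, 50.0
nests = rng.uniform(lo, hi, (n, 2))
for _ in range(200):
    fit = np.array([fitness(x) for x in nests])
    best = nests[fit.argmin()].copy()
    # Levy-flight moves biased toward the current best nest.
    cand = np.clip(nests + 0.01 * levy((n, 2)) * (nests - best), lo, hi)
    for i in range(n):
        if fitness(cand[i]) < fit[i]:
            nests[i] = cand[i]
    # Abandon a fraction pa of nests, never the one that started as best.
    drop = rng.random(n) < pa
    drop[fit.argmin()] = False
    nests[drop] = rng.uniform(lo, hi, (int(drop.sum()), 2))
print(best)  # lands near true_pos = (18, 31), up to ranging noise
```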
Active listening: The key of successful communication in hospital managers.
Jahromi, Vahid Kohpeima; Tabatabaee, Seyed Saeed; Abdar, Zahra Esmaeili; Rajabi, Mahboobeh
2016-03-01
One of the important causes of medical errors and unintentional harm to patients is ineffective communication. An important part of this skill, one that is easily forgotten, is listening. The objective of this study was to determine whether managers in hospitals listen actively. This study was conducted between May and June 2014 among three levels of managers at teaching hospitals in Kerman, Iran. Active listening skill among hospital managers was measured by the self-made Active Listening Skill Scale (ALSS), which consists of the key elements of active listening and has five subscales, i.e., Avoiding Interruption, Maintaining Interest, Postponing Evaluation, Organizing Information, and Showing Interest. The data were analyzed by IBM-SPSS software, version 20, and the Pearson product-moment correlation coefficient, the chi-squared test, and multiple linear regressions. The mean score of active listening in hospital managers was 2.32 out of 3. The highest score (2.27) was obtained by the first-level managers, and the top managers got the lowest score (2.16). Hospital managers were best in showing interest and worst in avoiding interruptions. The area of employment was a significant predictor of avoiding interruption, and the managers' gender was a strong predictor of skill in maintaining interest (p < 0.05). The type of management and education can predict postponing evaluation, and the length of employment can predict showing interest (p < 0.05). There is a necessity for the development of strategies to create more awareness among hospital managers concerning their active listening skills.
A Model for QoS – Aware Wireless Communication in Hospitals
Alavikia, Zahra; Khadivi, Pejman; Hashemi, Masoud Reza
2012-01-01
In the recent decade, research regarding wireless applications in electronic health (e-Health) services has been increasing. The main benefits of using wireless technologies in e-Health applications are simple communications, fast delivery of medical information, reduced treatment cost and also a reduced error rate among medical workers. However, using wireless communications in sensitive healthcare environments raises the problem of electromagnetic interference (EMI). One of the most effective methods to avoid the EMI problem is power management. To this end, several methods have been proposed in the literature to reduce EMI effects in healthcare environments. However, using these methods may result in inaccurate interference avoidance and may also increase network complexity. To overcome these problems, we introduce two approaches, based on per-user location and hospital sectoring, for power management in sensitive healthcare environments. Although reducing transmission power can avoid EMI, it causes the number of successful message deliveries to the access point to decrease and, hence, the quality-of-service requirements cannot be met. In this paper, we propose the use of relays to decrease the probability of outage in the aforementioned scenario. Relay placement is the main factor in realizing the benefits of relay stations in the network and, therefore, we use the genetic algorithm to compute the optimum positions of a fixed number of relays. We have considered delay and maximum blind-point coverage as the two main criteria in the relay station placement problem. The performance of the proposed method in outage reduction is investigated through simulations. PMID:23493832
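The relay-placement idea can be illustrated with a compact genetic algorithm that evolves K relay positions to minimize the worst client-to-relay distance, a crude proxy for delay and blind-spot coverage. The encoding, operators, and fitness below are illustrative assumptions, not the authors' formulation.

```python
# Sketch of GA-based relay placement: evolve K relay positions so that the
# worst distance from any client point to its nearest relay is minimized.
import numpy as np

rng = np.random.default_rng(1)
clients = rng.uniform(0, 100, (40, 2))      # hypothetical ward/client locations
K, POP, GEN = 3, 30, 150

def cost(relays):  # worst client-to-nearest-relay distance (lower is better)
    d = np.linalg.norm(clients[:, None, :] - relays[None, :, :], axis=2)
    return d.min(axis=1).max()

pop = rng.uniform(0, 100, (POP, K, 2))
for _ in range(GEN):
    costs = np.array([cost(ind) for ind in pop])
    elite = pop[costs.argsort()[:POP // 2]]  # truncation selection
    # Crossover: average two elite parents; mutation: Gaussian jitter.
    pa_idx = rng.integers(0, len(elite), POP - len(elite))
    pb_idx = rng.integers(0, len(elite), POP - len(elite))
    children = (elite[pa_idx] + elite[pb_idx]) / 2
    children += rng.normal(0, 3.0, children.shape)
    pop = np.concatenate([elite, np.clip(children, 0, 100)])
best = pop[np.argmin([cost(ind) for ind in pop])]
print(best, cost(best))
```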
Dodd, Zane; Warren, Ann Marie; Riggs, Shelley; Clark, Mike
2015-01-01
Background: Spinal cord injury (SCI) can cause psychological consequences that negatively affect quality of life. It is increasingly recognized that factors such as resilience and social support may produce a buffering effect and are associated with improved health outcomes. However, the influence of adult attachment style on an individual's ability to utilize social support after SCI has not been examined. Objective: The purpose of this study was to examine relationships between adult romantic attachment, perceived social support, depression, and resilience in individuals with SCI. In addition, we evaluated potential mediating effects of social support and adult attachment on resilience and depression. Methods: Participants included 106 adults with SCI undergoing inpatient rehabilitation. Individuals completed measures of adult attachment (avoidance and anxiety), social support, resilience, and depression. Path analysis was performed to assess for the presence of mediation effects. Results: When accounting for the smaller sample size, support was found for the model (comparative fit index = .927, chi square = 7.86, P = .01, β = -0.25, standard error [SE] = -2.93, P < .05). The mediating effect of social support on the association between attachment avoidance and resilience was the only hypothesized mediating effect found to be significant (β = -0.25, SE = -2.93, P < .05). Conclusion: Results suggest that individuals with SCI with higher levels of attachment avoidance have lower perceived social support, which relates to lower perceived resilience. Assessing attachment patterns during inpatient rehabilitation may allow therapists to intervene to provide greater support. PMID:26364285
Gao, Yu; Shi, Lu
2015-08-21
To better understand the documented link between mindfulness and longevity, we examine the association between mindfulness and conscious avoidance of secondhand smoke (SHS), as well as the association between mindfulness and physical activity. At Shanghai University of Finance and Economics (SUFE), we surveyed a convenience sample of 1516 college freshmen. We measured mindfulness, weekly physical activity, and conscious avoidance of secondhand smoke, along with demographic and behavioral covariates. We used a multilevel logistic regression to test the association between mindfulness and conscious avoidance of secondhand smoke, and used a Tobit regression model to test the association between mindfulness and metabolic equivalent hours per week. In both models, the home province of the student respondent was used as the cluster variable, and demographic and behavioral covariates were included, such as age, gender, smoking history, household registration status (urban vs. rural), the perceived smog frequency in their home towns, and asthma diagnosis. The logistic regression of consciously avoiding SHS shows that a higher level of mindfulness was associated with an increase in the odds of conscious SHS avoidance (logged odds: 0.22, standard error: 0.07, p < 0.01). The Tobit regression shows that a higher level of mindfulness was associated with more metabolic equivalent hours per week (Tobit coefficient: 4.09, standard error: 1.13, p < 0.001). This study is an innovative attempt to study the behavioral issue of secondhand smoke from the perspective of the potential victim, rather than the active smoker. The observed associational patterns here are consistent with previous findings that mindfulness is associated with healthier behaviors in obesity prevention and substance use. Research designs with interventions are needed to test the causal link between mindfulness and these healthy behaviors. PMID:26308029
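A hedged re-creation of the first model is sketched below: a logistic regression of SHS avoidance on mindfulness with standard errors clustered by home province, using statsmodels. The data are synthetic and the variable names are assumptions; only the reported coefficient (0.22) is borrowed from the abstract to generate the toy data.

```python
# Sketch of a cluster-robust logistic regression; synthetic data, assumed names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1516
df = pd.DataFrame({
    "mindfulness": rng.normal(0, 1, n),
    "age": rng.integers(17, 22, n),
    "male": rng.integers(0, 2, n),
    "province": rng.integers(0, 31, n),      # cluster variable
})
logit_p = -0.3 + 0.22 * df["mindfulness"]    # effect size from the abstract
df["avoid_shs"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("avoid_shs ~ mindfulness + age + male", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["province"]})
print(model.summary())
```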
Effect of visuospatial neglect on spatial navigation and heading after stroke.
Aravind, Gayatri; Lamontagne, Anouk
2017-06-09
Visuospatial neglect (VSN) impairs the control of locomotor heading in post-stroke individuals, which may affect their ability to safely avoid moving objects while walking. We aimed to compare VSN+ and VSN- stroke individuals in terms of changes in heading and head orientation in space while avoiding obstacles approaching from different directions and reorienting toward the final target. Stroke participants with VSN (VSN+) and without VSN (VSN-) walked in a virtual environment avoiding obstacles that approached contralesionally, head-on or ipsilesionally. Measures of obstacle avoidance (onset of heading change, maximum mediolateral deviation) and target alignment (heading and head-rotation errors with respect to the target) were compared across groups and obstacle directions. In total, 26 participants with right-hemisphere stroke participated (13 VSN+ and 13 VSN-; 24 males; mean age 60.3 years, range 48 to 72 years). A larger proportion of VSN+ (75%) than VSN- (38%) participants collided with contralesional and head-on obstacles. For VSN- participants, deviating to the same side as the obstacle was a safe strategy to avoid diagonal obstacles, and deviating to the opposite side led to occasional collisions. VSN+ participants deviated ipsilesionally, displaying same-side and opposite-side strategies for ipsilesional and contralesional obstacles, respectively. Overall, VSN+ participants showed greater distances at onset of heading change, smaller maximum mediolateral deviation and larger errors in target alignment compared with VSN- participants. The ipsilesional bias arising from VSN influences the modulation of heading in response to obstacles and, along with the adoption of "riskier" strategies, contributes to the higher number of colliders and poor goal-directed walking abilities in stroke survivors with VSN. Future research should focus on developing assessment and training tools for complex locomotor tasks such as obstacle avoidance in this population. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Koetsier, Antonie; Peek, Niels; de Keizer, Nicolette
2012-01-01
Errors may occur in the registration of in-hospital mortality, making it less reliable as a quality indicator. We assessed the types of errors made in in-hospital mortality registration in the clinical quality registry National Intensive Care Evaluation (NICE) by comparing its mortality data to data from a national insurance claims database. Subsequently, we performed site visits at eleven Intensive Care Units (ICUs) to investigate the number, types and causes of errors made in in-hospital mortality registration. A total of 255 errors were found in the NICE registry. Two different types of software malfunction accounted for almost 80% of the errors. The remaining 20% were five types of manual transcription errors and human failures to record outcome data. Clinical registries should be aware of the possible existence of errors in recorded outcome data and understand their causes. In order to prevent errors, we recommend to thoroughly verify the software that is used in the registration process.
Cleared for the visual approach: Human factor problems in air carrier operations
NASA Technical Reports Server (NTRS)
Monan, W. P.
1983-01-01
In the study described herein, a set of 353 ASRS reports of unique aviation occurrences significantly involving visual approaches was examined to identify hazards and pitfalls embedded in the visual approach procedure and to consider operational practices that might help avoid future mishaps. Analysis of the report set identified nine aspects of the visual approach procedure that appeared to be predisposing conditions for inducing or exacerbating the effects of operational errors by flight crew members or controllers. Predisposing conditions, errors, and operational consequences of the errors are discussed. In summary, operational policies that might mitigate the problems are examined.
Using media to teach how not to do psychotherapy.
Gabbard, Glen; Horowitz, Mardi
2010-01-01
This article describes how using media depictions of psychotherapy may help in teaching psychiatric residents. Using the HBO series In Treatment as a model, the authors suggest how boundary transgressions and technical errors may inform residents about optimal psychotherapeutic approaches. The psychotherapy vignettes depicted in In Treatment show how errors in judgment may grow out of therapists' good intentions. These errors can be understood and used constructively for teaching. With the growing interest in depicting psychotherapy on popular TV series, the use of these sessions avoids confidentiality problems and may be a useful adjunct for teaching psychotherapy.
32 CFR 1701.30 - Policy and applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... INTELLIGENCE ADMINISTRATION OF RECORDS UNDER THE PRIVACY ACT OF 1974 Routine Uses Applicable to More Than One... routine uses to foster simplicity and economy and to avoid redundancy or error by duplication in multiple...
ERIC Educational Resources Information Center
Kretchmer, Mark R.
2000-01-01
Discusses how to avoid costly errors in high-tech retrofits through proper planning and coordination. Guidelines are offered for selecting cable installers, using multi-disciplinary consulting engineering firm, and space planning when making high-tech retrofits. (GR)
Seven rules to avoid the tragedy of the commons.
Murase, Yohsuke; Baek, Seung Ki
2018-07-14
Cooperation among self-interested players in a social dilemma is fragile and easily interrupted by mistakes. In this work, we study the repeated n-person public-goods game and search for a strategy that forms a cooperative Nash equilibrium in the presence of implementation error, with a guarantee that the resulting payoff will be no less than any of the co-players'. By enumerating strategic possibilities for n=3, we show that such a strategy indeed exists when its memory length m equals three. This means that a deterministic strategy can be publicly employed to stabilize cooperation against error while avoiding the risk of being exploited. We furthermore show that, for the general n-person public-goods game, m ≥ n is necessary to satisfy the above criteria. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
Interpretation of physiological indicators of motivation: Caveats and recommendations.
Richter, Michael; Slade, Kate
2017-09-01
Motivation scientists employing physiological measures to gather information about motivation-related states are at risk of committing two fundamental errors: overstating the inferences that can be drawn from their physiological measures and circular reasoning. We critically discuss two complementary approaches, Cacioppo and colleagues' model of psychophysiological relations and construct validation theory, to highlight the conditions under which these errors are committed and provide guidance on how to avoid them. In particular, we demonstrate that the direct inference from changes in a physiological measure to changes in a motivation-related state requires the demonstration that the measure is not related to other relevant psychological states. We also point out that circular reasoning can be avoided by separating the definition of the motivation-related state from the hypotheses that are empirically tested. Copyright © 2017 Elsevier B.V. All rights reserved.
Techniques for avoiding discrimination errors in the dynamic sampling of condensable vapors
NASA Technical Reports Server (NTRS)
Lincoln, K. A.
1983-01-01
In the mass spectrometric sampling of dynamic systems, measurements of the relative concentrations of condensable and noncondensable vapors can be significantly distorted if some subtle, but important, instrumental factors are overlooked. Even with in situ measurements, the condensables are readily lost to the container walls, and the noncondensables can persist within the vacuum chamber and yield a disproportionately high output signal. Where single pulses of vapor are sampled, this source of error is avoided by gating either the mass spectrometer "on" or the data acquisition instrumentation "on" only during the very brief time window when the initial vapor cloud emanating directly from the vapor source passes through the ionizer. Instrumentation for these techniques is detailed and its effectiveness is demonstrated by comparing gated and nongated spectra obtained from the pulsed-laser vaporization of several materials.
Skjerven-Martinsen; Naess, P A; Hansen, T B; Staff, T; Stray-Pedersen, A
2013-10-01
Restraint misuse and other occupant safety errors are the major cause of fatal and severe injuries among child passengers in motor vehicle collisions. The main objectives of the present study were to provide estimates of restraining practice among children younger than 16 years traveling on Norwegian high-speed roads, and to uncover the high-risk groups associated with restraint misuse and other safety errors. A cross-sectional observational study was performed in conjunction with regular traffic control posts on high-speed roads. The seating and restraining of child occupants younger than 16 years were observed, the interior environment of the vehicles was examined, and a structured interview of the driver was conducted according to a specific protocol. In total, 1260 child occupants aged 0-15 years were included in the study. Misuse of restraints was observed in 38% of cases, with this being severe or critical in 24%. The presence of restraint misuse varied significantly with age (p<0.001), with the frequency being highest among child occupants in the age group 4-7 years. The most common error in this group was improperly routed seat belts. The highest frequency of severe and critical errors was observed among child occupants in the age group 0-3 years. The most common errors were loose or improperly routed harness straps and incorrect installations of the child restraint system. Moreover, 24% of the children were seated in vehicles with heavy, unsecured objects in the passenger compartment and/or the trunk that were likely to move into the compartment upon impact and cause injury. No totally unrestrained children were observed. This study provides a detailed description of the characteristics of restraint misuse and the occupants' exposure to unsecured objects. Future education and awareness campaigns should focus on children aged <8 years. The main challenges are to ensure correct routing and tightness of harness straps and seat belts, correct installation of child restraints, and avoidance of premature graduation from child restraints to seat belts only. Information campaigns should also advocate the use of chest clips and address the potential risks of hard, heavy objects in the passenger compartment and the importance of the placement and strapping of heavy objects in the trunk. Copyright © 2013 Elsevier Ltd. All rights reserved.
A new model of Ishikawa diagram for quality assessment
NASA Astrophysics Data System (ADS)
Liliana, Luca
2016-11-01
The paper presents the results of a study concerning the use of the Ishikawa diagram in analyzing the causes that determine errors in the evaluation of part precision in the machine construction field. The studied problem was "errors in the evaluation of part precision", and this constitutes the head of the Ishikawa diagram skeleton. All the possible main and secondary causes that could generate the studied problem were identified. The best-known Ishikawa models are 4M, 5M and 6M, the initials standing, in order, for: materials, methods, man, machines, mother nature, measurement. The paper shows the potential causes of the studied problem, which were first grouped into three categories, as follows: causes that lead to errors in assessing dimensional accuracy, causes that determine errors in the evaluation of shape and position abnormalities, and causes of errors in roughness evaluation. We took into account the main components of part precision in the machine construction field. For each of the three categories of causes, potential secondary causes were distributed into groups of M (man, methods, machines, materials, environment). We opted for a new model of Ishikawa diagram, resulting from the composition of three fish skeletons corresponding to the main categories of part accuracy.
The spectrum of medical errors: when patients sue
Kels, Barry D; Grant-Kels, Jane M
2012-01-01
Inarguably medical errors constitute a serious, dangerous, and expensive problem for the twenty-first-century US health care system. This review examines the incidence, nature, and complexity of alleged medical negligence and medical malpractice. The authors hope this will constitute a road map to medical providers so that they can better understand the present climate and hopefully avoid the “Scylla and Charybdis” of medical errors and medical malpractice. Despite some documented success in reducing medical errors, adverse events and medical errors continue to represent an indelible stain upon the practice, reputation, and success of the US health care industry. In that regard, what may be required to successfully attack the unacceptably high severity and volume of medical errors is a locally directed and organized initiative sponsored by individual health care organizations that is coordinated, supported, and guided by state and federal governmental and nongovernmental agencies. PMID:22924008
Nickerson, Naomi H; Li, Ying; Benjamin, Simon C
2013-01-01
A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however, prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate), we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.
Hessian matrix approach for determining error field sensitivity to coil deviations
NASA Astrophysics Data System (ADS)
Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; Song, Yuntao; Wan, Yuanxi
2018-05-01
The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code (Zhu et al 2018 Nucl. Fusion 58 016008) is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
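The core computation can be illustrated in a few lines: finite-difference a cost function twice to obtain the Hessian, then read the sensitivities from its eigen-decomposition. The quadratic cost below is a stand-in, not the FOCUS field-error integral.

```python
# Toy illustration of the Hessian-matrix sensitivity idea. The cost function is
# a hypothetical quadratic in coil-parameter deviations, not the actual physics.
import numpy as np

def cost(x):  # stand-in cost vs. coil-parameter deviations x
    A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, 0.1]])
    return 0.5 * x @ A @ x

def hessian(f, x0, h=1e-4):
    """Second-order central finite differences of f at x0."""
    n = len(x0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x0 + e_i + e_j) - f(x0 + e_i - e_j)
                       - f(x0 - e_i + e_j) + f(x0 - e_i - e_j)) / (4 * h * h)
    return H

H = hessian(cost, np.zeros(3))
evals, evecs = np.linalg.eigh(H)
# The eigenvector with the largest eigenvalue is the most dangerous coil
# displacement pattern; small eigenvalues mark directions the field tolerates.
print(evals)
print(evecs[:, -1])
```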
The First Rapid Assessment of Avoidable Blindness (RAAB) in Thailand
Isipradit, Saichin; Sirimaharaj, Maytinee; Charukamnoetkanok, Puwat; Thonginnetra, Oraorn; Wongsawad, Warapat; Sathornsumetee, Busaba; Somboonthanakij, Sudawadee; Soomsawasdi, Piriya; Jitawatanarat, Umapond; Taweebanjongsin, Wongsiri; Arayangkoon, Eakkachai; Arame, Punyawee; Kobkoonthon, Chinsuchee; Pangputhipong, Pannet
2014-01-01
Background The majority of vision loss is preventable or treatable. Population surveys are crucial for planning, implementing, and monitoring policies and interventions to eliminate avoidable blindness and visual impairments. This is the first rapid assessment of avoidable blindness (RAAB) study in Thailand. Methods A cross-sectional study of a population in Thailand aged 50 years or over aimed to assess the prevalence and causes of blindness and visual impairments. Using the Thailand National Census 2010 as the sampling frame, a stratified four-stage cluster sampling based on probability proportional to size was conducted in 176 enumeration areas from 11 provinces. Participants received comprehensive eye examinations by ophthalmologists. Results The age- and sex-adjusted prevalences of blindness (presenting visual acuity (VA) <20/400), severe visual impairment (VA <20/200 but ≥20/400), and moderate visual impairment (VA <20/70 but ≥20/200) were 0.6% (95% CI: 0.5–0.8), 1.3% (95% CI: 1.0–1.6), and 12.6% (95% CI: 10.8–14.5), respectively. There was no significant difference among the four regions of Thailand. Cataract was the main cause of vision loss, accounting for 69.7% of blindness. Cataract surgical coverage in persons was 95.1% for a cut-off VA of 20/400. Refractive errors, diabetic retinopathy, glaucoma, and corneal opacities were responsible for 6.0%, 5.1%, 4.0%, and 2.0% of blindness, respectively. Conclusion Thailand is on track to achieve the goal of VISION 2020. However, there is still much room for improvement. Policy refinements and innovative interventions are recommended to alleviate blindness and visual impairments, especially regarding the backlog of blinding cataract, management of non-communicable, chronic, age-related eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy, prevention of childhood blindness, and establishment of a robust eye health information system. PMID:25502762
Parallel computers - Estimate errors caused by imprecise data
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik; Bernat, Andrew; Villa, Elsa; Mariscal, Yvonne
1991-01-01
A new approach to the problem of estimating errors caused by imprecise data is proposed in the context of software engineering. A software device is used to produce an ideal solution to the problem, when the computer is capable of computing errors of arbitrary programs. The software engineering aspect of this problem is to describe a device for computing the error estimates in software terms and then to provide precise numbers with error estimates to the user. The feasibility of the program capable of computing both some quantity and its error estimate in the range of possible measurement errors is demonstrated.
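The idea of a program that returns both a quantity and its error estimate can be sketched with first-order error propagation: numerically differentiate the function and accumulate worst-case input errors. The routine and example below are illustrative, not the paper's software device.

```python
# A small realization of the idea: return a computed quantity together with an
# estimate of its error given the errors of the inputs, via linearized
# (first-order) propagation. Function and names are illustrative.

def with_error(f, xs, dxs, h=1e-6):
    """Return (f(xs), df) where df bounds the error from input errors dxs."""
    y = f(xs)
    df = 0.0
    for i, dx in enumerate(dxs):
        xp = list(xs); xp[i] += h
        deriv = (f(xp) - y) / h          # numerical partial derivative
        df += abs(deriv) * dx            # worst-case linear accumulation
    return y, df

# Example: power P = V^2 / R with V = 10 +/- 0.1 volts, R = 50 +/- 0.5 ohms.
P, dP = with_error(lambda v: v[0] ** 2 / v[1], [10.0, 50.0], [0.1, 0.5])
print(f"P = {P:.3f} +/- {dP:.3f} W")     # ~2.000 +/- 0.060 W
```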
Prevalence and Causes of Visual Loss Among the Indigenous Peoples of the World: A Systematic Review.
Foreman, Joshua; Keel, Stuart; van Wijngaarden, Peter; Bourne, Rupert A; Wormald, Richard; Crowston, Jonathan; Taylor, Hugh R; Dirani, Mohamed
2018-05-01
Studies have documented a higher disease burden in indigenous compared with nonindigenous populations, but no global data on the epidemiology of visual loss in indigenous peoples are available. A systematic review of literature on visual loss in the world's indigenous populations could identify major gaps and inform interventions to reduce their burden of visual loss. To conduct a systematic review on the prevalence and causes of visual loss among the world's indigenous populations. A search of databases and alternative sources identified literature on the prevalence and causes of visual loss (visual impairment and blindness) and eye diseases in indigenous populations. Studies from January 1, 1990, through August 1, 2017, that included clinical eye examinations of indigenous participants and, where possible, compared findings with those of nonindigenous populations were included. Methodologic quality of studies was evaluated to reveal gaps in the literature. Limited data were available worldwide. A total of 85 articles described 64 unique studies from 24 countries that examined 79 598 unique indigenous participants. Nineteen studies reported comparator data on 42 085 nonindigenous individuals. The prevalence of visual loss was reported in 13 countries, with visual impairment ranging from 0.6% in indigenous Australian children to 48.5% in native Tibetans 50 years or older. Uncorrected refractive error was the main cause of visual impairment (21.0%-65.1%) in 5 of 6 studies that measured presenting visual acuity. Cataract was the main cause of visual impairment in all 6 studies measuring best-corrected acuity (25.4%-72.2%). Cataract was the leading cause of blindness in 13 studies (32.0%-79.2%), followed by uncorrected refractive error in 2 studies (33.0% and 35.8%). Most countries with indigenous peoples do not have data on the burden of visual loss in these populations. Although existing studies vary in methodologic quality and reliability, they suggest that most visual loss in indigenous populations is avoidable. Improvements in quality and frequency of research into the eye health of indigenous communities appear to be required, and coordinated eye care programs should be implemented to specifically target the indigenous peoples of the world.
NASA Technical Reports Server (NTRS)
Howlett, J. T.
1979-01-01
The partial coherence analysis method for noise source/path determination is summarized and its application to a two-input, single-output system with coherence between the inputs is illustrated. The augmentation of the calculations on a digital computer interfaced with a two-channel, real-time analyzer is also discussed. The results indicate possible sources of error in the computations and suggest procedures for avoiding these errors.
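A sketch of the two-input, single-output analysis is given below, using scipy to estimate the ordinary coherence of one input with the output and its partial coherence after conditioning out the other input via S_ab·2 = S_ab − S_a2 S_2b / S_22. The synthetic signals are illustrative of the method, not the paper's data.

```python
# Ordinary and partial coherence for a two-input/one-output system.
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(3)
fs, n = 1024, 2**14
x2 = rng.normal(size=n)
x1 = 0.6 * x2 + rng.normal(size=n)          # inputs deliberately correlated
y = x1 + 0.5 * x2 + 0.1 * rng.normal(size=n)

def S(a, b):  # cross-spectral density estimate (Welch)
    f, s = csd(a, b, fs=fs, nperseg=1024)
    return f, s

f, s11 = S(x1, x1); _, s22 = S(x2, x2); _, syy = S(y, y)
_, s1y = S(x1, y);  _, s2y = S(x2, y);  _, s12 = S(x1, x2)

coh1 = np.abs(s1y) ** 2 / (s11 * syy)       # ordinary coherence, input 1
# Condition out input 2: S_ab.2 = S_ab - S_a2 * S_2b / S_22
s11_2 = s11 - s12 * np.conj(s12) / s22
syy_2 = syy - s2y * np.conj(s2y) / s22
s1y_2 = s1y - s12 * s2y / s22
pcoh1 = np.abs(s1y_2) ** 2 / np.abs(s11_2 * syy_2)
print(float(coh1[10].real), float(pcoh1[10]))
```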
Comparison of two reconfigurable N×N interconnects for a recurrent neural network
NASA Astrophysics Data System (ADS)
Berger, Christoph; Collings, Neil; Pourzand, Ali R.; Volkel, Reinnard
1996-11-01
Two different methods of pattern replication (conventional and interlaced fan-out) have been investigated and experimentally tested in a reconfigurable 5×5 optical interconnect. Similar alignment problems due to imaging errors (field curvature) were observed in both systems. We conclude that of the two methods the interlaced fan-out is better suited to avoid these imaging errors, to reduce system size and to implement an optical feedback loop.
Problems affecting the fidelity of pressure measuring instruments for planetary probes
NASA Technical Reports Server (NTRS)
Hudson, J. B.
1972-01-01
Determination is made of the nature and magnitude of surface-related effects that cause errors in pressure measuring instruments, with special reference being made to instruments intended for use in planetary probes. The interaction of gases with clean surfaces of metals likely to be used as gage construction materials was studied. Special emphasis was placed on the adsorption, chemical reaction, and electron-induced desorption processes. The results indicated that all metals tested were subject to surface processes which would degrade gage fidelity. It was also found, however, that the formation of inert adsorbed layers on these metal surfaces, such as carbon on platinum, greatly reduced or eliminated these effects. This process, combined with a system design which avoids contact between reactive gases and hot filaments, appears to offer the most promising solution to the gage fidelity problem.
Physiological Factors Analysis in Unpressurized Aircraft Cabins
NASA Astrophysics Data System (ADS)
Patrao, Luis; Zorro, Sara; Silva, Jorge
2016-11-01
Amateur and sports flight is an activity with growing numbers worldwide. However, the main cause of flight incidents and accidents is increasingly pilot error, for a number of reasons. Fatigue, sleep issues and hypoxia, among many others, are some that can be avoided or, at least, mitigated. This article describes the analysis of psychological and physiological parameters during flight in unpressurized aircraft cabins. It relates cerebral oximetry and heart rate to altitude, as well as to flight phase. The study of these parameters might give clues as to which variations represent a warning sign to the pilot, thus preventing incidents and accidents due to human factors. Results show that both cerebral oximetry and heart rate change over the course of the flight and with altitude in the alert pilot. The impaired pilot might not show these variations and, if this is detected, he can be warned in time.
A novel disturbance-observer based friction compensation scheme for ball and plate system.
Wang, Yongkun; Sun, Mingwei; Wang, Zenghui; Liu, Zhongxin; Chen, Zengqiang
2014-03-01
Friction is often ignored when designing a controller for the ball and plate system, which can lead to steady-state error and stick-slip phenomena, especially for small-amplitude commands. It is difficult to achieve high-precision control performance for the ball and plate system because of its friction. A novel reference compensation strategy is presented to attenuate the aftereffects caused by the friction. To realize this strategy, a linear control law is proposed based on a reduced-order observer. Neither an accurate friction model nor the estimation of specific characteristic parameters is needed in this design. Moreover, the describing function method illustrates that the limit cycle can be avoided. Finally, comparative mathematical simulations and practical experiments are used to validate the effectiveness of the proposed method. © 2013 ISA. Published by ISA. All rights reserved.
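A minimal disturbance-observer sketch in the same spirit is shown below: a reduced-order observer reconstructs the lumped friction disturbance on a single mass from measured velocity and the applied control, and the estimate is subtracted from the nominal command. The plant, gain, and friction model are illustrative assumptions, not the paper's design.

```python
# Reduced-order disturbance observer on a 1-D mass: m*v' = u + d, with the
# observer d_hat = z + L*m*v, z' = -L*z - L*(L*m*v + u), so d_hat -> d with
# time constant 1/L. Gains and friction model are illustrative.
import numpy as np

m, L, dt = 1.0, 20.0, 1e-3        # mass, observer gain, step size
v, z = 0.0, 0.0                   # velocity state and observer internal state
log = []
for k in range(5000):
    u_nom = 1.0                   # nominal control command (constant push)
    d_hat = z + L * m * v         # reconstructed disturbance estimate
    u = u_nom - d_hat             # friction compensation
    d = -0.8 * np.sign(v) - 0.2 * v   # "true" friction acting on the mass
    a = (u + d) / m
    v += a * dt                   # plant integration (Euler)
    z += (-L * z - L * (L * m * v + u)) * dt   # observer integration
    log.append((d, d_hat))
print("final disturbance vs estimate:", log[-1])
```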
Neural sensitivity to social deviance predicts attentive processing of peer-group judgment.
Schnuerch, Robert; Trautmann-Lengsfeld, Sina Alexa; Bertram, Mario; Gibbons, Henning
2014-01-01
The detection of one's deviance from social norms is an essential mechanism of individual adjustment to group behavior and, thus, for the perpetuation of norms within groups. It has been suggested that error signals in mediofrontal cortex provide the neural basis of such deviance detection, which contributes to later adjustment to the norm. In the present study, we used event-related potentials (ERPs) to demonstrate that, across participants, the strength of mediofrontal brain correlates of the detection of deviance from a peer group's norms was negatively related to attentive processing of the same group's judgments in a later task. We propose that an individual's perception of social deviance might bias basic cognitive processing during further interaction with the group. Strongly perceiving disagreement with a group could cause an individual to avoid or inhibit this group's judgments.
Kartush, J M
1996-11-01
Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.
Death Certification Errors and the Effect on Mortality Statistics.
McGivern, Lauri; Shulman, Leanne; Carney, Jan K; Shapiro, Steven; Bundock, Elizabeth
Errors in cause and manner of death on death certificates are common and affect families, mortality statistics, and public health research. The primary objective of this study was to characterize errors in the cause and manner of death on death certificates completed by non-Medical Examiners. A secondary objective was to determine the effects of errors on national mortality statistics. We retrospectively compared 601 death certificates completed between July 1, 2015, and January 31, 2016, from the Vermont Electronic Death Registration System with clinical summaries from medical records. Medical Examiners, blinded to original certificates, reviewed summaries, generated mock certificates, and compared mock certificates with original certificates. They then graded errors using a scale from 1 to 4 (higher numbers indicated increased impact on interpretation of the cause) to determine the prevalence of minor and major errors. They also compared International Classification of Diseases, 10th Revision (ICD-10) codes on original certificates with those on mock certificates. Of 601 original death certificates, 319 (53%) had errors; 305 (51%) had major errors; and 59 (10%) had minor errors. We found no significant differences by certifier type (physician vs nonphysician). We did find significant differences in major errors in place of death (P < .001). Certificates for deaths occurring in hospitals were more likely to have major errors than certificates for deaths occurring at a private residence (59% vs 39%, P < .001). A total of 580 (93%) death certificates had a change in ICD-10 codes between the original and mock certificates, of which 348 (60%) had a change in the underlying cause-of-death code. Error rates on death certificates in Vermont are high and extend to ICD-10 coding, thereby affecting national mortality statistics. Surveillance and certifier education must expand beyond local and state efforts. Simplifying and standardizing underlying literal text for cause of death may improve accuracy, decrease coding errors, and improve national mortality statistics.
Fairfield, Cameron; Penninga, Luit; Powell, James; Harrison, Ewen M; Wigmore, Stephen J
2018-04-09
Liver transplantation is an established treatment option for end-stage liver failure. Now that newer, more potent immunosuppressants have been developed, glucocorticosteroids may no longer be needed and their removal may prevent adverse effects. To assess the benefits and harms of glucocorticosteroid avoidance (excluding intra-operative use or treatment of acute rejection) or withdrawal versus glucocorticosteroid-containing immunosuppression following liver transplantation. We searched the Cochrane Hepato-Biliary Group Controlled Trials Register, Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, Science Citation Index Expanded and Conference Proceedings Citation Index - Science, Literatura Americano e do Caribe em Ciencias da Saude (LILACS), World Health Organization International Clinical Trials Registry Platform, ClinicalTrials.gov, and The Transplant Library until May 2017. Randomised clinical trials assessing glucocorticosteroid avoidance or withdrawal versus glucocorticosteroid-containing immunosuppression for liver transplanted people. Our inclusion criteria stated that participants should have received the same co-interventions. We included trials that assessed complete glucocorticosteroid avoidance (excluding intra-operative use or treatment of acute rejection) versus short-term glucocorticosteroids, as well as trials that assessed short-term glucocorticosteroids versus long-term glucocorticosteroids. We used RevMan to conduct meta-analyses, calculating risk ratio (RR) for dichotomous variables and mean difference (MD) for continuous variables, both with 95% confidence intervals (CIs). We used a random-effects model and a fixed-effect model and reported both results where a discrepancy existed; otherwise we reported only the results from the fixed-effect model. We assessed the risk of systematic errors using 'Risk of bias' domains. We controlled for random errors by performing Trial Sequential Analysis. We presented our results in a 'Summary of findings' table. We included 17 completed randomised clinical trials, but only 16 studies with 1347 participants provided data for the meta-analyses. Ten of the 16 trials assessed complete postoperative glucocorticosteroid avoidance (excluding intra-operative use or treatment of acute rejection) versus short-term glucocorticosteroids (782 participants) and six trials assessed short-term glucocorticosteroids versus long-term glucocorticosteroids (565 participants). One additional study assessed complete post-operative glucocorticosteroid avoidance but could only be incorporated into qualitative analysis of the results due to limited data published in an abstract. All trials were at high risk of bias. Only eight trials reported on the type of donor used. Overall, we found no statistically significant difference for mortality (RR 1.15, 95% CI 0.93 to 1.44; low-quality evidence), graft loss including death (RR 1.15, 95% CI 0.90 to 1.46; low-quality evidence), or infection (RR 0.88, 95% CI 0.73 to 1.05; very low-quality evidence) when glucocorticosteroid avoidance or withdrawal was compared with glucocorticosteroid-containing immunosuppression. Acute rejection and glucocorticosteroid-resistant rejection were statistically significantly more frequent when glucocorticosteroid avoidance or withdrawal was compared with glucocorticosteroid-containing immunosuppression (RR 1.33, 95% CI 1.08 to 1.64; low-quality evidence; and RR 2.14, 95% CI 1.13 to 4.02; very low-quality evidence). 
Diabetes mellitus and hypertension were statistically significantly less frequent when glucocorticosteroid avoidance or withdrawal was compared with glucocorticosteroid-containing immunosuppression (RR 0.81, 95% CI 0.66 to 0.99; low-quality evidence; and RR 0.76, 95% CI 0.65 to 0.90; low-quality evidence). We performed Trial Sequential Analysis for all outcomes. None of the outcomes crossed the monitoring boundaries or reached the required information size. Hence, we cannot exclude random errors from the results of the conventional meta-analyses. Many of the benefits and harms of glucocorticosteroid avoidance or withdrawal remain uncertain because of the limited number of published randomised clinical trials, limited numbers of participants and outcomes, and high risk of bias in the trials. Glucocorticosteroid avoidance or withdrawal appears to reduce diabetes mellitus and hypertension whilst increasing acute rejection, glucocorticosteroid-resistant rejection, and renal impairment. We could identify no other benefits or harms of glucocorticosteroid avoidance or withdrawal. Glucocorticosteroid avoidance or withdrawal may be of benefit in selected patients, especially those at low risk of rejection and high risk of hypertension or diabetes mellitus. The optimal duration of glucocorticosteroid administration remains unclear. More randomised clinical trials assessing glucocorticosteroid avoidance or withdrawal are needed. These should be large, high-quality trials that minimise the risk of random and systematic error.
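To make the meta-analytic arithmetic above concrete, here is a minimal sketch of fixed-effect (inverse-variance) pooling of risk ratios of the kind RevMan performs; the 2×2 trial counts are invented for illustration and are not data from the review.

```python
import math

def risk_ratio(events_t, total_t, events_c, total_c):
    """Risk ratio of treatment vs control, with a 95% CI built on the log scale."""
    rr = (events_t / total_t) / (events_c / total_c)
    # Standard error of log(RR) for a 2x2 table (delta-method approximation).
    se = math.sqrt(1/events_t - 1/total_t + 1/events_c - 1/total_c)
    log_rr = math.log(rr)
    ci = (math.exp(log_rr - 1.96*se), math.exp(log_rr + 1.96*se))
    return log_rr, se, rr, ci

# Hypothetical trials: (events_t, total_t, events_c, total_c)
trials = [(12, 50, 9, 48), (20, 80, 25, 82), (7, 40, 5, 41)]

# Fixed-effect (inverse-variance) pooling on the log scale.
weights, log_rrs = [], []
for t in trials:
    log_rr, se, _, _ = risk_ratio(*t)
    weights.append(1 / se**2)
    log_rrs.append(log_rr)

pooled = sum(w*lr for w, lr in zip(weights, log_rrs)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(f"Pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96*pooled_se):.2f} "
      f"to {math.exp(pooled + 1.96*pooled_se):.2f})")
```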
Transient fault behavior in a microprocessor: A case study
NASA Technical Reports Server (NTRS)
Duba, Patrick
1989-01-01
An experimental analysis is described which studies the susceptibility of a microprocessor-based jet engine controller to upsets caused by current and voltage transients. A design automation environment which allows the run-time injection of transients and the tracing of their impact from the affected device to the pin level is described. The resulting error data are categorized by the charge levels of the injected transients, by location, and by their potential to cause logic upsets, latched errors, and pin errors. The results show a 3 picocoulomb threshold, below which the transients have little impact. An Arithmetic and Logic Unit transient is most likely to result in logic upsets and pin errors (i.e., to impact the external environment). The transients in the countdown unit are potentially serious since they can result in latched errors, thus causing latent faults. Suggestions to protect the processor against these errors, by incorporating internal error detection and transient suppression techniques, are also made.
The current and ideal state of anatomic pathology patient safety.
Raab, Stephen Spencer
2014-01-01
An anatomic pathology diagnostic error may be secondary to a number of active and latent technical and/or cognitive components, which may occur anywhere along the total testing process in clinical and/or laboratory domains. For the pathologist's interpretive steps of diagnosis, we examine Kahneman's framework of slow and fast thinking to explain different causes of error in precision (agreement) and in accuracy (truth). The pathologist's cognitive diagnostic process involves image pattern recognition, and a slow-thinking error may be caused by the application of different rationally constructed mental maps of image criteria/patterns by different pathologists. This type of error is partly related to a system failure in standardizing the application of these maps. A fast-thinking error involves the flawed leap from image pattern to incorrect diagnosis. In the ideal state, anatomic pathology systems would target these cognitive error causes as well as the technical latent factors that lead to error.
A root cause analysis project in a medication safety course.
Schafer, Jason J
2012-08-10
To develop, implement, and evaluate team-based root cause analysis projects as part of a required medication safety course for second-year pharmacy students. Lectures, in-class activities, and out-of-class reading assignments were used to develop students' medication safety skills and introduce them to the culture of medication safety. Students applied these skills within teams by evaluating cases of medication errors using root cause analyses. Teams also developed error prevention strategies and formally presented their findings. Student performance was assessed using a medication errors evaluation rubric. Of the 211 students who completed the course, the majority performed well on root cause analysis assignments and rated them favorably on course evaluations. Medication error evaluation and prevention was successfully introduced in a medication safety course using team-based root cause analysis projects.
The Seven Deadly Sins of Online Microcomputing.
ERIC Educational Resources Information Center
King, Alan
1989-01-01
Offers suggestions for avoiding common errors in online microcomputer use. Areas discussed include learning the basics; hardware protection; backup options; hard disk organization; software selection; file security; and the use of dedicated communications lines. (CLB)
An Examination of the Causes and Solutions to Eyewitness Error
Wise, Richard A.; Sartori, Giuseppe; Magnussen, Svein; Safer, Martin A.
2014-01-01
Eyewitness error is one of the leading causes of wrongful convictions. In fact, the American Psychological Association estimates that one in three eyewitnesses makes an erroneous identification. In this review, we look briefly at some of the causes of eyewitness error. We examine what jurors, judges, attorneys, law officers, and experts from various countries know about eyewitness testimony and memory, and whether they have the requisite knowledge and skills to accurately assess eyewitness testimony. We evaluate whether legal safeguards such as voir dire, motion to suppress an identification, cross-examination, jury instructions, and eyewitness expert testimony are effective in identifying eyewitness errors. Lastly, we discuss solutions to eyewitness error. PMID:25165459
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saeki, Hiroshi; Magome, Tamotsu
2014-10-06
To compensate for pressure-measurement errors caused by a synchrotron radiation environment, a precise method using a hot-cathode-ionization-gauge head with a correcting electrode was developed and tested in a simulation experiment with excess electrons in the SPring-8 storage ring. This precise method improves measurement accuracy by correctly reducing the pressure-measurement errors caused by electrons originating from the external environment and from the primary gauge filament as influenced by the spatial conditions of the installed vacuum-gauge head. In the simulation experiment confirming the performance in reducing errors caused by the external environment, the pressure-measurement error using this method was less than several percent in the pressure range from 10⁻⁵ Pa to 10⁻⁸ Pa. After this experiment, to confirm the performance in reducing the error caused by spatial conditions, an additional experiment was carried out using a sleeve and showed that the improved function was available.
Unreliable numbers: error and harm induced by bad design can be reduced by better design
Thimbleby, Harold; Oladimeji, Patrick; Cairns, Paul
2015-01-01
Number entry is a ubiquitous activity and is often performed in safety- and mission-critical procedures, such as healthcare, science, finance, aviation and in many other areas. We show that Monte Carlo methods can quickly and easily compare the reliability of different number entry systems. A surprising finding is that many common, widely used systems are defective, and induce unnecessary human error. We show that Monte Carlo methods enable designers to explore the implications of normal and unexpected operator behaviour, and to design systems to be more resilient to use error. We demonstrate novel designs with improved resilience, implying that the common problems identified and the errors they induce are avoidable. PMID:26354830
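The abstract does not reproduce the authors' simulations, but the flavour of a Monte Carlo comparison of number entry designs can be sketched in a few lines; the slip probability, number format, and the double-entry interlock below are illustrative assumptions, not the paper's systems.

```python
import random

def key_in(target: str, p_slip: float) -> str:
    """Simulate keying a number with independent digit-substitution slips."""
    return "".join(random.choice("0123456789") if random.random() < p_slip else ch
                   for ch in target)

def single_entry(target: str, p_slip: float) -> str:
    return key_in(target, p_slip)

def double_entry(target: str, p_slip: float) -> str:
    """A design that demands the number be keyed twice and match; on mismatch
    the operator re-enters until two entries agree."""
    while True:
        a, b = key_in(target, p_slip), key_in(target, p_slip)
        if a == b:
            return a  # accepted; wrong only if both slips were identical

def error_rate(entry_fn, n: int = 200_000, p_slip: float = 0.01) -> float:
    wrong = 0
    for _ in range(n):
        target = str(random.randint(0, 99999)).zfill(5)
        if entry_fn(target, p_slip) != target:
            wrong += 1
    return wrong / n

print("single entry :", error_rate(single_entry))
print("double entry :", error_rate(double_entry))
```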
NASA Astrophysics Data System (ADS)
Chao, Luo
2015-11-01
In this paper, a novel digital secure communication scheme is proposed. Different from the usual secure communication schemes based on chaotic synchronization, the proposed scheme employs asynchronous communication, which avoids the weakness of synchronous systems of being susceptible to environmental interference. Moreover, with respect to the transmission errors and data loss that arise in the process of communication, the proposed scheme can check and correct errors in real time. In order to guarantee security, a fractional-order complex chaotic system with shifting of order is utilized to modulate the transmitted signal, which has high nonlinearity and complexity in both the frequency and time domains. The corresponding numerical simulations demonstrate the effectiveness and feasibility of the scheme.
Jin, Shuo; Li, Dengwang; Wang, Hongjun; Yin, Yong
2013-01-07
Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired from an 18F-FDG PET/CT scanner, but the two acquisition processes are separate and take a long time. As a result, there are global position errors and local deformable errors caused by respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT-to-PET image registration method in esophageal cancer, to eventually facilitate accurate positioning of the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first utilized to preprocess position errors between PET and CT images, aligning these two images on the whole. The demons algorithm, based on the optical flow field, has the features of fast processing speed and high accuracy, and the gradient of mutual information-based demons (GMI demons) algorithm adds an additional external force based on the gradient of mutual information (GMI) between the two images, which makes it suitable for multimodality image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain its robustness, and avoid local extrema, a multiresolution image pyramid structure was used before deformable registration. By quantitatively and qualitatively analyzing cases with esophageal cancer, the registration scheme proposed in this paper was shown to improve registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy applications.
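A minimal sketch of such a pipeline (global mutual-information pre-alignment followed by local demons refinement) using SimpleITK follows; the file names and settings are placeholders, and the classic demons filter stands in for the paper's GMI-driven variant (the multiresolution pyramid is also omitted for brevity).

```python
import SimpleITK as sitk

# Placeholder inputs: CT as the fixed image, PET as the moving image.
fixed = sitk.ReadImage("ct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("pet.nii.gz", sitk.sitkFloat32)

# 1) Global registration: rigid alignment driven by Mattes mutual information,
#    which tolerates the different intensity characteristics of PET and CT.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(initial, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)
global_tx = reg.Execute(fixed, moving)
moving_global = sitk.Resample(moving, fixed, global_tx, sitk.sitkLinear, 0.0)

# 2) Local deformable refinement with the classic demons filter
#    (the paper's GMI-driven external force would replace this step).
demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(2.0)  # Gaussian smoothing of the deformation field
field = demons.Execute(fixed, moving_global)
deformable_tx = sitk.DisplacementFieldTransform(field)
moving_final = sitk.Resample(moving_global, fixed, deformable_tx)
```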
Madani, Amin; Watanabe, Yusuke; Feldman, Liane S; Vassiliou, Melina C; Barkun, Jeffrey S; Fried, Gerald M; Aggarwal, Rajesh
2015-11-01
Bile duct injuries from laparoscopic cholecystectomy remain a significant source of morbidity and are often the result of intraoperative errors in perception, judgment, and decision-making. This qualitative study aimed to define and characterize higher-order cognitive competencies required to safely perform a laparoscopic cholecystectomy. Hierarchical and cognitive task analyses for establishing a critical view of safety during laparoscopic cholecystectomy were performed using qualitative methods to map the thoughts and practices that characterize expert performance. Experts with more than 5 years of experience, and who have performed at least 100 laparoscopic cholecystectomies, participated in semi-structured interviews and field observations. Verbal data were transcribed verbatim, supplemented with content from published literature, coded, thematically analyzed using grounded-theory by 2 independent reviewers, and synthesized into a list of items. A conceptual framework was created based on 10 interviews with experts, 9 procedures, and 18 literary sources. Experts included 6 minimally invasive surgeons, 2 hepato-pancreatico-biliary surgeons, and 2 acute care general surgeons (median years in practice, 11 [range 8 to 14]). One hundred eight cognitive elements (35 [32%] related to situation awareness, 47 [44%] involving decision-making, and 26 [24%] action-oriented subtasks) and 75 potential errors were identified and categorized into 6 general themes and 14 procedural tasks. Of the 75 potential errors, root causes were mapped to errors in situation awareness (24 [32%]), decision-making (49 [65%]), or either one (61 [81%]). This study defines the competencies that are essential to establishing a critical view of safety and avoiding bile duct injuries during laparoscopic cholecystectomy. This framework may serve as the basis for instructional design, assessment tools, and quality-control metrics to prevent injuries and promote a culture of patient safety. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Registration and fusion quantification of augmented reality based nasal endoscopic surgery.
Chu, Yakui; Yang, Jian; Ma, Shaodong; Ai, Danni; Li, Wenjie; Song, Hong; Li, Liang; Chen, Duanduan; Chen, Lei; Wang, Yongtian
2017-12-01
This paper quantifies the registration and fusion display errors of augmented reality-based nasal endoscopic surgery (ARNES). We comparatively investigated the spatial calibration process for front-end endoscopy and redefined the accuracy level of a calibrated endoscope by using a calibration tool with improved structural reliability. We also studied how registration accuracy was affected by the number and distribution of the deployed fiducial points (FPs) for positioning and by the measured registration time. A physically integrated ARNES prototype was custom-configured for performance evaluation in skull base tumor resection surgery with an innovative approach of dynamic endoscopic vision expansion. As advised by surgical experts in otolaryngology, we proposed a hierarchical rendering scheme to properly adapt the fused images to the required visual sensation. By constraining the rendered sight to a known depth and radius, the visual focus of the surgeon can be induced only on the anticipated critical anatomies and vessel structures to avoid misguidance. Furthermore, error analysis was conducted to examine the feasibility of hybrid optical tracking based on point clouds, which was proposed in our previous work as an in-surgery registration solution. Measured results indicated that the target registration error for ARNES can be reduced to 0.77 ± 0.07 mm. For initial registration, our results suggest that a trade-off for a new minimal registration time can be reached when a distribution of five FPs is considered. For in-surgery registration, our findings reveal that the intrinsic registration error is a major cause of performance loss. Rigid-model and cadaver experiments, together with three clinical trials, confirmed that the scene integration and display fluency of ARNES are smooth and practical. Copyright © 2017 Elsevier B.V. All rights reserved.
Effect of refractive error on temperament and character properties.
Kalkan Akcay, Emine; Canan, Fatih; Simavli, Huseyin; Dal, Derya; Yalniz, Hacer; Ugurlu, Nagihan; Gecici, Omer; Cagil, Nurullah
2015-01-01
To determine the effect of refractive error on temperament and character properties using Cloninger's psychobiological model of personality. Using the Temperament and Character Inventory (TCI), the temperament and character profiles of 41 participants with refractive errors (17 with myopia, 12 with hyperopia, and 12 with myopic astigmatism) were compared to those of 30 healthy control participants. Here, temperament comprised the traits of novelty seeking, harm-avoidance, and reward dependence, while character comprised traits of self-directedness, cooperativeness, and self-transcendence. Participants with refractive error showed significantly lower scores on purposefulness, cooperativeness, empathy, helpfulness, and compassion (P<0.05, P<0.01, P<0.05, P<0.05, and P<0.01, respectively). Refractive error might have a negative influence on some character traits, and different types of refractive error might have different temperament and character properties. These personality traits may be implicated in the onset and/or perpetuation of refractive errors and may be a productive focus for psychotherapy.
Forrester, Janet E
2015-12-01
Errors in the statistical presentation and analyses of data in the medical literature remain common despite efforts to improve the review process, including the creation of guidelines for authors and the use of statistical reviewers. This article discusses common elementary statistical errors seen in manuscripts recently submitted to Clinical Therapeutics and describes some ways in which authors and reviewers can identify errors and thus correct them before publication. A nonsystematic sample of manuscripts submitted to Clinical Therapeutics over the past year was examined for elementary statistical errors. Clinical Therapeutics has many of the same errors that reportedly exist in other journals. Authors require additional guidance to avoid elementary statistical errors and incentives to use the guidance. Implementation of reporting guidelines for authors and reviewers by journals such as Clinical Therapeutics may be a good approach to reduce the rate of statistical errors. Copyright © 2015 Elsevier HS Journals, Inc. All rights reserved.
Potential effects of reward and loss avoidance in overweight adolescents
Reyes, Sussanne; Peirano, Patricio; Luna, Beatriz; Lozoff, Betsy; Algarín, Cecilia
2015-01-01
Background Reward system and inhibitory control are brain functions that exert an influence on eating behavior regulation. We studied the differences in inhibitory control and sensitivity to reward and loss avoidance between overweight/obese and normal-weight adolescents. Methods We assessed 51 overweight/obese and 52 normal-weight 15-y-old Chilean adolescents. The groups were similar regarding sex and intelligence quotient. Using Antisaccade and Incentive tasks, we evaluated inhibitory control and the effect of incentive trials (neutral, loss avoidance, and reward) on generating correct and incorrect responses (latency and error rate). Results Compared to normal-weight group participants, overweight/obese adolescents showed shorter latency for incorrect antisaccade responses (186.0 (95% CI: 176.8–195.2) vs. 201.3 ms (95% CI: 191.2–211.5), P < 0.05) and better performance reflected by lower error rate in incentive trials (43.6 (95% CI: 37.8–49.4) vs. 53.4% (95% CI: 46.8–60.0), P < 0.05). Overweight/obese adolescents were more accurate on loss avoidance (40.9 (95% CI: 33.5–47.7) vs. 49.8% (95% CI: 43.0–55.1), P < 0.05) and reward (41.0 (95% CI: 34.5–47.5) vs. 49.8% (95% CI: 43.0–55.1), P < 0.05) compared to neutral trials. Conclusion Overweight/obese adolescents showed shorter latency for incorrect responses and greater accuracy in reward and loss avoidance trials. These findings could suggest that an imbalance of inhibition and reward systems influence their eating behavior. PMID:25927543
An auxiliary frequency tracking system for general purpose lock-in amplifiers
NASA Astrophysics Data System (ADS)
Xie, Kai; Chen, Liuhao; Huang, Anfeng; Zhao, Kai; Zhang, Hanlu
2018-04-01
Lock-in amplifiers (LIAs) are designed to measure weak signals submerged by noise. This is achieved with a signal modulator to avoid low-frequency noise and a narrow-band filter to suppress out-of-band noise. In asynchronous measurement, even a slight frequency deviation between the modulator and the reference may lead to measurement error because the filter’s passband is not flat. Because many commercial LIAs are unable to track frequency deviations, in this paper we propose an auxiliary frequency tracking system. We analyze the measurement error caused by the frequency deviation and propose both a tracking method and an auto-tracking system. This approach requires only three basic parameters, which can be obtained from any general purpose LIA via its communications interface, to calculate the frequency deviation from the phase difference. The proposed auxiliary tracking system is designed as a peripheral connected to the LIA’s serial port, removing the need for an additional power supply. The test results verified the effectiveness of the proposed system; the modified commercial LIA (model SR-850) was able to track the frequency deviation and continuous drift. For step frequency deviations, a steady tracking error of less than 0.001% was achieved within three adjustments, and the worst tracking accuracy was still better than 0.1% for a continuous frequency drift. The tracking system can be used to expand the application scope of commercial LIAs, especially for remote measurements in which the modulation clock and the local reference are separated.
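The core relation the tracking system exploits is that a constant frequency deviation appears as a linear phase drift, Δf = (1/360)·dθ/dt for phase in degrees; a minimal sketch, with made-up readings rather than the authors' hardware interface:

```python
# Estimate the modulator-reference frequency deviation from the phase drift
# reported by a lock-in amplifier, then correct the reference frequency.
def frequency_deviation(theta1_deg: float, theta2_deg: float, dt_s: float) -> float:
    """Delta_f = (1/360) * d(theta)/dt, with the phase step unwrapped to (-180, 180]."""
    dtheta = (theta2_deg - theta1_deg + 180.0) % 360.0 - 180.0
    return dtheta / 360.0 / dt_s  # Hz

f_ref = 1000.0                   # current reference frequency (Hz), illustrative
theta_a, theta_b = 10.0, 13.6    # two phase readings (deg) taken 2 s apart
df = frequency_deviation(theta_a, theta_b, dt_s=2.0)
print(f"deviation = {df*1e3:.2f} mHz; corrected reference = {f_ref + df:.4f} Hz")
```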
A knowledge-based system design/information tool for aircraft flight control systems
NASA Technical Reports Server (NTRS)
Mackall, Dale A.; Allen, James G.
1991-01-01
Research aircraft have become increasingly dependent on advanced electronic control systems to accomplish program goals. These aircraft are integrating multiple disciplines to improve performance and satisfy research objectives. This integration is being accomplished through electronic control systems. System design methods and information management have become essential to program success. The primary objective of the system design/information tool for aircraft flight control is to help transfer flight control system design knowledge to the flight test community. By providing all of the design information and covering multiple disciplines in a structured, graphical manner, flight control systems can more easily be understood by the test engineers. This will provide the engineers with the information needed to thoroughly ground test the system and thereby reduce the likelihood of serious design errors surfacing in flight. The secondary objective is to apply structured design techniques to all of the design domains. By using these techniques from the top-level system design down through the detailed hardware and software designs, it is hoped that fewer design anomalies will result. The flight test experiences of three highly complex, integrated aircraft programs are reviewed: the X-29 forward swept wing; the advanced fighter technology integration (AFTI) F-16; and the highly maneuverable aircraft technology (HiMAT) program. Significant operating anomalies, and the design errors that caused them, are examined to help identify what functions a system design/information tool should provide to assist designers in avoiding errors.
Directional control-response relationships for mining equipment.
Burgess-Limerick, R; Krupenia, V; Wallis, G; Pratim-Bannerjee, A; Steiner, L
2010-06-01
A variety of directional control-response relationships are currently found in mining equipment. Two experiments were conducted in a virtual environment to determine optimal direction control-response relationships in a wide variety of circumstances. Direction errors were measured as a function of control orientation (horizontal or vertical), location (left, front, right) and directional control-response relationships. The results confirm that the principles of consistent direction and visual field compatibility are applicable to the majority of situations. An exception is that fewer direction errors were observed when an upward movement of a horizontal lever or movement of a vertical lever away from the participants caused extension (lengthening) of the controlled device, regardless of whether the direction of movement of the control is consistent with the direction in which the extension occurs. Further, both the control of slew by horizontally oriented controls and the control of device movements in a frontal plane by the perpendicular movements of vertical levers were associated with relatively high rates of directional errors, regardless of the directional control-response relationship, and these situations should be avoided. STATEMENT OF RELEVANCE: The results are particularly applicable to the design of mining equipment such as drilling and bolting machines, and have been incorporated into MDG35.1 Guideline for bolting & drilling plant in mines (Industry & Investment NSW, 2010). The results are also relevant to the design of any equipment where vertical or horizontal levers are used to control the movement of equipment appendages, e.g. cranes mounted to mobile equipment and the like.
Nose, Takayuki; Chatani, Masashi; Otani, Yuki; Teshima, Teruki; Kumita, Shinichirou
2017-03-15
High-dose-rate (HDR) brachytherapy misdeliveries can occur at any institution, and they can cause disastrous results. Even a patient's death has been reported. Misdeliveries could be avoided with real-time verification methods. In 1996, we developed a modified C-arm fluoroscopic verification of the HDR Iridium 192 source position to prevent these misdeliveries. This method provided image quality sufficient to detect errors, and it has been in clinical use at our institutions for 20 years. The purpose of the current study is to introduce the mechanisms and validity of our straightforward C-arm fluoroscopic verification method. Conventional X-ray fluoroscopic images are degraded by spurious signals and quantum noise from Iridium 192 photons, which make source verification impractical. To improve image quality, we quadrupled the C-arm fluoroscopic X-ray dose per pulse. The pulse rate was reduced by a factor of 4 to keep the average exposure compliant with Japanese medical regulations. The images were then displayed at quarter frame rates. Sufficient quality was obtained to enable observation of the source position relative to both the applicators and the anatomy. With this method, 2 errors were detected among 2031 treatment sessions for 370 patients within a 6-year period. With the use of this modified C-arm fluoroscopic verification method, treatment errors that would otherwise have been overlooked were detected in real time. This method should be given consideration for widespread use. Copyright © 2016 Elsevier Inc. All rights reserved.
Bollen, Kenneth A; Noble, Mark D; Adair, Linda S
2013-07-30
The fetal origins hypothesis emphasizes the life-long health impacts of prenatal conditions. Birth weight, birth length, and gestational age are indicators of the fetal environment. However, these variables often have missing data and are subject to random and systematic errors caused by delays in measurement, differences in measurement instruments, and human error. With data from the Cebu (Philippines) Longitudinal Health and Nutrition Survey, we use structural equation models to explore random and systematic errors in these birth outcome measures, to analyze how maternal characteristics relate to birth outcomes, and to take account of missing data. We assess whether birth weight, birth length, and gestational age are influenced by a single latent variable that we call favorable fetal growth conditions (FFGC) and, if so, which variable is most closely related to FFGC. We find that a model with FFGC as a latent variable fits as well as a less parsimonious model that has birth weight, birth length, and gestational age as distinct individual variables. We also demonstrate that birth weight is more reliably measured than is gestational age. FFGC was significantly influenced by taller maternal stature, better nutritional stores indexed by maternal arm fat and muscle area during pregnancy, higher birth order, avoidance of smoking, and maternal age of 20-35 years. Effects of maternal characteristics on newborn weight, length, and gestational age were largely indirect, operating through FFGC. Copyright © 2013 John Wiley & Sons, Ltd.
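In generic SEM notation (the symbols are mine, not the paper's), the measurement and structural parts the authors describe can be written as:

```latex
% Measurement model: the three birth outcomes load on one latent factor
% (favorable fetal growth conditions, \xi), each with its own error term.
\begin{aligned}
x_j &= \lambda_j\,\xi + \delta_j,
  \qquad j \in \{\text{weight},\ \text{length},\ \text{gestational age}\},\\
\xi &= \textstyle\sum_k \gamma_k z_k + \zeta,
\end{aligned}
% where the z_k are maternal characteristics (stature, arm fat/muscle area,
% parity, smoking, age) and the reliability of indicator x_j is
% \rho_j = \lambda_j^2 \operatorname{Var}(\xi)/\operatorname{Var}(x_j).
```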
Yáñez, Jaime A; Remsberg, Connie M; Sayre, Casey L; Forrest, M Laird; Davies, Neal M
2011-01-01
Flip-flop pharmacokinetics is a phenomenon often encountered with extravascularly administered drugs. Occurrence of flip-flop spans preclinical to human studies. The purpose of this article is to analyze both the pharmacokinetic interpretation errors and opportunities underlying the presence of flip-flop pharmacokinetics during drug development. Flip-flop occurs when the rate of absorption is slower than the rate of elimination. If it is not recognized, it can create difficulties in the acquisition and interpretation of pharmacokinetic parameters. When flip-flop is expected or discovered, a longer duration of sampling may be necessary in order to avoid overestimation of fraction of dose absorbed. Common culprits of flip-flop disposition are modified dosage formulations; however, formulation characteristics such as the drug chemical entities themselves or the incorporated excipients can also cause the phenomenon. Yet another contributing factor is the physiological makeup of the extravascular site of administration. In this article, these causes of flip-flop pharmacokinetics are discussed with incorporation of relevant examples and the implications for drug development outlined. PMID:21837267
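The phenomenon follows from the standard one-compartment model with first-order absorption; this is textbook pharmacokinetics rather than anything specific to the article:

```latex
% One-compartment model, first-order absorption and elimination:
C(t) = \frac{F\,D\,k_a}{V\,(k_a - k_e)}\left(e^{-k_e t} - e^{-k_a t}\right)
% The terminal slope of \log C(t) reflects the smaller of k_a and k_e.
% Flip-flop: when k_a < k_e, the apparent terminal half-life measures
% absorption rather than elimination, so sampling must extend long enough
% to avoid misestimating the fraction of dose absorbed.
```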
Zhu, Shaoyin; Li, Minjie; Sheng, Lan; Chen, Peng; Zhang, Yumo; Zhang, Sean Xiao-An
2012-12-07
A spirooxazine derivative, 2-nitro-5a-(2-(4-dimethylaminophenyl)-ethylene)-6,6-dimethyl-5a,6-dihydro-12H-indolo[2,1-b][1,3]benzooxazine (P1), was explored as a sensitive cyanide probe. Different from conventional spiropyrans, P1 avoids locating the 3H-indolium cation and the 4-nitrophenolate anion in the same conjugated structure, which enhances the positive charge of the 3H-indolium cation so that sensitivity and reaction speed are greatly improved. UV-visible difference spectroscopy using the P1 detection solution as a timely reference improved the measurement accuracy and prevented the error caused by the inherent absorption change of the P1 solution over time. This enabled the "positive-negative alternating absorption peaks" in the difference spectrum to be used as a fingerprint to distinguish whether a spectral change was caused by cyanide. Benefiting from the special design of the molecular structure and the strategy of difference spectroscopy, P1 showed high selectivity and sensitivity for CN(-). A detection limit of 0.4 μM and a rate constant of 1.1 s(-1) were achieved.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing
2018-01-15
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to raise its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
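As a rough illustration of the idea (plain PSO rather than the authors' NAPSO, and a toy dataset in place of real sensor error series), one can tune an SVM regressor's C and gamma by swarm search over validation error:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(400)  # toy "error" signal
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def fitness(log_c, log_g):
    """Validation MSE of an SVR with C = 10**log_c and gamma = 10**log_g."""
    model = SVR(C=10.0**log_c, gamma=10.0**log_g).fit(X_tr, y_tr)
    return mean_squared_error(y_va, model.predict(X_va))

# Plain PSO over (log10 C, log10 gamma); NAPSO would add natural-selection
# and simulated-annealing steps here to better escape local optima.
n, iters, w, c1, c2 = 15, 30, 0.7, 1.5, 1.5
pos = rng.uniform([-1, -3], [3, 1], size=(n, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(*p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(gbest - pos)
    pos = np.clip(pos + vel, [-1, -3], [3, 1])
    f = np.array([fitness(*p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
print("best (log10 C, log10 gamma):", gbest, "MSE:", pbest_f.min())
```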
Lane, Sandi J; Troyer, Jennifer L; Dienemann, Jacqueline A; Laditka, Sarah B; Blanchette, Christopher M
2014-01-01
Older adults are at greatest risk of medication errors during the transition period of the first 7 days after admission and readmission to a skilled nursing facility (SNF). The aim of this study was to evaluate structure- and process-related factors that contribute to medication errors and harm during transition periods at an SNF. Data for medication errors and potential medication errors during the 7-day transition period for residents entering North Carolina SNFs were from the Medication Error Quality Initiative-Individual Error database from October 2006 to September 2007. The impact of SNF structure and process measures on the number of reported medication errors and harm from errors was examined using bivariate and multivariate model methods. A total of 138 SNFs reported 581 transition-period medication errors; 73 (12.6%) caused harm. Chain affiliation was associated with a reduction in the volume of errors during the transition period. One third of all reported transition errors occurred during the medication administration phase of the medication use process, where dose omissions were the most common type of error; however, dose omissions caused harm less often than wrong-dose errors did. Prescribing errors were much less common than administration errors but were much more likely to cause harm. Both structure and process measures of quality were related to the volume of medication errors. However, process quality measures may play a more important role in predicting harm from errors during the transition of a resident into an SNF. Medication errors during transition could be reduced by improving both prescribing processes and the transcription and documentation of orders.
Avoidance of selenium-treated food by mallards
Heinz, G.H.; Sanderson, C.J.
1990-01-01
Adult, male mallards (Anas platyrhynchos) were given a choice between a control diet and a diet containing 5, 10 or 20 ppm selenium as selenomethionine dissolved in water and mixed into the diet. At 10 and 20 ppm, selenium-treated diets were avoided. Avoidance appeared to be caused by a conditioned response, probably to illness caused by the selenium and not to an aversion to the taste of the selenium.
Teachers Avoiding Learners' Avoidance: Is It Possible?
ERIC Educational Resources Information Center
Tadayyon, Maedeh; Zarrinabadi, Nourollah; Ketabi, Saeed
2016-01-01
Dealing with learners who prefer to take the back seat and avoid classroom participation can be every teacher's nightmare. This lack of participation may cause teacher frustration, and possibly the only way to reduce this lack of participation is to access the concept of avoidance strategy. Avoidance strategy is the abandonment of a classroom task…
Gorgot, Luis Ramon Marques da Rocha; Santos, Iná; Valle, Neiva; Matijasevich, Alicia; Matisajevich, Alicia; Barros, Aluisio J D; Albernaz, Elaine
2011-04-01
To describe avoidable deaths of children from the 2004 Pelotas Birth Cohort. The deaths of 92 children from the 2004 Pelotas Birth Cohort that occurred between 2004 and 2008 were identified and classified according to the Brazilian List of Avoidable Causes of Mortality of the Brazilian Unified Healthcare System. The Mortality Information System (SIM) for the State of Rio Grande do Sul (Southern Brazil) and the city of Pelotas was screened to search for deaths that occurred outside the city, as well as causes of deaths after the 1st year. Causes of infant deaths (<1 year of age) were compared between information from a sub-study and SIM. Mortality coefficients per 1,000 live births (LB) and proportional mortality for avoidable causes, including by type of health facility (traditional or Family Health Strategy), were calculated. The mortality coefficient was 22.2/1,000 LB; 82 of the deaths occurred in the first year of life (19.4/1,000 LB), including 37 (45%) in the first week. More than ¾ of the deaths (70/92) were avoidable. In infancy, according to the sub-study, the majority (42/82) could have been prevented through adequate care of the woman during pregnancy; according to SIM, the majority could have been prevented through adequate newborn care (32/82). There was no difference in the proportion of avoidable deaths by type of health facility. The proportion of avoidable deaths is high. The quality of death certificate registries needs improvement so that avoidable deaths can be employed as an indicator to monitor maternal and child health care.
Effect of harmane, an endogenous β-carboline, on learning and memory in rats.
Celikyurt, Ipek Komsuoglu; Utkan, Tijen; Gocmez, Semil Selcen; Hudson, Alan; Aricioglu, Feyza
2013-01-01
Our aim was to investigate the effects of acute harmane administration on the learning and memory performance of rats using the three-panel runway paradigm and the passive avoidance test. Male rats received harmane (2.5, 5, and 7.5 mg/kg, i.p.) or saline 30 min before each session of the experiments. In the three-panel runway paradigm, harmane did not affect the number of errors or the latency in reference memory. Harmane significantly increased working-memory errors at doses of 5 mg/kg and 7.5 mg/kg, and latency changed significantly only at 7.5 mg/kg in comparison with the control group. Animals were given a pre-training injection of harmane in the passive avoidance test in order to assess learning function. Harmane treatment decreased the retention latency significantly and dose-dependently, which indicates an impairment in learning. In this study, harmane impaired working memory in the three-panel runway test and learning in the passive avoidance test. As an endogenous bioactive molecule, harmane might have a critical role in the modulation of learning and memory functions. Copyright © 2012 Elsevier Inc. All rights reserved.
Adaptive control of nonlinear system using online error minimum neural networks.
Jia, Chao; Li, Xiaoli; Wang, Kang; Ding, Dawei
2016-11-01
In this paper, a new learning algorithm named OEM-ELM (Online Error Minimized-ELM) is proposed, based on the ELM (Extreme Learning Machine) neural network algorithm and the growth of its main structure. The core ideas of the OEM-ELM algorithm are online learning, evaluation of network performance, and increasing the number of hidden nodes. It combines the advantages of OS-ELM and EM-ELM, which improves identification capability and avoids network redundancy. An adaptive controller based on the proposed OEM-ELM algorithm is set up, which has a stronger capability to adapt to environmental change. The adaptive control of a chemical process, the Continuous Stirred Tank Reactor (CSTR), is also given as an application. The simulation results show that, with respect to the traditional ELM algorithm, the proposed algorithm can avoid network redundancy and greatly improve control performance. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
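For context, a basic ELM, the building block that OS-ELM/EM-ELM/OEM-ELM extend with online updates and hidden-node growth, trains only the output weights of a randomly initialized hidden layer via a pseudo-inverse. A minimal generic sketch, not the paper's algorithm:

```python
import numpy as np

class ELM:
    """Basic extreme learning machine for regression."""
    def __init__(self, n_hidden: int, seed: int = 0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        d = X.shape[1]
        # Random input weights and biases are fixed and never trained.
        self.W = self.rng.standard_normal((d, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # hidden-layer outputs
        self.beta = np.linalg.pinv(H) @ y     # least-squares output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy usage; EM-ELM/OEM-ELM would instead grow n_hidden incrementally,
# re-evaluating the error after each addition rather than refitting from scratch.
X = np.linspace(0, 2*np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
model = ELM(n_hidden=25).fit(X, y)
print("train RMSE:", np.sqrt(np.mean((model.predict(X) - y)**2)))
```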
Gray, Rob; Orn, Anders; Woodman, Tim
2017-02-01
Are pressure-induced performance errors in experts associated with novice-like skill execution (as predicted by reinvestment/conscious processing theories) or expert execution toward a result that the performer typically intends to avoid (as predicted by ironic processes theory)? The present study directly compared these predictions using a baseball pitching task with two groups of experienced pitchers. One group was shown only their target, while the other group was shown the target and an ironic (avoid) zone. Both groups demonstrated significantly fewer target hits under pressure. For the target-only group, this was accompanied by significant changes in expertise-related kinematic variables. In the ironic group, the number of pitches thrown in the ironic zone was significantly higher under pressure, and there were no significant changes in kinematics. These results suggest that information about an opponent can influence the mechanisms underlying pressure-induced performance errors.
Use of dual coolant displacing media for in-process optical measurement of form profiles
NASA Astrophysics Data System (ADS)
Gao, Y.; Xie, F.
2018-07-01
In-process measurement supports feedback control to reduce workpiece surface form error. Without it, the workpiece surface must be measured offline, causing significant errors in workpiece positioning and reduced productivity. To offer better performance, a new in-process optical measurement method based on the use of dual coolant displacing media is proposed and studied, which uses an air phase and a liquid phase together to resist coolant and achieve in-process measurement. In the proposed new design, coolant is used to replace the previously used clean water, to avoid coolant dilution. Compared with the previous methods, the distance between the applicator and the workpiece surface can be relaxed to 1 mm. This is 4 times larger than before, thus permitting measurement of curved surfaces. Air consumption is up to 1.5 times lower than in the best method previously available. For a sample workpiece with curved surfaces, the relative error of profile measurement under coolant conditions can be as small as 0.1% compared with that under no-coolant conditions. Problems in comparing measured 3D surfaces are discussed. A comparative study between a Bruker Npflex optical profiler and the developed new in-process optical profiler was conducted. For a surface area of 5.5 mm × 5.5 mm, the average measurement error under coolant conditions is only 0.693 µm. In addition, the error due to the new method is only 0.10 µm when compared between coolant and no-coolant conditions. The effect of a thin liquid film on the workpiece surface is discussed. The experimental results show that the new method successfully solves the coolant dilution problem and is able to accurately measure the workpiece surface whilst it is fully submerged in the opaque coolant. The proposed new method is advantageous and should be very useful for in-process optical form profile measurement in precision machining.
Dong, Zhixu; Sun, Xingwei; Chen, Changzheng; Sun, Mengnan
2018-04-13
The inconvenient loading and unloading of a long and heavy drill pipe gives rise to difficulty in measuring the contour parameters of its threads at both ends. To solve this problem, in this paper we take the SCK230 drill pipe thread-repairing machine tool as a carrier to design and achieve a fast, on-machine measuring system based on a laser probe. This system drives a laser displacement sensor to acquire the contour data of a certain axial section of the thread by using the servo function of a CNC machine tool. To correct the sensor's measurement errors caused by the measuring-point inclination angle, an inclination error model is built to compensate the data in real time. To better suppress random error interference and preserve the real contour information, a new wavelet threshold function is proposed to process the data through wavelet threshold denoising. The discrete data after denoising are segmented according to the geometrical characteristics of the drill pipe thread, and the regression model of the contour data in each section is fitted using the method of weighted total least squares (WTLS). The thread parameters are then calculated in real time to judge the processing quality. Inclination error experiments show that the proposed compensation model is accurate and effective, and that it can improve the data acquisition accuracy of the sensor. Simulation results indicate that the improved threshold function has better continuity and self-adaptability, which guarantees the denoising effect while avoiding the complete elimination of real data distorted by random errors. Additionally, NC50 thread-testing experiments show that the proposed on-machine measuring system can complete the measurement of a 25 mm thread in 7.8 s, with a measurement accuracy of ±8 μm and a repeatability limit ≤ 4 μm (high repeatability); hence, the accuracy and efficiency of measurement are both improved.
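The denoising step can be illustrated with PyWavelets and the standard soft-threshold rule; the authors' improved threshold function and the WTLS fitting are not reproduced here, and the signal, wavelet, and threshold choices below are mine:

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
x = np.linspace(0, 25, 2048)               # axial position (mm), illustrative
profile = 1.5 * np.abs((x % 5) - 2.5)      # idealized saw-tooth thread contour
noisy = profile + 0.02 * rng.standard_normal(x.size)

coeffs = pywt.wavedec(noisy, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # MAD noise estimate
thr = sigma * np.sqrt(2 * np.log(noisy.size))     # universal threshold
# Soft-threshold the detail coefficients; keep the approximation intact.
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: noisy.size]
print("residual RMS:", np.sqrt(np.mean((denoised - profile) ** 2)))
```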
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wijesooriya, K; Seitter, K; Desai, V
Purpose: To present our single-institution experience of catching errors with trajectory log file analysis. The reported causes of failures, probability of occurrence (O), severity of effects (S), and probability of the failures being undetected (D) could be added to guidelines for FMEA analysis. Methods: From March 2013 to March 2014, 19569 patient treatment fields/arcs were analyzed. This work includes checking all 131 treatment delivery parameters for all patients, all treatment sites, and all treatment delivery fractions. TrueBeam trajectory log files for all treatment field types as well as all imaging types were accessed, read every 20 ms, and every control point (a total of 37 million parameters) compared to the physician-approved plan in the planning system. Results: Couch angle outliers: N = 327, range = −1.7 to 1.2 deg; gantry angle outliers: N = 59, range = 0.09 to 5.61 deg; collimator angle outliers: N = 13, range = −0.2 to 0.2 deg. VMAT cases have slightly larger variations in mechanical parameters. MLC: 3D single-control-point fields have a maximum deviation of 0.04 mm, 39 step-and-shoot IMRT cases have MLC deviations of −0.3 to 0.5 mm, and all (1286) VMAT cases have deviations of −0.9 to 0.7 mm. Two possible serious errors were found: 1) a 4 cm isocenter shift for the PA beam of an AP-PA pair, under-dosing a portion of the PTV by 25%; 2) delivery with MLC leaves abutted behind the jaws as opposed to the midline as planned, leading to under-dosing of a small volume of the PTV by 25% by just the boost plan. Due to their origin, neither of these errors could have been detected by pre-treatment verification. Conclusion: Performing trajectory log file analysis could catch typically undetected errors and avoid potentially adverse incidents.
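Stripped of the vendor log format, an audit of this kind is a tolerance comparison of every logged control-point parameter against the plan; the sketch below shows the shape of such a check on made-up data (the TrueBeam binary format and the full 131-parameter set are not reproduced):

```python
# Minimal shape of a trajectory-log audit: flag any delivered control-point
# value that differs from the planned value by more than its tolerance.
TOLERANCES = {"gantry_deg": 1.0, "collimator_deg": 1.0,
              "couch_deg": 1.0, "mlc_mm": 0.5}   # illustrative limits

plan = {"gantry_deg": [180.0, 182.0], "collimator_deg": [30.0, 30.0],
        "couch_deg": [0.0, 0.0], "mlc_mm": [12.0, 12.5]}
log = {"gantry_deg": [180.1, 187.7], "collimator_deg": [30.0, 30.1],
       "couch_deg": [0.0, -1.7], "mlc_mm": [12.0, 12.4]}

def audit(plan, log, tol):
    flags = []
    for param, planned in plan.items():
        for cp, (p, d) in enumerate(zip(planned, log[param])):
            if abs(d - p) > tol[param]:
                flags.append((param, cp, p, d))
    return flags

for param, cp, p, d in audit(plan, log, TOLERANCES):
    print(f"control point {cp}: {param} planned {p} but delivered {d}")
```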
Risks and injuries in laser and high-frequency applications
NASA Astrophysics Data System (ADS)
Giering, K.; Philipp, Carsten M.; Berlien, Hans-Peter
1995-01-01
An analysis of injuries and risks in the use of high frequency (HF) and lasers in medicine, based on a literature search with MEDLINE, was performed. The cases reported in the literature were classified according to the following criteria: (1) Avoidable in an optimal operational procedure. These kinds of injuries are caused by a chain of unfortunate incidents. They are in principle avoidable by the 'right action at the right time', which presupposes appropriate training of the operating team, selection of the optimal parameters for the procedure, and consideration of all safety instructions. (2) Avoidable, caused by malfunction of the equipment and/or accessories. The injuries classified into this group are avoidable if all safety regulations are fulfilled. This includes a pre-operational check-up and the use only of medical lasers and high frequency devices that meet the international safety standards. (3) Avoidable, caused by misuse/mistake. Injuries in this group were caused by an inappropriate selection of the procedure, wrong medical indication, or mistakes during application. (4) Unavoidable, fateful. These injuries can be caused by risks inherent to the type of energy used, or by malfunction of the equipment and/or accessories even though a pre-operational check-up was done. Some risks and complications are common to high frequency and laser application. But whereas these risks can easily be excluded in laser surgery, if high frequency is used their exclusion often requires great expenditure, or they are not avoidable at all. No unavoidable risks due to laser energy occur.
Errors Affect Hypothetical Intertemporal Food Choice in Women
Sellitto, Manuela; di Pellegrino, Giuseppe
2014-01-01
Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534
Risk prediction and aversion by anterior cingulate cortex.
Brown, Joshua W; Braver, Todd S
2007-12-01
The recently proposed error-likelihood hypothesis suggests that the anterior cingulate cortex (ACC) and surrounding areas will become active in proportion to the perceived likelihood of an error. The hypothesis was originally derived from a computational model prediction. The same computational model now makes a further prediction that the ACC will be sensitive not only to predicted error likelihood, but also to the predicted magnitude of the consequences, should an error occur. The product of error likelihood and predicted error consequence magnitude defines the general "expected risk" of a given behavior in a manner analogous but orthogonal to subjective expected utility theory. New fMRI results from an incentive change signal task now replicate the error-likelihood effect, validate the further predictions of the computational model, and suggest why some segments of the population may fail to show an error-likelihood effect. In particular, error-likelihood effects and expected risk effects in general indicate greater sensitivity to earlier predictors of errors and are seen in risk-averse but not risk-tolerant individuals. Taken together, the results are consistent with an expected risk model of the ACC and suggest that the ACC may generally contribute to cognitive control by recruiting brain activity to avoid risk.
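In compact form, the quantity the model computes is an expectation over error outcomes rather than utilities (notation mine):

```latex
\text{ExpectedRisk} \;=\; P(\text{error}) \times M(\text{consequences}),
\qquad \text{cf. } \mathbb{E}[U] \;=\; \textstyle\sum_i p_i\,u_i .
```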
Errors Analysis of Students in Mathematics Department to Learn Plane Geometry
NASA Astrophysics Data System (ADS)
Mirna, M.
2018-04-01
This article describes the results of qualitative descriptive research that reveals the locations, types, and causes of student errors in answering plane geometry problems at the problem-solving level. Answers from 59 students on three test items showed that student errors ranged from understanding the concepts and principles of geometry itself to errors in applying them to problem solving. The types of error consist of concept errors, principle errors, and operational errors. The results of reflection with four subjects reveal that the causes of the errors are: 1) students' learning motivation is very low, 2) in their high school learning experience, geometry was seen as unimportant, 3) students have very little experience using their reasoning in problem solving, and 4) students' reasoning ability is still very low.
Can we improve patient safety?
Corbally, Martin Thomas
2014-01-01
Despite greater awareness of patient safety issues, especially in the operating room, and the widespread implementation of the World Health Organization (WHO) surgical time out, errors, especially wrong site surgery, continue. Most such errors are due to lapses in communication in which decision makers fail to consult or confirm operative findings or, more worryingly, in which parental concerns over the planned procedure are ignored or not followed through. The WHO Surgical Pause/Time Out aims to capture these errors and prevent them, but the combination of human error and complex hospital environments can overwhelm even robust safety structures and simple common sense. Parents are the ultimate repository of information on their child's condition and planned surgery but are traditionally excluded from the process of Surgical Pause and Time Out, perhaps to avoid additional stress. In addition, surgeons, like pilots, are subject to the phenomenon of "plan-continue-fail" with potentially disastrous outcomes. If we wish to improve patient safety during surgery and avoid wrong site errors then we must include parents in the Surgical Pause/Time Out. A recent pilot study has shown that doing so added to neither staff nor parental stress and, moreover, that 100% of parents considered it should be a mandatory component of the Surgical Pause. Surgeons should be required to confirm that the planned procedure is in keeping with the operative findings, especially in extirpative surgery, and this "step back" should be incorporated into the standard Surgical Pause. It is clear that we must improve patient safety further, and these simple measures should add to that potential.
Tully, Mary P; Ashcroft, Darren M; Dornan, Tim; Lewis, Penny J; Taylor, David; Wass, Val
2009-01-01
Prescribing errors are common; they result in adverse events and harm to patients, and it is unclear how best to prevent them because recommendations are more often based on surmised rather than empirically collected data. The aim of this systematic review was to identify all informative published evidence concerning the causes of and factors associated with prescribing errors in specialist and non-specialist hospitals, collate it, analyse it qualitatively and synthesize conclusions from it. Seven electronic databases were searched for articles published between 1985 and July 2008. The reference lists of all informative studies were searched for additional citations. To be included, a study had to be of handwritten prescriptions for adult or child inpatients that reported empirically collected data on the causes of or factors associated with errors. Publications in languages other than English and studies that evaluated errors for only one disease, one route of administration or one type of prescribing error were excluded. Seventeen papers reporting 16 studies, selected from 1268 papers identified by the search, were included in the review. Studies from the US and the UK in university-affiliated hospitals predominated (10/16 [62%]). The definition of a prescribing error varied widely and the included studies were highly heterogeneous. Causes were grouped according to Reason's model of accident causation into active failures, error-provoking conditions and latent conditions. The active failure most frequently cited was a mistake due to inadequate knowledge of the drug or the patient. Skills-based slips and memory lapses were also common. Where error-provoking conditions were reported, there was at least one per error. These included lack of training or experience, fatigue, stress, high workload for the prescriber and inadequate communication between healthcare professionals. Latent conditions included reluctance to question senior colleagues and inadequate provision of training. Prescribing errors are often multifactorial, with several active failures and error-provoking conditions often acting together to cause them. In the face of such complexity, solutions addressing a single cause, such as lack of knowledge, are likely to have only limited benefit. Further rigorous study, seeking potential ways of reducing error, needs to be conducted. Multifactorial interventions across many parts of the system are likely to be required.
Do calculation errors by nurses cause medication errors in clinical practice? A literature review.
Wright, Kerri
2010-01-01
This review aims to examine the literature available to ascertain whether medication errors in clinical practice are the result of nurses' miscalculating drug dosages. Studies highlighting the poor calculation skills of nurses and student nurses have tested those skills using written drug calculation tests in formal classroom settings [Kapborg, I., 1994. Calculation and administration of drug dosage by Swedish nurses, student nurses and physicians. International Journal for Quality in Health Care 6(4), 389-395; Hutton, M., 1998. Nursing mathematics: the importance of application. Nursing Standard 13(11), 35-38; Weeks, K., Lynne, P., Torrance, C., 2000. Written drug dosage errors made by students: the threat to clinical effectiveness and the need for a new approach. Clinical Effectiveness in Nursing 4, 20-29; Wright, K., 2004. Investigation to find strategies to improve student nurses' maths skills. British Journal of Nursing 13(21), 1280-1287; Wright, K., 2005. An exploration into the most effective way to teach drug calculation skills to nursing students. Nurse Education Today 25, 430-436], but there have been no reviews of the literature on medication errors in practice that specifically examine whether the medication errors are caused by nurses' poor calculation skills. The databases Medline, CINAHL, British Nursing Index (BNI), Journal of the American Medical Association (JAMA) and Archives and Cochrane reviews were searched for research studies or systematic reviews which reported on the incidence or causes of drug errors in clinical practice. In total 33 articles met the criteria for this review. There were no studies that examined nurses' drug calculation errors in practice. As a result, studies and systematic reviews that investigated the types and causes of drug errors were examined to establish whether miscalculations by nurses were the cause of errors. The review found insufficient evidence to suggest that medication errors are caused by nurses' poor calculation skills. Of the 33 studies reviewed, only five articles specifically recorded information relating to calculation errors, and only two of these detected errors using the direct observational approach. The literature suggests that there are other, more pressing aspects of nurses' preparation and administration of medications which are contributing to medication errors in practice and require more urgent attention, which calls into question the current focus on the calculation and numeracy skills of pre-registration and qualified nurses (NMC 2008). However, more research is required into calculation errors in practice. In particular, there is a need for a direct observational study of paediatric nurses, as there are presently none examining this area of practice.
Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan
2018-02-01
Automated medication systems have been found to reduce errors in the medication process, but little is known about the cost-effectiveness of such systems. The objective of this study was to perform a model-based indirect cost-effectiveness comparison of three different, real-world automated medication systems compared with current standard practice. The considered automated medication systems were a patient-specific automated medication system (psAMS), a non-patient-specific automated medication system (npsAMS), and a complex automated medication system (cAMS). The economic evaluation used original effect and cost data from prospective, controlled, before-and-after studies of medication systems implemented at a Danish hematological ward and an acute medical unit. Effectiveness was described as the proportion of clinical and procedural error opportunities that were associated with one or more errors. An error was defined as a deviation from the electronic prescription, from standard hospital policy, or from written procedures. The cost assessment was based on 6-month standardization of observed cost data. The model-based comparative cost-effectiveness analyses were conducted with system-specific assumptions of the effect size and costs in scenarios with consumptions of 15,000, 30,000, and 45,000 doses per 6-month period. With 30,000 doses the cost-effectiveness model showed that the cost-effectiveness ratio expressed as the cost per avoided clinical error was €24 for the psAMS, €26 for the npsAMS, and €386 for the cAMS. Comparison of the cost-effectiveness of the three systems in relation to different valuations of an avoided error showed that the psAMS was the most cost-effective system regardless of error type or valuation. The model-based indirect comparison against the conventional practice showed that psAMS and npsAMS were more cost-effective than the cAMS alternative, and that psAMS was more cost-effective than npsAMS.
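The cost-effectiveness ratio used above is plain arithmetic: incremental cost divided by the number of errors avoided. A minimal sketch in Python, with illustrative inputs; the study reports only the resulting ratios (e.g. EUR 24 for the psAMS), so the cost and error-count figures below are assumptions:

```python
# Minimal sketch of the cost-per-avoided-error ratio described above.
# The cost and error-count figures are illustrative assumptions, not data
# from the study, which reports only the resulting ratios.

def cost_per_avoided_error(incremental_cost, errors_baseline, errors_system):
    """Incremental cost of the system divided by the errors it avoids."""
    avoided = errors_baseline - errors_system
    if avoided <= 0:
        raise ValueError("system avoids no errors; the ratio is undefined")
    return incremental_cost / avoided

# Hypothetical 6-month figures for a 30,000-dose scenario:
print(cost_per_avoided_error(incremental_cost=48_000,
                             errors_baseline=5_000,
                             errors_system=3_000))   # -> 24.0 EUR per avoided error
```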
Global Warming Estimation From Microwave Sounding Unit
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Dalu, G.
1998-01-01
Microwave Sounding Unit (MSU) Ch 2 data sets, collected from sequential, polar-orbiting, Sun-synchronous National Oceanic and Atmospheric Administration operational satellites, contain systematic calibration errors that are coupled to the diurnal temperature cycle over the globe. Since these coupled errors in MSU data differ between successive satellites, it is necessary to make compensatory adjustments to these multisatellite data sets in order to determine long-term global temperature change. With the aid of the observations during overlapping periods of successive satellites, we can determine such adjustments and use them to account for the coupled errors in the long-term time series of MSU Ch 2 global temperature. In turn, these adjusted MSU Ch 2 data sets can be used to yield the global temperature trend. In a pioneering study, Spencer and Christy (SC) (1990) developed a procedure to derive the global temperature trend from MSU Ch 2 data. In their procedure, the magnitude of the coupled errors is not determined explicitly; instead, based on some assumptions, these coupled errors are eliminated in three separate steps. Such a procedure can leave unaccounted residual errors in the time series of the temperature anomalies deduced by SC, which could lead to a spurious long-term temperature trend derived from their analysis. In the present study, we have developed a method that avoids the shortcomings of the SC procedure. Based on our analysis, we find there is a global warming of 0.23+/-0.12 K between 1980 and 1991. Also, in this study, the time series of global temperature anomalies constructed by removing the global mean annual temperature cycle compares favorably with a similar time series obtained from conventional observations of temperature.
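The core of the overlap adjustment is estimable from the overlap periods alone. A minimal sketch with synthetic numbers, assuming a constant inter-satellite bias; real MSU merging must also model the diurnal-drift terms the abstract describes:

```python
import numpy as np

# Estimate a constant inter-satellite bias from the period when two satellites
# observe simultaneously, then remove it before concatenating the records.
# All numbers are synthetic; this shows only the simplest overlap adjustment.

rng = np.random.default_rng(0)
t = np.arange(120)                                    # months
truth = 0.002 * t + 0.1 * np.sin(2 * np.pi * t / 12)  # trend + annual cycle

sat_a = truth[:72] + rng.normal(0, 0.05, 72)          # months 0-71
sat_b = truth[48:] + 0.3 + rng.normal(0, 0.05, 72)    # months 48-119, +0.3 K bias

overlap_a = sat_a[48:72]                 # months 48-71 seen by satellite A
overlap_b = sat_b[:24]                   # the same months seen by satellite B
bias = np.mean(overlap_b - overlap_a)    # estimated inter-satellite offset

merged = np.concatenate([sat_a, sat_b[24:] - bias])
print(f"estimated bias: {bias:.3f} K")   # close to the injected 0.3 K
```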
NASA Astrophysics Data System (ADS)
Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli
2017-11-01
The heteroscedasticity treatment in residual error models directly impacts the model calibration and prediction uncertainty estimation. This study compares three methods to deal with the heteroscedasticity, including the explicit linear modeling (LM) method and nonlinear modeling (NL) method using hyperbolic tangent function, as well as the implicit Box-Cox transformation (BC). Then a combined approach (CA) combining the advantages of both LM and BC methods has been proposed. In conjunction with the first order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that the LM-SEP yields the poorest streamflow predictions with the widest uncertainty band and unrealistic negative flows. The NL and BC methods can better deal with the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids the negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
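Of the treatments compared, the implicit Box-Cox transformation is the easiest to illustrate in isolation. A minimal sketch using scipy on synthetic flows; this is a generic illustration of the BC idea, not the VIC/SEP likelihood setup of the study:

```python
import numpy as np
from scipy import stats

# Transform positive flows so that residual spread no longer grows with flow
# magnitude, then compute residuals in the transformed space.

rng = np.random.default_rng(1)
flow_obs = rng.gamma(shape=2.0, scale=50.0, size=500)        # synthetic flows
flow_sim = flow_obs * rng.lognormal(0.0, 0.15, size=500)     # error grows with flow

z_obs, lam = stats.boxcox(flow_obs)          # fit lambda on the observations
z_sim = stats.boxcox(flow_sim, lmbda=lam)    # apply the same lambda to simulations

residuals = z_obs - z_sim
print(f"lambda = {lam:.2f}, residual std = {residuals.std():.3f}")
```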
Márki-Zay, János; Klein, Christoph L; Gancberg, David; Schimmel, Heinz G; Dux, László
2009-04-01
Depending on the method used, rare sequence variants adjacent to the single nucleotide polymorphism (SNP) of interest may cause unusual or erroneous genotyping results. Because such rare variants are known for many genes commonly tested in diagnostic laboratories, we organized a proficiency study to assess their influence on the accuracy of reported laboratory results. Four external quality control materials were processed and sent to 283 laboratories through 3 EQA organizers for analysis of the prothrombin 20210G>A mutation. Two of these quality control materials contained sequence variants introduced by site-directed mutagenesis. One hundred eighty-nine laboratories participated in the study. When samples gave a usual result with the method applied, the error rate was 5.1%. Detailed analysis showed that more than 70% of the failures were reported from only 9 laboratories. Allele-specific amplification-based PCR had a much higher error rate than other methods (18.3% vs 2.9%). The variants 20209C>T and [20175T>G; 20179_20180delAC] resulted in unusual genotyping results in 67 and 85 laboratories, respectively. Eighty-three (54.6%) of these unusual results were not recognized, 32 (21.1%) were attributed to technical issues, and only 37 (24.3%) were recognized as another sequence variant. Our findings revealed that some of the participating laboratories were not able to recognize and correctly interpret unusual genotyping results caused by rare SNPs. Our study indicates that the majority of the failures could be avoided by improved training and careful selection and validation of the methods applied.
Wu, Yifei; Thibos, Larry N; Candy, T Rowan
2018-05-07
Eccentric photorefraction and Purkinje image tracking are used to estimate refractive state and eye position simultaneously. Beyond vision screening, they provide insight into typical and atypical visual development. Systematic analysis of the effect of refractive error and spectacles on photorefraction data is needed to gauge the accuracy and precision of the technique. Simulation of two-dimensional, double-pass eccentric photorefraction was performed (Zemax). The inward pass included appropriate light sources, lenses and a single surface pupil plane eye model to create an extended retinal image that served as the source for the outward pass. Refractive state, as computed from the luminance gradient in the image of the pupil captured by the model's camera, was evaluated for a range of refractive errors (-15D to +15D), pupil sizes (3 mm to 7 mm) and two sets of higher-order monochromatic aberrations. Instrument calibration was simulated using -8D to +8D trial lenses at the spectacle plane for: (1) vertex distances from 3 mm to 23 mm, (2) uncorrected and corrected hyperopic refractive errors of +4D and +7D, and (3) uncorrected and corrected astigmatism of 4D at four different axes. Empirical calibration of a commercial photorefractor was also compared with a wavefront aberrometer for human eyes. The pupil luminance gradient varied linearly with refractive state for defocus less than approximately 4D (5 mm pupil). For larger errors, the gradient magnitude saturated and then reduced, leading to under-estimation of refractive state. Additional inaccuracy (up to 1D for 8D of defocus) resulted from spectacle magnification in the pupil image, which would reduce precision in situations where vertex distance is variable. The empirical calibration revealed a constant offset between the two clinical instruments. Computational modelling demonstrates the principles and limitations of photorefraction to help users avoid potential measurement errors. Factors that could cause clinically significant errors in photorefraction estimates include high refractive error, vertex distance and magnification effects of a spectacle lens, increased higher-order monochromatic aberrations, and changes in primary spherical aberration with accommodation. The impact of these errors increases with increasing defocus. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.
NASA Astrophysics Data System (ADS)
Guoxin, Cheng
2015-01-01
In recent years, several calibration-independent transmission/reflection methods have been developed to determine the complex permittivity of liquid materials. However, these methods suffer from their own respective drawbacks, such as the requirement for multiple measurement cells or the presence of air gap effects. To eliminate these drawbacks, a fast calibration-independent method is proposed in this paper. The present method has two main advantages over those in the literature. First, only one measurement cell is required. The cell is measured when it is empty and when it is filled with liquid. This avoids the air gap effect present in approaches that require measuring a structure with two reference ports connected to each other. Second, it eliminates the effects of uncalibrated coaxial cables, adaptors, and plug sections; systematic errors caused by the experimental setup are avoided by the wave cascading matrix manipulations. Using this method, three dielectric reference liquids, i.e., ethanol, ethanediol, and pure water, and low-loss transformer oil are measured over a wide frequency range to validate the proposed method. Its accuracy is assessed by comparing the results with those obtained from other well-known techniques. It is demonstrated that the proposed method can be used as a robust approach for fast complex permittivity determination of liquid materials.
Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun
2017-08-01
The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with the existing methods, we do not need to estimate the threshold associated with absolute phase values to determine the fringe order error, thus making it more reliable and avoiding the procedure of search in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.
A STUDY ON REASONS OF ERRORS OF OLD SURVEY MAPS IN CADASTRAL SYSTEM
NASA Astrophysics Data System (ADS)
Yanase, Norihiko
This paper explicates the sources of the survey map errors that were made in the 19th century. The present cadastral system stands on registers and survey maps that were compiled to change the land taxation system in the Meiji era. Many Japanese may attribute such acreage errors, amounting to from several to more than ten percent of the area in survey maps, to poor survey technique by farmers, deliberately long measures used to avoid heavy tax, careless official checks and other deceptions. Based on an analysis of the old survey regulations, the history of map making and studies of the cadastral system, the author maintains that such errors, called nawa-nobi, were lawful under the survey regulations of the time. In addition, another kind of survey map error should be pointed out: the permissive subdivision system, which could grant approval without a real survey, and the disposal of state property with inadequate surveys.
Runway safety : it's everybody's business
DOT National Transportation Integrated Search
2001-07-01
This booklet tells pilots and controllers what they can do to help prevent runway incursions by helping them to avoid situations that lead to errors and alerting them to situations where extra vigilance is required. It also provides information on how co...
Haghighi, Mohammad Hosein Hayavi; Dehghani, Mohammad; Teshnizi, Saeid Hoseini; Mahmoodi, Hamid
2014-01-01
Accurate cause of death coding leads to organised and usable death information, but there are factors that influence documentation on death certificates and therefore affect the coding. We reviewed the role of documentation errors in the accuracy of death coding at Shahid Mohammadi Hospital (SMH), Bandar Abbas, Iran. We studied the death certificates of all deceased patients in SMH from October 2010 to March 2011. Researchers determined and coded the underlying cause of death on the death certificates according to the guidelines issued by the World Health Organization in Volume 2 of the International Statistical Classification of Diseases and Related Health Problems, 10th revision (ICD-10). The necessary ICD coding rules (such as the General Principle, Rules 1-3, the modification rules and other instructions about death coding) were applied to select the underlying cause of death on each certificate. Demographic details and documentation errors were then extracted. Data were analysed with descriptive statistics and chi-squared tests. The accuracy rate of cause of death coding was 51.7%; accuracy demonstrated a statistically significant relationship with major errors (p=.001) but not with minor errors. The factors that result in poor quality of cause of death coding in SMH are lack of coder training, documentation errors and the undesirable structure of the death certificates.
Dachman, Abraham H.; Wroblewski, Kristen; Vannier, Michael W.; Horne, John M.
2014-01-01
Computed tomography (CT) colonography is a screening modality used to detect colonic polyps before they progress to colorectal cancer. Computer-aided detection (CAD) is designed to decrease errors of detection by finding and displaying polyp candidates for evaluation by the reader. CT colonography CAD false-positive results are common and have numerous causes. Analysis of the relative frequency of CAD false-positive results and their effect on reader performance, based on a 19-reader, 100-case trial, shows that the vast majority of CAD false-positive results were dismissed by readers. Many CAD false-positive results are easily disregarded, including those that result from coarse mucosa, reconstruction, peristalsis, motion, streak artifacts, diverticulum, rectal tubes, and lipomas. CAD false-positive results caused by haustral folds, extracolonic candidates, diminutive lesions (<6 mm), anal papillae, internal hemorrhoids, varices, extrinsic compression, and flexural pseudotumors are almost always recognized and disregarded. The ileocecal valve and tagged stool are common sources of CAD false-positive results associated with reader false-positive results. Nondismissable CAD soft-tissue polyp candidates larger than 6 mm are another common cause of reader false-positive results that may lead to further evaluation with follow-up CT colonography or optical colonoscopy. Strategies for correctly evaluating CAD polyp candidates are important to avoid pitfalls from common sources of CAD false-positive results. ©RSNA, 2014 PMID:25384290
Andújar-Montoya, María Dolores; Marcos-Jorquera, Diego; García-Botella, Francisco Manuel; Gilart-Iglesias, Virgilio
2017-07-22
The main causes of building defects are errors in the design and the construction phases. These causes related to construction are mainly due to the general lack of control of construction work and represent approximately 75% of the anomalies. In particular, one of the main causes of such anomalies, which end in building defects, is the lack of control over the physical variables of the work environment during the execution of tasks. Therefore, the high percentage of defects detected in buildings that have the root cause in the construction phase could be avoidable with a more accurate and efficient control of the process. The present work proposes a novel integration model based on information and communications technologies for the automation of both construction work and its management at the execution phase, specifically focused on the flat roof construction process. Roofs represent the second area where more defects are claimed. The proposed model is based on a Web system, supported by a service oriented architecture, for the integral management of tasks through the Last Planner System methodology, but incorporating the management of task restrictions from the physical environment variables by designing specific sensing systems. Likewise, all workers are integrated into the management process by Internet-of-Things solutions that guide them throughout the execution process in a non-intrusive and transparent way.
Experimental magic state distillation for fault-tolerant quantum computing.
Souza, Alexandre M; Zhang, Jingfu; Ryan, Colm A; Laflamme, Raymond
2011-01-25
Any physical quantum device for quantum information processing (QIP) is subject to errors in implementation. In order to be reliable and efficient, quantum computers will need error-correcting or error-avoiding methods. Fault-tolerance achieved through quantum error correction will be an integral part of quantum computers. Of the many methods that have been discovered to implement it, a highly successful approach has been to use transversal gates and specific initial states. A critical element for its implementation is the availability of high-fidelity initial states, such as |0〉 and the 'magic state'. Here, we report an experiment, performed in a nuclear magnetic resonance (NMR) quantum processor, showing sufficient quantum control to improve the fidelity of imperfect initial magic states by distilling five of them into one with higher fidelity.
NASA Astrophysics Data System (ADS)
Shi, Zhaoyao; Song, Huixu; Chen, Hongfang; Sun, Yanqiang
2018-02-01
This paper presents a novel experimental approach for confirming that the spherical mirror of a laser tracking system can reduce the influence of rotation errors of the gimbal mount axes on the measurement accuracy. By simplifying the optical system model of a spherical-mirror-based laser tracking system, we can easily extract the laser ranging measurement error caused by rotation errors of the gimbal mount axes from the positions of the spherical mirror, biconvex lens, cat's eye reflector, and measuring beam. The motions of the polarization beam splitter and biconvex lens along the optical axis and perpendicular to it are driven by error motions of the gimbal mount axes. In order to simplify the experimental process, the motion of the biconvex lens is substituted by the motion of the spherical mirror according to the principle of relative motion. The laser ranging measurement error caused by the rotation errors of the gimbal mount axes is recorded in the readings of the laser interferometer. The experimental results showed that the laser ranging measurement error caused by rotation errors was less than 0.1 μm if the radial and axial error motions were within ±10 μm. The experimental method simplified the experimental procedure, and the spherical mirror reduced the influence of rotation errors of the gimbal mount axes on the measurement accuracy of the laser tracking system.
Solar Tracking Error Analysis of Fresnel Reflector
Zheng, Jiantao; Yan, Junjie; Pei, Jie; Liu, Guanjie
2014-01-01
Based on the rotational structure of the Fresnel reflector, the rotation angle of the mirror was deduced under the eccentric condition. By analyzing the influence of the main factors on the sun-tracking rotation angle error, the pattern and extent of their influence were revealed. It is concluded that the tracking error caused by the difference between the rotation axis and the true north meridian is, under certain conditions, maximal at noon and decreases gradually through the morning and afternoon. The tracking error caused by other deviations, such as rotation eccentricity, latitude, and solar altitude, is positive in the morning, negative in the afternoon, and zero at a certain moment around noon. PMID:24895664
Orbital-free bond breaking via machine learning
NASA Astrophysics Data System (ADS)
Snyder, John C.; Rupp, Matthias; Hansen, Katja; Blooston, Leo; Müller, Klaus-Robert; Burke, Kieron
2013-12-01
Using a one-dimensional model, we explore the ability of machine learning to approximate the non-interacting kinetic energy density functional of diatomics. This nonlinear interpolation between Kohn-Sham reference calculations can (i) accurately dissociate a diatomic, (ii) be systematically improved with increased reference data and (iii) generate accurate self-consistent densities via a projection method that avoids directions with no data. With relatively few densities, the error due to the interpolation is smaller than typical errors in standard exchange-correlation functionals.
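The "nonlinear interpolation between reference calculations" described above is, mechanically, kernel regression. A minimal sketch with scikit-learn, in which a toy one-dimensional energy curve stands in for the kinetic-energy functional; all numbers are assumptions:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Kernel ridge regression with a Gaussian (RBF) kernel interpolates a scalar
# quantity between a handful of reference points.

bond = np.linspace(0.5, 5.0, 12).reshape(-1, 1)              # reference geometries
energy = 1.0 / bond.ravel() - np.exp(-bond.ravel())          # toy energy curve

model = KernelRidge(kernel="rbf", gamma=2.0, alpha=1e-6)
model.fit(bond, energy)

test = np.linspace(0.6, 4.8, 200).reshape(-1, 1)
pred = model.predict(test)
true = 1.0 / test.ravel() - np.exp(-test.ravel())
print(f"max interpolation error: {np.max(np.abs(pred - true)):.4f}")
```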
Data error and highly parameterized groundwater models
Hill, M.C.
2008-01-01
Strengths and weaknesses of highly parameterized models, in which the number of parameters exceeds the number of observations, are demonstrated using a synthetic test case. Results suggest that the approach can yield close matches to observations but also serious errors in system representation. It is proposed that avoiding the difficulties of highly parameterized models requires close evaluation of: (1) model fit, (2) performance of the regression, and (3) estimated parameter distributions. Comparisons to hydrogeologic information are expected to be critical to obtaining credible models. Copyright © 2008 IAHS Press.
Active listening: The key of successful communication in hospital managers
Jahromi, Vahid Kohpeima; Tabatabaee, Seyed Saeed; Abdar, Zahra Esmaeili; Rajabi, Mahboobeh
2016-01-01
Introduction One of the important causes of medical errors and unintentional harm to patients is ineffective communication. An important part of this skill, which is often forgotten, is listening. The objective of this study was to determine whether managers in hospitals listen actively. Methods This study was conducted between May and June 2014 among three levels of managers at teaching hospitals in Kerman, Iran. Active listening skill among hospital managers was measured by the self-made Active Listening Skill Scale (ALSS), which consists of the key elements of active listening and has five subscales, i.e., Avoiding Interruption, Maintaining Interest, Postponing Evaluation, Organizing Information, and Showing Interest. The data were analyzed by IBM-SPSS software, version 20, and the Pearson product-moment correlation coefficient, the chi-squared test, and multiple linear regressions. Results The mean score of active listening in hospital managers was 2.32 out of 3. The highest score (2.27) was obtained by the first-level managers, and the top managers got the lowest score (2.16). Hospital managers were best in showing interest and worst in avoiding interruptions. The area of employment was a significant predictor of avoiding interruption, and the managers' gender was a strong predictor of skill in maintaining interest (p < 0.05). The type of management and education can predict postponing evaluation, and the length of employment can predict showing interest (p < 0.05). Conclusion There is a necessity for the development of strategies to create more awareness among hospital managers concerning their active listening skills. PMID:27123221
Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping
NASA Astrophysics Data System (ADS)
Piedrafita, Álvaro; Renes, Joseph M.
2017-12-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
Ravald, L; Fornstedt, T
2001-01-26
The bi-Langmuir equation has recently been proven essential for describing chiral chromatographic surfaces, and we therefore investigated the accuracy of the elution by characteristic points (ECP) method for estimating bi-Langmuir isotherm parameters. The ECP calculations were done on elution profiles generated by the equilibrium-dispersive model of chromatography for five different sets of bi-Langmuir parameters. The ECP method generates two different errors: (i) the error of the ECP-calculated isotherm and (ii) the model error of the fit to the ECP isotherm. Both errors decreased with increasing column efficiency. Moreover, the model error was strongly affected by the weight of the bi-Langmuir function fitted. For some bi-Langmuir compositions the error of the ECP-calculated isotherm is too large even at high column efficiencies. Guidelines are given on surface types to be avoided and on the column efficiencies and loading factors required for adequate parameter estimation with ECP.
Hessian matrix approach for determining error field sensitivity to coil deviations.
Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; ...
2018-03-15
The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code [Zhu et al., Nucl. Fusion 58(1):016008 (2018)] is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
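The eigenvalue step is simple to state concretely. A minimal sketch in which a random symmetric matrix stands in for the FOCUS-computed Hessian:

```python
import numpy as np

# Eigenvalues of the Hessian of the field-error cost rank coil-deviation
# directions by how strongly they degrade the field; the dominant eigenvector
# is the misalignment to guard against most carefully.

rng = np.random.default_rng(3)
a = rng.normal(size=(8, 8))
hessian = a + a.T                             # symmetric stand-in Hessian

eigvals, eigvecs = np.linalg.eigh(hessian)    # eigenvalues in ascending order
worst = eigvecs[:, np.argmax(np.abs(eigvals))]

print("most sensitive deviation direction:", np.round(worst, 3))
print("sensitivity (largest |eigenvalue|):", np.max(np.abs(eigvals)))
```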
Feature Migration in Time: Reflection of Selective Attention on Speech Errors
Nozari, Nazbanou; Dell, Gary S.
2012-01-01
This paper describes an initial study of the effect of focused attention on phonological speech errors. In three experiments, participants recited four-word tongue-twisters, and focused attention on one (or none) of the words. The attended word was singled out differently in each experiment; participants were under instructions to either avoid errors on the attended word, to stress it, or to say it silently. The experiments showed that all methods of attending to a word decreased errors on that word, while increasing errors on the surrounding words. However, this error increase did not result from a relative increase in phonemic migrations originating from the attended word. This pattern is inconsistent with conceptualizing attention either as higher activation of the attended word or greater inhibition of the unattended words throughout the production of the sequence. Instead, it is consistent with a model which presumes that attention exerts its effect at the time of production of the attended word, without lingering effects on the past or the future. PMID:22268910
The Comparative Study of Metacognition: Sharper Paradigms, Safer Inferences
Smith, J. David; Beran, Michael J.; Couchman, Justin J.; Coutinho, Mariana V. C.
2015-01-01
Results that point to animals’ metacognitive capacity bear a heavy burden given the potential for competing behavioral descriptions. This article uses formal models to evaluate the force of these descriptions. One example is that many existing studies have directly rewarded so-called “uncertainty” responses. Modeling confirms that this practice is an interpretative danger because it supports associative processes and encourages simpler interpretations. Another example is that existing studies raise the concern that animals avoid difficult stimuli not because of uncertainty monitored but because of aversion given error-causing or reinforcement-lean stimuli. Modeling also justifies this concern and shows that this problem is not addressed by the common practice of comparing performance on Chosen and Forced trials. The models and related discussion have utility for metacognition researchers and theorists broadly because they specify the experimental operations that will best indicate a metacognitive capacity in humans or animals by eliminating alternative behavioral accounts. PMID:18792496
Nonlinear filtering properties of detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Tsujimoto, Yutaka
2016-11-01
Detrended fluctuation analysis (DFA) has been widely used for quantifying long-range correlation and fractal scaling behavior. In DFA, to avoid spurious detection of scaling behavior caused by a nonstationary trend embedded in the analyzed time series, a detrending procedure using piecewise least-squares fitting has been applied. However, it has been pointed out that the nonlinear filtering properties involved with detrending may induce instabilities in the scaling exponent estimation. To understand this issue, we investigate the adverse effects of the DFA detrending procedure on the statistical estimation. We show that the detrending procedure using piecewise least-squares fitting results in the nonuniformly weighted estimation of the root-mean-square deviation and that this property could induce an increase in the estimation error. In addition, for comparison purposes, we investigate the performance of a centered detrending moving average analysis with a linear detrending filter and sliding window DFA and show that these methods have better performance than the standard DFA.
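The detrending procedure under discussion is short enough to show in full. A minimal sketch of standard first-order DFA (the generic textbook algorithm, not the paper's modified estimators; the white-noise input is synthetic):

```python
import numpy as np

# DFA-1: integrate the series, split into windows, detrend each window with a
# least-squares line, and read the scaling exponent alpha from the slope of
# log F(n) versus log n.

def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    fluct = []
    for n in scales:
        m = len(y) // n
        segments = y[: m * n].reshape(m, n)
        t = np.arange(n)
        sq = [np.mean((s - np.polyval(np.polyfit(t, s, 1), t)) ** 2)
              for s in segments]                  # piecewise linear detrending
        fluct.append(np.sqrt(np.mean(sq)))
    return np.array(fluct)

x = np.random.default_rng(4).normal(size=4096)    # white noise -> alpha ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print(f"estimated scaling exponent alpha = {alpha:.2f}")
```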
Xuefei, D; Qin, H; Xiaodi, G; Zhen, G; Wei, L; Xuexia, H; Jiazhen, G; Xiuping, F; Meimei, T; Jingshan, Z; Yunru, L; Xiaoling, F; Kanglin, W; Xingwang, L
2013-11-01
Lyme disease and rickettsioses are two common diseases in China. However, the concomitant occurrence of both diseases in a single individual has been reported infrequently in the literature. We report three related female patients admitted to Beijing Ditan Hospital from October to December 2010. They had similar epidemiological histories. Initially, each received only a single diagnosis, but after specific screenings the final diagnoses were made. Because arthropods can harbour more than one disease-causing agent, patients can be infected with more than one pathogen at the same time, so the possibility of co-infection could be higher than previously thought. These observations suggest that clinicians should undertake complete screening for arthropod-related infectious diseases so as to make an accurate diagnosis and avoid diagnostic errors. © 2012 Blackwell Verlag GmbH.
Kumar, Sameer; Aldrich, Krista
2010-12-01
An EMR system implementation would significantly reduce clinician workload and medical errors while saving the US healthcare system major expense. Yet, compared to other developed nations, the US lags behind. This article examines EMR system efforts, benefits, and barriers, as well as steps needed to move the US closer to a nationwide EMR system. The analysis includes a blueprint for implementation of EMR, industry comparisons to highlight the differences between successful and unsuccessful EMR ventures, references to cost and benefit information, and identification of root causes. 'Poka-yokes' (avoid (yokeru) mistakes (poka)) are inserted to provide insight into how to systematically overcome challenges. Implementation will require upfront costs, and issues such as patient privacy must be addressed early in the development process. Government structure, incentives and mandates are required for a nationwide EMR system in the US.
Sang, Hongqiang; Yang, Chenghao; Liu, Fen; Yun, Jintian; Jin, Guoguang; Chen, Fa
2016-12-01
Hand physiological tremor of surgeons can cause vibration at the surgical instrument tip, which may make it difficult for the surgeon to perform fine manipulations of tissue, needles, and sutures. A zero phase adaptive fuzzy Kalman filter (ZPAFKF) is proposed to suppress hand tremor and vibration of a robotic surgical system. The involuntary motion can be reduced by adding a compensating signal that has the same magnitude and frequency but opposite phase with the tremor signal. Simulations and experiments using different filters were performed. Results show that the proposed filter can avoid the loss of useful motion information and time delay, and better suppress minor and varying tremor. The ZPAFKF can provide less error, preferred accuracy, better tremor estimation, and more desirable compensation performance, to suppress hand tremor and decrease vibration at the surgical instrument tip. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
A fast determination method for transverse relaxation of spin-exchange-relaxation-free magnetometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Jixi, E-mail: lujixi@buaa.edu.cn; Qian, Zheng; Fang, Jiancheng
2015-04-15
We propose a fast and accurate determination method for transverse relaxation of the spin-exchange-relaxation-free (SERF) magnetometer. This method is based on the measurement of the magnetic resonance linewidth via a chirped magnetic field excitation and amplitude spectrum analysis. Compared with frequency sweeping via separate sinusoidal excitations, our method can determine the linewidth within only a few seconds while obtaining good frequency resolution. Therefore, it can avoid drift error in long-term measurement and improve the accuracy of the determination. As the magnetic resonance frequency of the SERF magnetometer is very low, we include the effect of the negative resonance frequency caused by the chirp and achieve a coefficient of determination of the fitting results better than 0.998 with 95% confidence bounds to the theoretical equation. The experimental results are in good agreement with our theoretical analysis.
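The quantity the method extracts, a resonance linewidth read from a spectrum, can be illustrated without any SERF physics. A minimal sketch in which a decaying oscillation stands in for the magnetometer response; the chirped drive itself and the negative-frequency correction are omitted:

```python
import numpy as np

# A signal decaying with transverse relaxation time T2 has a Lorentzian power
# spectrum of full width 1/(pi*T2); we recover that width from the FFT.

fs, T = 1000.0, 20.0                       # sampling rate (Hz), record length (s)
t = np.arange(0, T, 1 / fs)
f0, t2 = 25.0, 0.5                         # resonance frequency (Hz), T2 (s)
signal = np.exp(-t / t2) * np.cos(2 * np.pi * f0 * t)

power = np.abs(np.fft.rfft(signal)) ** 2
freq = np.fft.rfftfreq(len(t), 1 / fs)

peak = np.argmax(power)
above = np.where(power >= power[peak] / 2)[0]     # half-maximum crossings
fwhm = freq[above[-1]] - freq[above[0]]

print(f"measured FWHM = {fwhm:.3f} Hz, expected = {1 / (np.pi * t2):.3f} Hz")
```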
NASA Astrophysics Data System (ADS)
Wang, Hongxiang; Wang, Qi; Bai, Lin; Ji, Yuefeng
2018-01-01
A scheme is proposed to realize the all-optical phase regeneration of four-channel quadrature phase shift keying (QPSK) signals based on phase-sensitive amplification. By utilizing a conjugate pump and a common pump in a highly nonlinear optical fiber, a degenerate four-wave mixing process is observed, and the QPSK signals are regenerated. The number of waves is reduced to decrease the cross talk caused by undesired nonlinear interactions during the coherent superposition process. In addition, to avoid the effect of overlapping frequencies, the frequency spans between pumps and signals are set to be non-integral multiples of each other. The optical signal-to-noise ratio (OSNR) improvement is validated by bit error rate (BER) measurements. Compared with single-channel regeneration, multichannel regeneration brings a 0.4-dB OSNR penalty at a BER of 10-3, which shows that the cross talk in the regeneration process is negligible.
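The rationale for non-integral frequency spans can be checked with a few lines of arithmetic: degenerate four-wave mixing between waves at f1 and f2 creates products at 2f1 - f2, and integer-multiple spacings make those products land exactly on other waves. A minimal sketch; the grid frequencies below are illustrative, not the paper's experimental values:

```python
import itertools

# Count degenerate four-wave-mixing products (2*f1 - f2) that collide with
# the chosen wave frequencies, for a uniform and a non-integral grid.

def fwm_products(freqs):
    return {round(2 * f1 - f2, 3) for f1, f2 in itertools.permutations(freqs, 2)}

uniform = [193.0, 193.1, 193.2, 193.3]              # THz, 100 GHz spacing
nonintegral = [193.0, 193.137, 193.251, 193.346]    # non-integral spacings

for name, grid in [("uniform", uniform), ("non-integral", nonintegral)]:
    hits = fwm_products(grid) & set(grid)
    print(f"{name} grid: {len(hits)} FWM products fall on existing waves")
```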
Photoproduction of dileptons and photons in p -p collisions at the Large Hadron Collider energies
NASA Astrophysics Data System (ADS)
Ma, Zhi-Lei; Zhu, Jia-Qing
2018-03-01
The production of large pT dileptons and photons originating from photoproduction processes in p-p collisions at Large Hadron Collider energies is calculated. The comparisons between the exact treatment results and those of the equivalent photon approximation approach are expressed as the Q2 (the virtuality of the photon) and pT distributions. The method developed by Martin and Ryskin is used to avoid double counting when the coherent and incoherent contributions are considered simultaneously. The numerical results indicate that the equivalent photon approximation is only effective in the small Q2 region and can be used for coherent photoproduction processes with a proper choice of Qmax2 (the choices Qmax2 ~ ŝ or ∞ will cause obvious errors), but cannot be used for incoherent photoproduction processes. The exact treatment is needed to deal accurately with the photoproduction of large pT dileptons and photons.
NASA Astrophysics Data System (ADS)
He, Xiaojun; Ma, Haotong; Luo, Chuanxin
2016-10-01
The optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system, the difficulty of which lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent algorithm (SPGD) to correct the co-phase error. Compared with current methods, the SPGD method can avoid explicit detection of the co-phase error. This paper analyzes the influence of piston error and tilt error on image quality based on a double-aperture imaging system, introduces the basic principle of the SPGD algorithm, and discusses the influence of the SPGD algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced. An adaptive gain coefficient can solve this problem appropriately. These results can provide a theoretical reference for the co-phase error correction of multi-aperture imaging systems.
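SPGD itself fits in a dozen lines, which makes the role of the gain coefficient and disturbance amplitude concrete. A minimal sketch in which a quadratic stand-in replaces the measured image-quality metric; the target values are arbitrary:

```python
import numpy as np

# Perturb all control parameters in parallel, measure the two-sided change in
# a scalar quality metric, and step along the estimated gradient. `gain` and
# `amplitude` are the two key SPGD parameters discussed in the paper.

rng = np.random.default_rng(5)
target = np.array([0.8, -0.3, 0.5])          # unknown optimal piston/tilt terms

def metric(u):                               # stand-in for measured image quality
    return -np.sum((u - target) ** 2)

u = np.zeros(3)                              # phase commands to the correctors
gain, amplitude = 0.5, 0.05

for _ in range(2000):
    delta = amplitude * rng.choice([-1.0, 1.0], size=u.size)  # parallel perturbation
    dj = metric(u + delta) - metric(u - delta)                # metric change
    u += gain * dj * delta                                    # gradient-estimate step

print(np.round(u, 3))                        # converges near `target`
```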
Output Error Analysis of Planar 2-DOF Five-bar Mechanism
NASA Astrophysics Data System (ADS)
Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang
2018-03-01
Aiming at the mechanism error caused by joint clearance in the planar 2-DOF five-bar mechanism, the method of treating the joint clearance of a kinematic pair as an equivalent virtual link is applied. The structural error model of revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of the clearance of the moving pair on the output error of the mechanism is studied, and the calculation method and basis of the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the joint-space error rotation space, which provides a new way to analyze planar parallel mechanism errors caused by joint clearance.
Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students
NASA Astrophysics Data System (ADS)
Priyani, H. A.; Ekawati, R.
2018-01-01
Indonesian students’ competence in solving mathematical problems is still considered weak, as indicated by the results of international assessments such as TIMSS. This might be caused by the various types of errors students make. Hence, this study aimed at identifying students’ errors in solving TIMSS mathematical problems on the topic of numbers, which is considered a fundamental concept in mathematics. This study applied descriptive qualitative analysis. The subjects were the three students who made the most errors on the test indicators, taken from 34 8th-grade students. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving Applying-level problems, the type of error that students made was operational errors. In addition, for Reasoning-level problems, three types of errors were made: conceptual errors, operational errors and principal errors. Meanwhile, analysis of the causes of students’ errors showed that students did not comprehend the mathematical problems given.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei
2018-01-01
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machines (SVMs) are often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model’s performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to raise its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM’s parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models’ performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942
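The PSO-SVM coupling amounts to using a swarm to search the SVM hyperparameter space against a cross-validation error. A minimal sketch with scikit-learn and a plain PSO; it omits the natural-selection and simulated-annealing refinements that distinguish NAPSO, and the error series is synthetic:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# A small swarm searches (log10 C, log10 gamma) for an SVR fitted to a
# stand-in sensor error series; fitness is 3-fold cross-validated MSE.

rng = np.random.default_rng(6)
X = np.linspace(0, 6, 200).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)

def fitness(p):
    svr = SVR(C=10 ** p[0], gamma=10 ** p[1])
    return -cross_val_score(svr, X, y, cv=3,
                            scoring="neg_mean_squared_error").mean()

pos = rng.uniform(-2, 2, size=(10, 2))           # 10 particles in log10 space
vel = np.zeros_like(pos)
best_p, best_f = pos.copy(), np.array([fitness(p) for p in pos])
g = best_p[best_f.argmin()].copy()               # global best

for _ in range(15):
    r1, r2 = rng.random((2, 10, 1))
    vel = 0.7 * vel + 1.5 * r1 * (best_p - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, -2, 2)
    f = np.array([fitness(p) for p in pos])
    improved = f < best_f
    best_p[improved], best_f[improved] = pos[improved], f[improved]
    g = best_p[best_f.argmin()].copy()

print(f"best C = {10 ** g[0]:.3g}, gamma = {10 ** g[1]:.3g}, "
      f"CV MSE = {best_f.min():.4f}")
```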
Keers, Richard N; Williams, Steven D; Cooke, Jonathan; Ashcroft, Darren M
2013-11-01
Underlying systems factors have been seen to be crucial contributors to the occurrence of medication errors. By understanding the causes of these errors, the most appropriate interventions can be designed and implemented to minimise their occurrence. This study aimed to systematically review and appraise empirical evidence relating to the causes of medication administration errors (MAEs) in hospital settings. Nine electronic databases (MEDLINE, EMBASE, International Pharmaceutical Abstracts, ASSIA, PsycINFO, British Nursing Index, CINAHL, Health Management Information Consortium and Social Science Citations Index) were searched between 1985 and May 2013. Inclusion and exclusion criteria were applied to identify eligible publications through title analysis followed by abstract and then full text examination. English language publications reporting empirical data on causes of MAEs were included. Reference lists of included articles and relevant review papers were hand searched for additional studies. Studies were excluded if they did not report data on specific MAEs, used accounts from individuals not directly involved in the MAE concerned or were presented as conference abstracts with insufficient detail. A total of 54 unique studies were included. Causes of MAEs were categorised according to Reason's model of accident causation. Studies were assessed to determine relevance to the research question and how likely the results were to reflect the potential underlying causes of MAEs based on the method(s) used. Slips and lapses were the most commonly reported unsafe acts, followed by knowledge-based mistakes and deliberate violations. Error-provoking conditions influencing administration errors included inadequate written communication (prescriptions, documentation, transcription), problems with medicines supply and storage (pharmacy dispensing errors and ward stock management), high perceived workload, problems with ward-based equipment (access, functionality), patient factors (availability, acuity), staff health status (fatigue, stress) and interruptions/distractions during drug administration. Few studies sought to determine the causes of intravenous MAEs. A number of latent pathway conditions were less well explored, including local working culture and high-level managerial decisions. Causes were often described superficially; this may be related to the use of quantitative surveys and observation methods in many studies, limited use of established error causation frameworks to analyse data and a predominant focus on issues other than the causes of MAEs among studies. As only English language publications were included, some relevant studies may have been missed. Limited evidence from studies included in this systematic review suggests that MAEs are influenced by multiple systems factors, but if and how these arise and interconnect to lead to errors remains to be fully determined. Further research with a theoretical focus is needed to investigate the MAE causation pathway, with an emphasis on ensuring interventions designed to minimise MAEs target recognised underlying causes of errors to maximise their impact.
Knowledge of healthcare professionals about medication errors in hospitals
Abdel-Latif, Mohamed M. M.
2016-01-01
Context: Medication errors are the most common type of medical error in hospitals and a leading cause of morbidity and mortality among patients. Aims: The aim of the present study was to assess the knowledge of healthcare professionals about medication errors in hospitals. Settings and Design: A self-administered questionnaire was distributed to randomly selected healthcare professionals in eight hospitals in Madinah, Saudi Arabia. Subjects and Methods: An 18-item survey was designed, comprising questions on demographic data, knowledge of medication errors, availability of reporting systems in hospitals, attitudes toward error reporting, and causes of medication errors. Statistical Analysis Used: Data were analyzed with Statistical Package for the Social Sciences software Version 17. Results: A total of 323 healthcare professionals completed the questionnaire (a 64.6% response rate): 138 (42.72%) physicians, 34 (10.53%) pharmacists, and 151 (46.75%) nurses. A majority of the participants had good knowledge of the medication error concept and its dangers to patients. Only 68.7% of them were aware of reporting systems in hospitals. Healthcare professionals revealed that there was no clear mechanism available for reporting errors in most hospitals. Prescribing (46.5%) and administration (29%) errors were the main types of errors. The medications most frequently involved in errors were antihypertensives, antidiabetics, antibiotics, digoxin, and insulin. Conclusions: This study revealed differences in awareness among healthcare professionals toward medication errors in hospitals. The poor knowledge about medication errors emphasizes the urgent need to adopt appropriate measures to raise awareness about medication errors in Saudi hospitals. PMID:27330261
Acheampong, Franklin; Tetteh, Ashalley Raymond; Anto, Berko Panyin
2016-12-01
This study determined the incidence, types, clinical significance, and potential causes of medication administration errors (MAEs) at the emergency department (ED) of a tertiary health care facility in Ghana. The study used a cross-sectional non-participant observational technique. Study participants (nurses) were observed preparing and administering medication at the ED of a 2000-bed tertiary care hospital in Accra, Ghana. The observations were then compared with patients' medication charts, and identified errors were clarified with staff to establish possible causes. Of the 1332 observations made, involving 338 patients and 49 nurses, 362 (27.2%) contained errors. Excluding "lack of drug availability", the error rate fell to 12.8%; excluding wrong-time errors, it was 22.8%. The two most frequent error types were omission (n = 281, 77.6%) and wrong-time (n = 58, 16%) errors. Omission errors were mainly due to unavailability of medicine (n = 177, 48.9%). Although only one of the errors was potentially fatal, 26.7% were definitely clinically severe. The common themes dominating the probable causes of MAEs were drug unavailability, staff factors, patient factors, prescription problems, and communication problems. This study corroborates similar studies in different settings showing that MAEs occur frequently in hospital EDs. Most of the errors identified were not potentially fatal; however, preventive strategies are needed to make life-saving processes such as drug administration in such specialized units error-free.
46 CFR 520.14 - Special permission.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the Commission, in its discretion and for good cause shown, to permit increases or decreases in rates... its discretion and for good cause shown, permit departures from the requirements of this part. (b) Clerical errors. Typographical and/or clerical errors constitute good cause for the exercise of special...
46 CFR 520.14 - Special permission.
Code of Federal Regulations, 2011 CFR
2011-10-01
... the Commission, in its discretion and for good cause shown, to permit increases or decreases in rates... its discretion and for good cause shown, permit departures from the requirements of this part. (b) Clerical errors. Typographical and/or clerical errors constitute good cause for the exercise of special...
Probing the cosmic causes of errors in supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Cosmic rays from outer space are causing errors in supercomputers. Neutrons that pass through a CPU may cause binary data to flip, leading to incorrect calculations. Los Alamos National Laboratory has developed detectors to determine how much data is being corrupted by these cosmic particles.
Toffolo, Gianna; Cobelli, Claudio
2016-01-01
Fasting hyperglycemia occurs when an excessive rate of endogenous glucose production (EGP) is not accompanied by an adequate compensatory increase in the rate of glucose disappearance (Rd). The situation following food ingestion is more complex as the amount of glucose that reaches the circulation for disposal is a function of the systemic rate of appearance of the ingested glucose (referred to as the rate of meal appearance [Rameal]), the pattern and degree of suppression of EGP, and the rapidity of stimulation of the Rd. In an effort to measure these processes, Steele et al. proposed what has come to be referred to as the dual-tracer method in which the ingested glucose is labeled with one tracer while a second tracer is infused intravenously at a constant rate. Unfortunately, subsequent studies have shown that although this approach is technically simple, the marked changes in plasma specific activity or the tracer-to-tracee ratio, if stable tracers are used, introduce a substantial error in the calculation of Rameal, EGP, and Rd, thereby leading to incorrect and at times misleading results. This Perspective discusses the causes of these so-called “nonsteady-state” errors and how they can be avoided by the use of the triple-tracer approach. PMID:27208180
Estimating surface reflectance from Himawari-8/AHI reflectance channels Using 6SV
NASA Astrophysics Data System (ADS)
Lee, Kyeong-sang; Choi, Sungwon; Seo, Minji; Seong, Noh-hun; Han, Kyung-soo
2017-04-01
Top-of-atmosphere (TOA) reflectance observed by satellite is modified by atmospheric effects such as absorption and scattering by molecules and gases. Removing this atmospheric attenuation from the TOA reflectance is essential: surface reflectance with atmospheric effects compensated is an important input for land products such as the Normalized Difference Vegetation Index (NDVI), Land Surface Albedo (LSA) and others. In this study, we used the Second Simulation of a Satellite Signal in the Solar Spectrum Vector (6SV) Radiative Transfer Model (RTM) for atmospheric correction and estimated surface reflectance from the Himawari-8/Advanced Himawari Imager (AHI) reflectance channels. 6SV achieves high accuracy by performing the atmospheric correction in 2.5 nm steps across each satellite channel, but it is too slow for operational use. We therefore used a look-up table (LUT) approach to reduce the computation time and avoid the intensive calculation required for retrieving surface reflectance. The estimated surface reflectance data were compared with PROBA-V S1 data to evaluate their accuracy. The Root Mean Square Error (RMSE) and bias were about 0.05 and -0.02, respectively. This error is considered to be due to differences in the angular components and the Spectral Response Function (SRF) of each channel.
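As a rough illustration of the LUT approach described above, 6SV correction coefficients can be precomputed on a grid of geometries and interpolated at run time. The (xa, xb, xc) coefficient form follows the standard 6S output convention, but the grid, placeholder values, and single solar-zenith axis below are simplifying assumptions; an operational LUT would also span view zenith, relative azimuth, aerosol optical depth, and water vapour.

```python
# Sketch of LUT-based 6SV atmospheric correction. Coefficient values are
# made-up placeholders, not real 6SV output.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

sza_grid = np.array([0.0, 20.0, 40.0, 60.0])     # solar zenith angle (deg)
coeffs = np.array([[0.0026, 0.08, 0.11],         # (xa, xb, xc) per grid node
                   [0.0027, 0.09, 0.11],
                   [0.0030, 0.11, 0.12],
                   [0.0036, 0.15, 0.13]])

interp = RegularGridInterpolator((sza_grid,), coeffs)

def surface_reflectance(radiance, sza):
    xa, xb, xc = interp([sza])[0]    # interpolate coefficients at this geometry
    y = xa * radiance - xb           # 6S-convention correction
    return y / (1.0 + xc * y)

print(surface_reflectance(radiance=50.0, sza=35.0))
```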
Comparison of frequency-domain and time-domain rotorcraft vibration control methods
NASA Technical Reports Server (NTRS)
Gupta, N. K.
1984-01-01
Active control of rotor-induced vibration in rotorcraft has received significant attention recently. Two classes of techniques have been proposed. The more developed approach works with harmonic analysis of measured time histories and is called the frequency-domain approach. The more recent approach computes the control input directly using the measured time history data and is called the time-domain approach. The report summarizes the results of a theoretical investigation to compare the two approaches. Five specific areas were addressed: (1) techniques to derive models needed for control design (system identification methods), (2) robustness with respect to errors, (3) transient response, (4) susceptibility to noise, and (5) implementation difficulties. The system identification methods are more difficult for the time-domain models. The time-domain approach is more robust (e.g., has higher gain and phase margins) than the frequency-domain approach. It might thus be possible to avoid doing real-time system identification in the time-domain approach by storing models at a number of flight conditions. The most significant error source is the variation in open-loop vibrations caused by pilot inputs, maneuvers or gusts. The implementation requirements are similar except that the time-domain approach can be much simpler to implement if real-time system identification were not necessary.
Determination of the carmine content based on spectrum fluorescence spectral and PSO-SVM
NASA Astrophysics Data System (ADS)
Wang, Shu-tao; Peng, Tao; Cheng, Qi; Wang, Gui-chuan; Kong, De-ming; Wang, Yu-tian
2018-03-01
Carmine is a food pigment widely used in various food and beverage additives. Excessive consumption of synthetic pigments can seriously harm the body, and foods are generally associated with a variety of colors. Simulating the coexistence of several food pigments, we combined fluorescence spectroscopy with the PSO-SVM algorithm to establish a method for determining the carmine content in a mixed solution. Analysis of the PSO-SVM predictions gave an average carmine recovery rate of 100.84%, a root mean square error of prediction (RMSEP) of 1.03e-04, and a correlation coefficient of 0.999 between the model output and the true values. Compared with the predictions of a back-propagation (BP) neural network, the correlation coefficient of PSO-SVM was 2.7% higher, the average recovery rate 0.6% better, and the root mean square error nearly one order of magnitude lower. These results show that combining the fluorescence spectrum technique with PSO-SVM can effectively avoid the interference caused by coexisting pigments and accurately determine the carmine content in a mixed solution, with an effect better than that of BP.
Quality based approach for adaptive face recognition
NASA Astrophysics Data System (ADS)
Abboud, Ali J.; Sellahewa, Harin; Jassim, Sabah A.
2009-05-01
Recent advances in biometric technology have pushed towards more robust and reliable systems. We aim to build systems that have low recognition errors and are less affected by variation in recording conditions. Recognition errors are often attributed to the use of low-quality biometric samples. Hence, there is a need to develop new intelligent techniques and strategies to automatically measure/quantify the quality of biometric image samples and, if necessary, restore image quality according to the needs of the intended application. In this paper, we present no-reference image quality measures in the spatial domain that have an impact on face recognition. The first is called the symmetrical adaptive local quality index (SALQI) and the second is called middle halve (MH). An adaptive strategy has also been developed to select the best way to restore image quality, called symmetrical adaptive histogram equalization (SAHE). The main benefits of using quality measures for the adaptive strategy are: (1) avoidance of excessive, unnecessary enhancement procedures that may cause undesired artifacts, and (2) reduced computational complexity, which is essential for real-time applications. We test the success of the proposed measures and the adaptive approach on a wavelet-based face recognition system that uses the nearest-neighbor classifier. We demonstrate noticeable improvements in the performance of the adaptive face recognition system over the corresponding non-adaptive scheme.
Application of failure mode and effect analysis in an assisted reproduction technology laboratory.
Intra, Giulia; Alteri, Alessandra; Corti, Laura; Rabellotti, Elisa; Papaleo, Enrico; Restelli, Liliana; Biondo, Stefania; Garancini, Maria Paola; Candiani, Massimo; Viganò, Paola
2016-08-01
Assisted reproduction technology laboratories have a very high degree of complexity. Mismatches of gametes or embryos can occur, with catastrophic consequences for patients. To minimize the risk of error, a multi-institutional working group applied failure mode and effects analysis (FMEA) to each critical activity/step as a method of risk assessment. This analysis led to the identification of the potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system. In total, 11 individual steps and 68 different potential failure modes were identified. The highest-ranked failure modes, with an RPN score of 25, encompassed 17 failures and pertained to "patient mismatch" and "biological sample mismatch". The maximum reduction in risk, with the RPN reduced from 25 to 5, was mostly related to the introduction of witnessing. The RPN of the critical failure modes in sample processing was improved by 50% by focusing on staff training. Three indicators of FMEA success, based on technical skill, competence and traceability, were evaluated after FMEA implementation. Witnessing by a second human operator should be introduced in the laboratory to avoid sample mix-ups. These findings confirm that FMEA can effectively reduce errors in assisted reproduction technology laboratories. Copyright © 2016 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
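For readers unfamiliar with the scoring, the conventional formula is RPN = severity x occurrence x detection. A minimal sketch follows; the 1-5 scales and the example failure modes are illustrative assumptions, not the study's actual ratings.

```python
# Minimal FMEA risk-priority-number scoring sketch (illustrative data).
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 5 (catastrophic)
    occurrence: int  # 1 (rare) .. 5 (frequent)
    detection: int   # 1 (certain to detect) .. 5 (undetectable)

    @property
    def rpn(self) -> int:
        # Classic FMEA formula: severity x occurrence x detection
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("patient mismatch at sample collection", 5, 2, 3),
    FailureMode("biological sample mismatch in lab", 5, 2, 2),
]
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{fm.name}: RPN={fm.rpn}")
```

Ranking failure modes by RPN is what lets countermeasures such as witnessing be targeted at the steps where they reduce risk the most.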
Wilharm, A.; Hurschler, Ch.; Dermitas, T.; Bohnsack, M.
2013-01-01
Pressure-sensitive K-Scan 4000 sensors (Tekscan, USA) provide new possibilities for the dynamic measurement of force and pressure in biomechanical investigations. We examined the sensors to determine, in particular, whether they are also suitable for reliable measurements of retropatellar forces and pressures. Insertion approaches were also investigated, and a lateral parapatellar arthrotomy supplemented by parapatellar sutures proved to be the most reliable method. Ten human cadaver knees were tested in a knee-simulating machine at torques of 30 and 40 Nm. Each test cycle involved a dynamic extension from 120° of flexion. All recorded parameters showed a decrease of 1-2% per measurement cycle. Although we covered the sensors with a Teflon film, this decrease, likely caused by shear force, was significant. We evaluated 12 cycles and observed a linear decrease in the parameters of up to 17.2% (coefficient of regression 0.69-0.99). In our opinion, the linear decrease can be considered a systematic error and can therefore be quantified and accounted for in subsequent experiments. This will ensure reliable retropatellar use of Tekscan sensors and distinguish the effects of knee joint surgery from sensor wear-related effects. PMID:24369018
Contagious error sources would need time travel to prevent quantum computation
NASA Astrophysics Data System (ADS)
Kalai, Gil; Kuperberg, Greg
2015-08-01
We consider an error model for quantum computing that consists of "contagious quantum germs" that can infect every output qubit when at least one input qubit is infected. Once a germ actively causes error, it continues to cause error indefinitely for every qubit it infects, with arbitrary quantum entanglement and correlation. Although this error model looks much worse than quasi-independent error, we show that it reduces to quasi-independent error with the technique of quantum teleportation. The construction, which was previously described by Knill, is that every quantum circuit can be converted to a mixed circuit with bounded quantum depth. We also consider the restriction of bounded quantum depth from the point of view of quantum complexity classes.
Bias in the Counseling Process: How to Recognize and Avoid It.
ERIC Educational Resources Information Center
Morrow, Kelly A.; Deidan, Cecilia T.
1992-01-01
Notes that counselors' vulnerability to inferential bias during counseling process may result in misdiagnosis and improper interventions. Discusses these inferential biases: availability and representativeness heuristics; fundamental attribution error; anchoring, prior knowledge, and labeling; confirmatory hypothesis testing; and reconstructive…
The Treatment of Capital Costs in Educational Projects
ERIC Educational Resources Information Center
Bezeau, Lawrence
1975-01-01
Failure to account for the cost and depreciation of capital leads to suboptimal investments in education, specifically to excessively capital intensive instructional technologies. This type of error, which is particularly serious when planning for developing countries, can be easily avoided. (Author)
Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh
2017-03-01
Patient safety is one of the main objectives of healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors increase patient mortality and create challenges such as prolonged inpatient stays and increased costs. Controlling medical errors is very important, because these errors, besides being costly, threaten patient safety. The aim was to evaluate the attitudes of nurses and midwives toward the causes and rates of medical error reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad Public Hospitals. Data were collected using the Goldstone 2001 revised questionnaire, and SPSS 11.5 software was used for analysis. Descriptive statistics (means, standard deviations and relative frequency distributions) were calculated and presented as tables and charts; the chi-square test was used for inferential analysis. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years and the lowest percentage (2.2%) were in the age range of 55-59 years. The highest average number of medical errors was found among employees with three to four years of work experience, and the lowest among those with one to two years of work experience. The highest average number of medical errors occurred during the evening shift, and the lowest during the night shift. Three main causes of medical errors were identified: illegible physician prescription orders, similarity of names between different drugs, and nurse fatigue. The most important causes of medical errors from the viewpoint of nurses and midwives were illegible physician's orders, drug name similarity with other drugs, nurse fatigue, and damaged drug labels or packaging, respectively. Head nurse feedback, peer feedback, and fear of punishment or job loss were considered reasons for under-reporting of medical errors. This research demonstrates the need for greater attention to be paid to the causes of medical errors.
Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris; Bjørn, Brian; Lilja, Beth; Mogensen, Torben
2011-03-01
Poor teamwork and communication between healthcare staff are correlated with patient safety incidents. However, the organisational factors responsible for these issues are unexplored. Root cause analyses (RCA) use human factors thinking to analyse the systems behind severe patient safety incidents. The objective of this study was to review RCA reports (RCARs) for characteristics of verbal communication errors between hospital staff from an organisational perspective. Two independent raters analysed 84 RCARs, conducted in six Danish hospitals between 2004 and 2006, for descriptions and characteristics of verbal communication errors such as handover errors and errors during teamwork. Raters found descriptions of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13 (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44-0.78. Unproceduralised communication and information exchange via telephone, related to transfers between units and consults from other specialties, were particularly vulnerable processes. With the risk of bias in mind, it is concluded that more than half of the RCARs described erroneous verbal communication between staff members as root causes of, or contributing factors to, severe patient safety incidents. The RCARs' rich descriptions of the incidents revealed the organisational factors and needs related to these errors.
Kedir, Jafer; Girma, Abonesh
2014-10-01
Refractive error is one of the major causes of blindness and visual impairment in children, but community-based studies are scarce, especially in rural parts of Ethiopia. This study therefore assessed the prevalence of refractive error and its magnitude as a cause of visual impairment among school-age children in a rural community. This community-based, cross-sectional descriptive study was conducted from March 1 to April 30, 2009 in rural villages of Goro district of Gurage Zone, located south-west of Addis Ababa, the capital of Ethiopia. A multistage cluster sampling method was used with simple random selection of representative villages in the district. Chi-square and t-tests were used in the data analysis. A total of 570 school-age children (ages 7-15) were evaluated, 54% boys and 46% girls. The prevalence of refractive error was 3.5% (myopia 2.6% and hyperopia 0.9%). Refractive error was the major cause of visual impairment, accounting for 54% of all causes in the study group. No child was found wearing corrective spectacles during the study period. Refractive error was the commonest cause of visual impairment in children of the district, but no measures had been taken to reduce the burden in the community. Large-scale community-level screening for refractive error should therefore be conducted and integrated with regular school eye screening programs. Effective strategies need to be devised to provide low-cost corrective spectacles in the rural community.
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth Sarkis
1987-01-01
The correspondence between robotic manipulators and single-gimbal control moment gyro (CMG) systems was exploited to aid in the understanding and design of single-gimbal CMG steering laws. A test for null motion near a singular CMG configuration was derived which is able to distinguish between escapable and unescapable singular states. Detailed analysis of the Jacobian matrix null-space was performed and the results were used to develop and test a variety of single-gimbal CMG steering laws. Computer simulations showed that all existing singularity avoidance methods are unable to avoid elliptic internal singularities. A new null motion algorithm using the Moore-Penrose pseudoinverse, however, was shown by simulation to avoid elliptic-type singularities under certain conditions. The SR-inverse, with appropriate null motion, was proposed as a general approach to singularity avoidance because of its ability to avoid singularities through limited introduction of torque error. Simulation results confirmed the superior performance of this method compared with the other available and proposed pseudoinverse-based steering laws.
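The SR-inverse steering law mentioned above is commonly written as gimbal rates = J^T (J J^T + lambda I)^-1 tau, where J is the CMG Jacobian, tau the commanded torque, and lambda a small regularization term that trades a bounded torque error for robustness near singularities. A minimal sketch follows; the 3x4 Jacobian and the null-motion vector d are illustrative placeholders.

```python
# Sketch of a singularity-robust (SR) inverse steering law with null motion.
import numpy as np

def sr_steering(J, tau, lam=0.01, d=None):
    # SR-inverse: J^T (J J^T + lam I)^-1 stays bounded near singularities
    m = J.shape[0]
    sr_inv = J.T @ np.linalg.inv(J @ J.T + lam * np.eye(m))
    rates = sr_inv @ tau                       # torque-producing gimbal rates
    if d is not None:
        # approximate null motion (exactly torque-free only as lam -> 0)
        rates += (np.eye(J.shape[1]) - sr_inv @ J) @ d
    return rates

# Example: placeholder 3x4 Jacobian for four single-gimbal CMGs
J = np.array([[1.0, 0.0, -1.0, 0.0],
              [0.0, 1.0, 0.0, -1.0],
              [0.5, 0.5, 0.5, 0.5]])
tau = np.array([0.1, -0.2, 0.05])
print(sr_steering(J, tau, d=np.ones(4)))
```

The null-motion term reorients the gimbals without (ideally) producing net torque, which is how such laws steer away from singular configurations.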
Kiymaz, Dilek; Koç, Zeliha
2018-03-01
To determine individual and professional factors affecting the tendency of emergency unit nurses to make medical errors and their attitudes towards these errors in Turkey. Compared with other units, the emergency unit is an environment where there is an increased tendency for making medical errors due to its intensive and rapid pace, noise and complex and dynamic structure. A descriptive cross-sectional study. The study was carried out from 25 July 2014-16 September 2015 with the participation of 284 nurses who volunteered to take part in the study. Data were gathered using the data collection survey for nurses, the Medical Error Tendency Scale and the Medical Error Attitude Scale. It was determined that 40.1% of the nurses previously witnessed medical errors, 19.4% made a medical error in the last year, 17.6% of medical errors were caused by medication errors where the wrong medication was administered in the wrong dose, and none of the nurses filled out a case report form about the medical errors they made. Regarding the factors that caused medical errors in the emergency unit, 91.2% of the nurses stated excessive workload as a cause; 85.1% stated an insufficient number of nurses; and 75.4% stated fatigue, exhaustion and burnout. The study showed that nurses who loved their job were satisfied with their unit and who always worked during day shifts had a lower medical error tendency. It is suggested to consider the following actions: increase awareness about medical errors, organise training to reduce errors in medication administration, develop procedures and protocols specific to the emergency unit health care and create an environment which is not punitive wherein nurses can safely report medical errors. © 2017 John Wiley & Sons Ltd.
Robot learning and error correction
NASA Technical Reports Server (NTRS)
Friedman, L.
1977-01-01
A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot into a pre-existing structure, whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process, and learning may be applied to avoiding the errors.
Voorhees, A Scott; Wang, Jiandong; Wang, Cuicui; Zhao, Bin; Wang, Shuxiao; Kan, Haidong
2014-07-01
In recent years, levels of particulate matter (PM) air pollution in China have been relatively high, exceeding China's Class II standards in many cities and impacting public health. This analysis takes Chinese health impact functions and underlying health incidence, applies 2010-2012 modeled and monitored PM air quality data, and estimates avoided cases of mortality and morbidity in Shanghai, assuming achievement of China's Class II air quality standards. In Shanghai, the estimated avoided all cause mortality due to PM10 ranged from 13 to 55 cases per day and from 300 to 800 cases per year. The estimated avoided impact on hospital admissions due to PM10 ranged from 230 cases to 580 cases per day and from 5400 to 7900 per year. The estimated avoided impact on all cause mortality due to PM2.5 ranged from 6 to 26 cases per day and from 39 to 1400 per year. The estimated impact on all cause mortality of a year exposure to an annual or monthly mean PM2.5 concentration ranged from 180 to 3500 per year. In Shanghai, the avoided cases of all cause mortality had an estimated monetary value ranging from 170 million yuan (1 US dollar=4.2 yuan Purchasing Power Parity) to 1200 million yuan. Avoided hospital admissions had an estimated value from 20 to 43 million yuan. Avoided emergency department visits had an estimated value from 5.6 million to 15 million yuan. Avoided outpatient visits had an estimated value from 21 million to 31 million yuan. In this analysis, available data were adequate to estimate avoided health impacts and assign monetary value. Sufficient supporting documentation was available to construct and format data sets for use in the United States Environmental Protection Agency's health and environmental assessment model, known as the Environmental Benefits Mapping and Analysis Program - Community Edition ("BenMAP-CE"). Published by Elsevier B.V.
Global magnitude of visual impairment caused by uncorrected refractive errors in 2004
Pascolini, Donatella; Mariotti, Silvio P; Pokharel, Gopal P
2008-01-01
Abstract Estimates of the prevalence of visual impairment caused by uncorrected refractive errors in 2004 have been determined at regional and global levels for people aged 5 years and over from recent published and unpublished surveys. The estimates were based on the prevalence of visual acuity of less than 6/18 in the better eye with the currently available refractive correction that could be improved to equal to or better than 6/18 by refraction or pinhole. A total of 153 million people (range of uncertainty: 123 million to 184 million) are estimated to be visually impaired from uncorrected refractive errors, of whom eight million are blind. This cause of visual impairment has been overlooked in previous estimates that were based on best-corrected vision. Combined with the 161 million people visually impaired estimated in 2002 according to best-corrected vision, 314 million people are visually impaired from all causes: uncorrected refractive errors become the main cause of low vision and the second cause of blindness. Uncorrected refractive errors can hamper performance at school, reduce employability and productivity, and generally impair quality of life. Yet the correction of refractive errors with appropriate spectacles is among the most cost-effective interventions in eye health care. The results presented in this paper help to unearth a formerly hidden problem of public health dimensions and promote policy development and implementation, programmatic decision-making and corrective interventions, as well as stimulate research. PMID:18235892
Avoidance of APOBEC3B-induced mutation by error-free lesion bypass
Hoopes, James I.; Hughes, Amber L.; Hobson, Lauren A.; Cortez, Luis M.; Brown, Alexander J.
2017-01-01
Abstract APOBEC cytidine deaminases mutate cancer genomes by converting cytidines into uridines within ssDNA during replication. Although uracil DNA glycosylases limit APOBEC-induced mutation, it is unknown if subsequent base excision repair (BER) steps function on replication-associated ssDNA. Hence, we measured APOBEC3B-induced CAN1 mutation frequencies in yeast deficient in BER endonucleases or DNA damage tolerance proteins. Strains lacking Apn1, Apn2, Ntg1, Ntg2 or Rev3 displayed wild-type frequencies of APOBEC3B-induced canavanine resistance (CanR). However, strains without error-free lesion bypass proteins Ubc13, Mms2 and Mph1 displayed respective 4.9-, 2.8- and 7.8-fold higher frequency of APOBEC3B-induced CanR. These results indicate that mutations resulting from APOBEC activity are avoided by deoxyuridine conversion to abasic sites ahead of nascent lagging strand DNA synthesis and subsequent bypass by error-free template switching. We found this mechanism also functions during telomere re-synthesis, but with a diminished requirement for Ubc13. Interestingly, reduction of G to C substitutions in Ubc13-deficient strains uncovered a previously unknown role of Ubc13 in controlling the activity of the translesion synthesis polymerase, Rev1. Our results highlight a novel mechanism for error-free bypass of deoxyuridines generated within ssDNA and suggest that the APOBEC mutation signature observed in cancer genomes may under-represent the genomic damage these enzymes induce. PMID:28334887
[Safety management in pathology laboratory: from specimen handling to confirmation of reports].
Minato, Hiroshi; Nojima, Takayuki; Nakano, Mariko; Yamazaki, Michiko
2011-03-01
Medical errors in pathological diagnosis cause enormous physical and psychological damage to patients as well as to medical staff. We discuss here how to avoid medical errors in the surgical pathology laboratory, based on our experience. Handling surgical specimens and the diagnostic process are labor-intensive and involve many steps. Each hospital reports many kinds of accidents or incidents; however, many laboratories share common problems, and each process carries its own specific risk of error. We analyzed the problems in each process and concentrated on avoiding misaccessioning, mislabeling, and misreporting. We have made several changes to our system, such as barcode labels, digital images of all specimens, placing endoscopic biopsy specimens directly into embedding cassettes, and using a multitissue control block as a control in immunohistochemistry. Some problems remain, but we have reduced errors by decreasing the number of manual operations as much as possible. A system that records whether clinicians have read each pathology report is now under construction. We also discuss quality assurance of diagnosis, cooperation with clinicians and other co-medical staff, and organization and methods. To achieve risk-free work, it is important for all medical staff to share awareness of the problems, maintain careful observation, and share all information in common. Incorporation of an organizational management tool such as ISO 15189 and use of the PDCA cycle are also helpful for safety management and quality improvement of the laboratory.
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in heavy-tailed distributions, more accurately representing orbit states with low probability. The classical methods of orbit determination (i.e. the Kalman filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors, which are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A particle filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of the full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied to the estimation and propagation of a highly eccentric orbit, and the results are compared with the extended Kalman filter and splitting Gaussian mixture algorithms to demonstrate its proficiency.
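As a toy illustration of the approach, a bootstrap particle filter propagates a cloud of samples, reweights them by the measurement likelihood, and resamples, so the particle cloud itself approximates the full posterior PDF, including heavy tails a Kalman filter would miss. The scalar random-walk dynamics and Gaussian likelihood below are placeholder assumptions, not an orbit model.

```python
# Minimal bootstrap particle filter sketch for a scalar state (toy model).
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(zs, n=1000, q=0.1, r=0.5):
    x = rng.normal(0.0, 1.0, n)                   # initial particle cloud
    for z in zs:
        x = x + rng.normal(0.0, q, n)             # propagate (toy dynamics)
        w = np.exp(-0.5 * ((z - x) / r) ** 2)     # Gaussian measurement likelihood
        w /= w.sum()
        x = x[rng.choice(n, size=n, p=w)]         # resample by weight
    return x                                      # samples approximating the posterior PDF

posterior = particle_filter(zs=[0.2, 0.4, 0.1])
print(posterior.mean(), posterior.std())
```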
[Adverse events in general surgery. A prospective analysis of 13,950 consecutive patients].
Rebasa, Pere; Mora, Laura; Vallverdú, Helena; Luna, Alexis; Montmany, Sandra; Romaguera, Andreu; Navarro, Salvador
2011-11-01
Adverse event (AE) rates in general surgery vary, according to different authors and recording methods, between 2% and 30%. Six years ago we designed a prospective AE recording system to change the patient safety culture in our department. We present the results of this work after 6 years of follow-up. The AEs, sequelae and health care errors in a university hospital surgery department were recorded. Each recorded incident was analysed by a reviewer. The data were entered into a database for rapid access and consultation. The results were routinely presented in departmental morbidity-mortality sessions. A total of 13,950 patients suffered 11,254 AEs, which affected 5142 of them (36.9% of admissions). A total of 920 patients were subjected to at least one health care error (6.6% of admissions), meaning that 6.6% of our patients suffered an avoidable AE. The overall mortality at 5 years in our department was 2.72% (380 deaths). An adverse event was implicated in the death of the patient in 180 cases (1.29% of admissions). In 49 cases (0.35% of admissions), mortality could be attributed to an avoidable AE. Over the 6 years, the incidence of errors showed a steadily decreasing trend. Exhaustive, prospective recording of AEs leads to changes in the patient safety culture of a surgery department and helps decrease the incidence of health care errors. Copyright © 2011 AEC. Published by Elsevier Espana. All rights reserved.
Heard, Gaylene C; Thomas, Rowan D; Sanderson, Penelope M
2016-05-01
Although most anesthesiologists will have 1 catastrophic perioperative event or more during their careers, there has been little research on their attitudes to assistive strategies after the event. There are wide-ranging emotional consequences for anesthesiologists involved in an unexpected intraoperative patient death, particularly if the anesthesiologist made an error. We used a between-groups survey study design to ask whether there are different attitudes to assistive strategies when a hypothetical patient death is caused by a drug error versus not caused by an error. First, we explored attitudes to generalized supportive strategies. Second, we examined our hypothesis that the presence of an error causing the hypothetical patient death would increase the perceived social stigma and self-stigma of help-seeking. Finally, we examined the strategies to assist help-seeking. An anonymous, mailed, self-administered survey was conducted with 1600 consultant anesthesiologists in Australia on the mailing list of the Australian and New Zealand College of Anaesthetists. The participants were randomized into "error" versus "no-error" groups for the hypothetical scenario of patient death due to anaphylaxis. Nonparametric, descriptive, parametric, and inferential tests were used for data analysis. P' is used where P values were corrected for multiple comparisons. There was a usable response rate of 48.9%. When an error had caused the hypothetical patient death, participants were more likely to agree with 4 of the 5 statements about support, including need for time off (P' = 0.003), counseling (P' < 0.001), a formal strategy for assistance (P' < 0.001), and the anesthesiologist not performing further cases that day (P' = 0.047). There were no differences between groups in perceived self-stigma (P = 0.98) or social stigma (P = 0.15) of seeking counseling, whether or not an error had caused the hypothetical patient death. Finally, when an error had caused the patient death, participants were more likely to agree with 2 of the 5 statements about help-seeking, including the need for a formal, hospital-based process that provides information on where to obtain professional counseling (P' = 0.006) and the availability of after-hours counseling services (P' = 0.035). Our participants were more likely to agree with assistive strategies such as not performing further work that day, time off, counseling, formal support strategies, and availability of after-hours counseling services, when the hypothetical patient death from anaphylaxis was due to an error. The perceived stigma toward attending counseling was not affected by the presence or absence of an error as the cause of the patient death, disproving our hypothesis.
Medical students' experiences with medical errors: an analysis of medical student essays.
Martinez, William; Lo, Bernard
2008-07-01
This study aimed to examine medical students' experiences with medical errors. In 2001 and 2002, 172 fourth-year medical students wrote an anonymous description of a significant medical error they had witnessed or committed during their clinical clerkships. The assignment represented part of a required medical ethics course. We analysed 147 of these essays using thematic content analysis. Many medical students made or observed significant errors. In either situation, some students experienced distress that seemingly went unaddressed. Furthermore, this distress was sometimes severe and persisted after the initial event. Some students also experienced considerable uncertainty as to whether an error had occurred and how to prevent future errors. Many errors may not have been disclosed to patients, and some students who desired to discuss or disclose errors were apparently discouraged from doing so by senior doctors. Some students criticised senior doctors who attempted to hide errors or avoid responsibility. By contrast, students who witnessed senior doctors take responsibility for errors and candidly disclose errors to patients appeared to recognise the importance of honesty and integrity and said they aspired to these standards. There are many missed opportunities to teach students how to respond to and learn from errors. Some faculty members and housestaff may at times respond to errors in ways that appear to contradict professional standards. Medical educators should increase exposure to exemplary responses to errors and help students to learn from and cope with errors.
UAS Well Clear Recovery Against Non-Cooperative Intruders Using Vertical Maneuvers
NASA Technical Reports Server (NTRS)
Cone, Andrew C.; Thipphavong, David; Lee, Seung Man; Santiago, Confesor
2017-01-01
This paper documents a study that drove the development of a mathematical expression in the detect-and-avoid (DAA) minimum operational performance standards (MOPS) for unmanned aircraft systems (UAS). This equation describes the conditions under which vertical maneuver guidance should be provided during recovery of DAA well clear separation with a non-cooperative VFR aircraft. Although the original hypothesis was that vertical maneuvers for DAA well clear recovery should only be offered when sensor vertical rate errors are small, this paper suggests that UAS climb and descent performance should be considered, in addition to sensor errors for vertical position and vertical rate, when determining whether to offer vertical guidance. A fast-time simulation study involving 108,000 encounters between a UAS and a non-cooperative visual-flight-rules aircraft was conducted. Results are presented showing that, when vertical maneuver guidance for DAA well clear recovery was suppressed, the minimum vertical separation increased by roughly 50 feet (or horizontal separation by 500 to 800 feet). However, the percentage of encounters that had a risk of collision when performing vertical well clear recovery maneuvers was reduced as UAS vertical rate performance increased and sensor vertical rate errors decreased. A class of encounter is identified for which vertical-rate error had a large effect on the efficacy of horizontal maneuvers due to the difficulty of making the correct left/right turn decision: crossing conflict with intruder changing altitude. Overall, these results support logic that would allow vertical maneuvers when UAS vertical performance is sufficient to avoid the intruder, based on the intruder's estimated vertical position and vertical rate, as well as the vertical rate error of the UAS' sensor.
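The abstract describes, without reproducing, the MOPS expression for when vertical guidance should be offered. The sketch below only captures the qualitative logic of the finding, offer a vertical maneuver when the UAS can out-climb the uncertainty in the intruder's vertical state; the threshold form, parameter names, and buffer value are illustrative assumptions, not the actual MOPS equation.

```python
# Illustrative predicate for offering vertical DAA guidance (assumed form).
def offer_vertical_guidance(uas_climb_rate_fpm: float,
                            intruder_vrate_error_fpm: float,
                            intruder_alt_error_ft: float,
                            time_to_cpa_s: float,
                            buffer_ft: float = 450.0) -> bool:
    t_min = time_to_cpa_s / 60.0                           # minutes to closest approach
    achievable_ft = uas_climb_rate_fpm * t_min             # altitude the UAS can gain
    uncertainty_ft = (intruder_alt_error_ft
                      + intruder_vrate_error_fpm * t_min)  # intruder vertical-state band
    return achievable_ft >= uncertainty_ft + buffer_ft

print(offer_vertical_guidance(500.0, 200.0, 100.0, 60.0))
```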
A knowledge-based system design/information tool for aircraft flight control systems
NASA Technical Reports Server (NTRS)
Mackall, Dale A.; Allen, James G.
1989-01-01
Research aircraft have become increasingly dependent on advanced control systems to accomplish program goals. These aircraft are integrating multiple disciplines to improve performance and satisfy research objectives. This integration is being accomplished through electronic control systems. Because of the number of systems involved and the variety of engineering disciplines, systems design methods and information management have become essential to program success. The primary objective of the system design/information tool for aircraft flight control systems is to help transfer flight control system design knowledge to the flight test community. By providing all of the design information and covering multiple disciplines in a structured, graphical manner, flight control systems can more easily be understood by the test engineers. This will provide the engineers with the information needed to thoroughly ground test the system and thereby reduce the likelihood of serious design errors surfacing in flight. The secondary objective is to apply structured design techniques to all of the design domains. By using the techniques from the top-level system design down through the detailed hardware and software designs, it is hoped that fewer design anomalies will result. The flight test experiences of three highly complex, integrated aircraft programs are reviewed: the X-29 forward-swept wing, the advanced fighter technology integration (AFTI) F-16, and the highly maneuverable aircraft technology (HiMAT) program. Significant operating anomalies, and the design errors which caused them, are examined to help identify what functions a system design/information tool should provide to assist designers in avoiding errors.
da Cunha, Antonio Ribeiro
2015-05-01
This study aimed to assess measurements of temperature and relative humidity obtained with a HOBO data logger under various conditions of exposure to solar radiation, comparing them with those obtained using a temperature/relative-humidity probe and a copper-constantan thermocouple psychrometer, which are considered the standards for such measurements. Data were collected over a 6-day period (from 25 March to 1 April, 2010), during which the equipment was monitored continuously and simultaneously. We employed the following combinations of equipment and conditions: a HOBO data logger in full sunlight; a HOBO data logger shielded within a white plastic cup with windows for air circulation; a HOBO data logger shielded within a Gill-type shelter (a multi-plate plastic prototype); a copper-constantan thermocouple psychrometer exposed to natural ventilation and protected from sunlight; and a temperature/relative-humidity probe under a commercial multi-plate radiation shield. Comparisons between the measurements obtained with the various devices were made on the basis of statistical indicators: linear regression, with the coefficient of determination; the index of agreement; the maximum absolute error; and the mean absolute error. The prototype multi-plate (Gill-type) shelter used to protect the HOBO data logger was found to provide the best protection against the effects of solar radiation on measurements of temperature and relative humidity. The precision and accuracy of a device that measures temperature and relative humidity depend on an efficient shelter that minimizes the interference caused by solar radiation, thereby avoiding erroneous analysis of the data obtained.
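The comparison statistics named above are straightforward to compute; a sketch is given below. Willmott's form of the index of agreement is assumed, since the abstract does not state which variant the authors used, and the sample values are invented.

```python
# Sketch of the comparison statistics: mean absolute error, maximum absolute
# error, and Willmott's index of agreement d (assumed variant).
import numpy as np

def comparison_stats(pred, obs):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    mae = np.abs(pred - obs).mean()          # mean absolute error
    max_ae = np.abs(pred - obs).max()        # maximum absolute error
    om = obs.mean()
    d = 1 - ((pred - obs) ** 2).sum() / \
        ((np.abs(pred - om) + np.abs(obs - om)) ** 2).sum()
    return mae, max_ae, d

print(comparison_stats([24.1, 25.0, 26.3], [24.0, 25.4, 26.0]))
```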
Performance evaluation of spatial compounding in the presence of aberration and adaptive imaging
NASA Astrophysics Data System (ADS)
Dahl, Jeremy J.; Guenther, Drake; Trahey, Gregg E.
2003-05-01
Spatial compounding has been used for years to reduce speckle in ultrasonic images and to resolve anatomical features hidden behind the grainy appearance of speckle. Adaptive imaging restores image contrast and resolution by compensating for beamforming errors caused by tissue-induced phase errors. Spatial compounding represents a form of incoherent imaging, whereas adaptive imaging attempts to maintain a coherent, diffraction-limited aperture in the presence of aberration. Using a Siemens Antares scanner, we acquired single channel RF data on a commercially available 1-D probe. Individual channel RF data was acquired on a cyst phantom in the presence of a near field electronic phase screen. Simulated data was also acquired for both a 1-D and a custom built 8x96, 1.75-D probe (Tetrad Corp.). The data was compounded using a receive spatial compounding algorithm; a widely used algorithm because it takes advantage of parallel beamforming to avoid reductions in frame rate. Phase correction was also performed by using a least mean squares algorithm to estimate the arrival time errors. We present simulation and experimental data comparing the performance of spatial compounding to phase correction in contrast and resolution tasks. We evaluate spatial compounding and phase correction, and combinations of the two methods, under varying aperture sizes, aperture overlaps, and aberrator strength to examine the optimum configuration and conditions in which spatial compounding will provide a similar or better result than adaptive imaging. We find that, in general, phase correction is hindered at high aberration strengths and spatial frequencies, whereas spatial compounding is helped by these aberrators.
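As a conceptual sketch of the receive spatial compounding step described above, sub-aperture images are envelope-detected and then averaged incoherently, which suppresses speckle at some cost in coherent resolution. The synthetic RF data below are placeholders; a real implementation would beamform each receive sub-aperture from the recorded single-channel data.

```python
# Conceptual receive spatial compounding: envelope-detect each sub-aperture
# image, then average incoherently to reduce speckle. Data are synthetic.
import numpy as np
from scipy.signal import hilbert

rf = np.random.randn(3, 256, 128)          # 3 sub-apertures x depth x scan lines
envelopes = np.abs(hilbert(rf, axis=1))    # envelope detection along depth
compounded = envelopes.mean(axis=0)        # incoherent average across sub-apertures
```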
Replacing Fortran Namelists with JSON
NASA Astrophysics Data System (ADS)
Robinson, T. E., Jr.
2017-12-01
Maintaining a log of input parameters for a climate model is very important to understanding potential causes for answer changes during the development stages. Additionally, since modern Fortran is now interoperable with C, a more modern approach to software infrastructure to include code written in C is necessary. Merging these two separate facets of climate modeling requires a quality control for monitoring changes to input parameters and model defaults that can work with both Fortran and C. JSON will soon replace namelists as the preferred key/value pair input in the GFDL model. By adding a JSON parser written in C into the model, the input can be used by all functions and subroutines in the model, errors can be handled by the model instead of by the internal namelist parser, and the values can be output into a single file that is easily parsable by readily available tools. Input JSON files can handle all of the functionality of a namelist while being portable between C and Fortran. Fortran wrappers using unlimited polymorphism are crucial to allow for simple and compact code which avoids the need for many subroutines contained in an interface. Errors can be handled with more detail by providing information about location of syntax errors or typos. The output JSON provides a ground truth for values that the model actually uses by providing not only the values loaded through the input JSON, but also any default values that were not included. This kind of quality control on model input is crucial for maintaining reproducibility and understanding any answer changes resulting from changes in the input.
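A small sketch of the quality-control idea described above: load the input JSON, overlay it on the model defaults, and write a single output JSON recording every value the model actually used, defaults included. The keys are invented examples, not GFDL's actual parameters, and the merge is shown in Python rather than the C/Fortran implementation the abstract describes.

```python
# Sketch: merge user JSON input over defaults and emit a "ground truth" file.
import json

defaults = {"dt_atmos": 1800, "do_ocean": True, "npes": 32}   # invented keys
user = json.loads('{"dt_atmos": 900, "npes": 64}')            # stand-in for input.json

merged = {**defaults, **user}              # user settings override defaults

with open("model_state.json", "w") as f:
    json.dump(merged, f, indent=2)         # inputs plus any applied defaults
```

Writing out the merged configuration is what makes run-to-run answer changes traceable: two runs can be diffed on the values the model actually used rather than on partial input files.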
Constrained motion estimation-based error resilient coding for HEVC
NASA Astrophysics Data System (ADS)
Guo, Weihan; Zhang, Yongfei; Li, Bo
2018-04-01
Unreliable communication channels can introduce packet losses and bit errors into the videos transmitted over them, causing severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove inter-frame dependency and thus improve coding efficiency. Once a motion vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions which are predicted by temporal motion vectors. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors in motion vectors and can improve the robustness of the stream over bit-error channels. When the bit error probability is 10^-5, an increase in decoded video quality (PSNR) of up to 1.310 dB, and on average 0.762 dB, can be achieved compared with the reference HEVC.
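A simplified sketch of the constraint idea follows: candidate motion vectors whose reference blocks fall outside a designated error-resilient region (for example, a recently intra-refreshed area) are discarded, cutting the temporal prediction chains along which channel errors propagate. The region test and names are assumptions, a stand-in for the paper's actual constraint.

```python
# Illustrative constrained motion estimation: keep only MV candidates whose
# referenced position lies inside an error-resilient region.
def filter_mv_candidates(candidates, block_x, block_y, safe_region):
    x0, y0, x1, y1 = safe_region                 # region assumed already refreshed
    allowed = []
    for mvx, mvy in candidates:
        rx, ry = block_x + mvx, block_y + mvy    # position the MV would reference
        if x0 <= rx <= x1 and y0 <= ry <= y1:
            allowed.append((mvx, mvy))
    return allowed

print(filter_mv_candidates([(4, 0), (-40, 8)], 64, 64, (32, 32, 96, 96)))
```

Restricting the search set in this way trades a small loss in coding efficiency for bounded error propagation, which is the trade-off the reported PSNR gains quantify.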
Errors in veterinary practice: preliminary lessons for building better veterinary teams.
Kinnison, T; Guile, D; May, S A
2015-11-14
Case studies in two typical UK veterinary practices were undertaken to explore teamwork, including interprofessional working. Each study involved one week of whole team observation based on practice locations (reception, operating theatre), one week of shadowing six focus individuals (veterinary surgeons, veterinary nurses and administrators) and a final week consisting of semistructured interviews regarding teamwork. Errors emerged as a finding of the study. The definition of errors was inclusive, pertaining to inputs or omitted actions with potential adverse outcomes for patients, clients or the practice. The 40 identified instances could be grouped into clinical errors (dosing/drugs, surgical preparation, lack of follow-up), lost item errors, and most frequently, communication errors (records, procedures, missing face-to-face communication, mistakes within face-to-face communication). The qualitative nature of the study allowed the underlying cause of the errors to be explored. In addition to some individual mistakes, system faults were identified as a major cause of errors. Observed examples and interviews demonstrated several challenges to interprofessional teamworking which may cause errors, including: lack of time, part-time staff leading to frequent handovers, branch differences and individual veterinary surgeon work preferences. Lessons are drawn for building better veterinary teams and implications for Disciplinary Proceedings considered. British Veterinary Association.
Minimally invasive surgical technique for tethered surgical drains
Hess, Shane R; Satpathy, Jibanananda; Waligora, Andrew C; Ugwu-Oju, Obinna
2017-01-01
A feared complication of temporary surgical drain placement is the technical error of accidentally suturing the surgical drain into the wound. Postoperative discovery of a tethered drain frequently necessitates a return to the operating room if the drain cannot be removed with nonoperative techniques. Formal wound exploration increases anesthesia and infection risk as well as cost, and is best avoided if possible. We present a minimally invasive surgical technique that can avoid the morbidity associated with a full surgical wound exploration to remove a tethered drain when other nonoperative techniques fail. PMID:28400669
Nishiura, K
1998-08-01
Using rapid serial visual presentation (RSVP), the present study investigated the cause of target intrusion errors and the functioning of monitoring processes. Eighteen students participated in Experiment 1, and 24 in Experiment 2. In Experiment 1, different target intrusion errors were found depending on the kind of letters used: romaji, hiragana, and kanji. In Experiment 2, stimulus set size and context information were manipulated to explore the cause of post-target intrusion errors. Results showed that as stimulus set size increased, post-target intrusion errors also increased, but contextual information did not affect the errors. Results concerning mean report probability indicated that increased allocation of attentional resources to the response-defining dimension was the cause of the errors. In addition, results concerning confidence ratings showed that monitoring of temporal and contextual information was extremely accurate, but monitoring of stimulus information was not. These results suggest that attentional resources are distinct from monitoring resources.
NASA Technical Reports Server (NTRS)
Remer, L. A.; Wald, A. E.; Kaufman, Y. J.
1999-01-01
We obtain valuable information on the angular and seasonal variability of surface reflectance using a hand-held spectrometer from a light aircraft. The data are used to test a procedure that allows us to estimate visible surface reflectance from the longer-wavelength 2.1 micrometer channel (mid-IR). Estimating or avoiding surface reflectance in the visible is a vital first step in most algorithms that retrieve aerosol optical thickness over land targets. The data indicate that specular reflection found when viewing targets from the forward direction can severely corrupt the relationships between the visible and 2.1 micrometer reflectance that were derived from nadir data. There is a month-by-month variation in the ratios between the visible and the mid-IR, weakly correlated with the Normalized Difference Vegetation Index (NDVI). If specular reflection is not avoided, the errors resulting from estimating surface reflectance from the mid-IR exceed the acceptable limit of Δρ ≈ 0.01 in roughly 40% of the cases, using the current algorithm. This is reduced to 25% of the cases if specular reflection is avoided. An alternative method that uses path radiance rather than explicitly estimating visible surface reflectance results in similar errors. The two methods have different strengths and weaknesses that require further study.
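The visible-from-mid-IR estimate discussed above is commonly implemented as fixed ratios. The 1/2 (red) and 1/4 (blue) values below are the widely cited dark-target defaults rather than figures from this paper, which reports month-to-month and angular variation in the ratios.

```python
# Sketch of the mid-IR-based visible surface reflectance estimate.
# Ratios are the commonly cited dark-target defaults, used here as assumptions.
def visible_from_midir(rho_2p1: float):
    rho_red = 0.50 * rho_2p1     # ~0.66 um estimate
    rho_blue = 0.25 * rho_2p1    # ~0.47 um estimate
    return rho_red, rho_blue

print(visible_from_midir(0.10))
```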
Hu, Jianping; Lee, Dianne; Hu, Sien; Zhang, Sheng; Chao, Herta; Li, Chiang-Shan R
2016-06-01
Personality traits contribute to variation in human behavior, including the propensity to take risk. Extant work targeted risk-taking processes with an explicit manipulation of reward, but it remains unclear whether personality traits influence simple decisions such as speeded versus delayed responses during cognitive control. We explored this issue in an fMRI study of the stop signal task, in which participants varied in response time trial by trial, speeding up and risking a stop error or slowing down to avoid errors. Regional brain activations to speeded versus delayed motor responses (risk-taking) were correlated to novelty seeking (NS), harm avoidance (HA) and reward dependence (RD), with age and gender as covariates, in a whole brain regression. At a corrected threshold, the results showed a positive correlation between NS and risk-taking responses in the dorsomedial prefrontal, bilateral orbitofrontal, and frontopolar cortex, and between HA and risk-taking responses in the parahippocampal gyrus and putamen. No regional activations varied with RD. These findings demonstrate that personality traits influence the neural processes of executive control beyond behavioral tasks that involve explicit monetary reward. The results also speak broadly to the importance of characterizing inter-subject variation in studies of cognition and brain functions.
Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.
Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean
2017-06-01
Background There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results The overall prescribing error rate was 9.2% out of 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. The use of electronic prescribing had a higher prescribing error rate than manual prescribing (16.9 vs 8.2%, p < 0.05). Twenty-eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficits. The most common contributing factors were lack of supervision and lack of knowledge. Conclusions Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.
Analyzing Software Errors in Safety-Critical Embedded Systems
NASA Technical Reports Server (NTRS)
Lutz, Robyn R.
1994-01-01
This paper analyzes the root causes of safety-related software faults and shows that faults identified as potentially hazardous to the system are distributed somewhat differently over the set of possible error causes than non-safety-related software faults.
Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class
NASA Astrophysics Data System (ADS)
Novitasari, N.; Lukito, A.; Ekawati, R.
2018-01-01
A slow learner, whose IQ is between 71 and 89, will have difficulties in solving mathematics problems that often lead to errors. These errors can be analyzed for where they occur and their type. This research is a qualitative descriptive study which aims to describe the locations, types, and causes of slow learner errors in an inclusive junior high school class in solving fraction problems. The subject of this research is one seventh-grade slow learner, selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who handles the slow learner students. Data collection methods used in this study are written tasks and semistructured interviews. The collected data were analyzed by Newman's Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors. There are four types of errors, namely concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers to identify the causes of the errors made by slow learners.
Development of an Ultra-Wideband Circularly Polarized Multiple Layer Dielectric Rod Antenna Design
NASA Astrophysics Data System (ADS)
Wainwright, Gregory D.
This dissertation focuses on the development of a novel Ultra-Wideband (UWB) circularly polarized dielectric rod antenna (CPDRA) which yields a constant gain, pattern, and phase center. These properties are important in many applications. Within radar systems a constant phase center is desirable to avoid errors in downrange and crossrange measurements. In a reflector antenna the illumination, spillover, and phase efficiencies will remain the same over an ultra-wide band. Lastly, near-field probes require smooth amplitude and phase patterns over frequency to avoid errors during the calibration process of the antenna under test. In this dissertation a novel CP feeding network has been developed for an ultra-wideband dielectric rod antenna. Circularly polarized antennas have a major advantage over their linearly polarized counterparts in that the polarization mismatch loss caused by misalignment between the polarizations of the incident fields and the antenna can be avoided. This is important in satellite communications and broadcasts, where signal propagation through the ionosphere can experience Faraday rotation. A circularly polarized antenna is also helpful in mobile radar and communication systems where the receiving antenna's orientation is not fixed. Previous research on UWB dielectric rod antenna designs has focused on dual linear feeds. Each polarization within the dual linear feed is excited by a pair of linear launcher arms fed with a 0°-180° hybrid balun. The proposed CPDRA design does not require the 0°-180° hybrid baluns or a 0°-90° hybrid for achieving CP operation. These hybrids increase the antenna's size, weight, and cost, and reduce its operational bandwidth. A design technique has been developed for an UWB multilayer dielectric waveguide used in a CPDRA. This design technique uses near-field electric field data from inside the waveguide, in conjunction with a genetic algorithm optimization, to yield a wideband waveguide with a near-field amplitude distribution that scales with frequency. A multilayered dielectric waveguide presents many fabrication challenges. The thermal expansion rates, moisture absorption rates, and vibration properties differ among the various dielectric materials used. Therefore, the development of a wideband waveguide using one material with a low dielectric constant would be advantageous since 3-D printing technology can be utilized. In this dissertation, novel TE01 and TM01 mode suppressors have been developed using only a single dielectric material.
Wang, Chaolong; Schroeder, Kari B.; Rosenberg, Noah A.
2012-01-01
Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy–Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. Because the data sets imputed under our model can be investigated in additional subsequent analyses, our method will be useful for preparing data for applications in diverse contexts in population genetics and molecular ecology. PMID:22851645
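The downward bias the authors correct is easy to reproduce in a few lines: the toy simulation below draws genotypes at a locus in Hardy–Weinberg equilibrium, drops each allelic copy independently with some probability, and scores a heterozygote only when both copies amplify, so the scored heterozygosity falls below the true value. The allele frequency and dropout rate are arbitrary illustrative choices; this is not the authors' maximum-likelihood/EM estimator.

```python
# Toy reproduction of the dropout-induced bias (not the paper's
# estimator): heterozygotes whose second copy fails to amplify are
# mis-scored as homozygotes, deflating observed heterozygosity.
import numpy as np

rng = np.random.default_rng(1)
p, gamma, n = 0.5, 0.2, 100_000   # allele freq, per-copy dropout rate, samples

a1 = rng.random(n) < p            # allele copy 1 (True = allele A)
a2 = rng.random(n) < p            # allele copy 2
true_het = a1 != a2

drop1 = rng.random(n) < gamma     # each copy drops out independently
drop2 = rng.random(n) < gamma
scored = ~(drop1 & drop2)         # genotype recorded if >= 1 copy amplifies
seen_het = true_het & ~drop1 & ~drop2   # het visible only if both amplify

print("true heterozygosity:  ", true_het.mean())          # ~2p(1-p) = 0.50
print("scored heterozygosity:", seen_het[scored].mean())  # biased downward
```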
Retrospective analysis of refractive errors in children with vision impairment.
Du, Jojo W; Schmid, Katrina L; Bevan, Jennifer D; Frater, Karen M; Ollett, Rhondelle; Hein, Bronwyn
2005-09-01
Emmetropization is the reduction in neonatal refractive errors that occurs after birth. Ocular disease may affect this process. We aimed to determine the relative frequency of ocular conditions causing vision impairment in the pediatric population and characterize the refractive anomalies present. We also compared the causes of vision impairment in children today to those between 1974 and 1981. Causes of vision impairment and refractive data of 872 children attending a pediatric low-vision clinic from 1985 to 2002 were retrospectively collated. As a result of associated impairments, refractive data were not available for 59 children. An analysis was made of the causes of vision impairment, the distribution of refractive errors in children with vision impairment, and the average type of refractive error for the most commonly seen conditions. We found that cortical or cerebral vision impairment (CVI) was the most common condition causing vision impairment, accounting for 27.6% of cases. This was followed by albinism (10.6%), retinopathy of prematurity (ROP; 7.0%), optic atrophy (6.2%), and optic nerve hypoplasia (5.3%). Vision impairment was associated with ametropia; fewer than 25% of the children had refractive errors ≤ ±1 D. The refractive error frequency plots (for 0 to 2-, 6 to 8-, and 12 to 14-year age bands) had a Gaussian distribution indicating that the emmetropization process was abnormal. The mean spherical equivalent refractive error of the children (n = 813) was +0.78 ± 6.00 D with 0.94 ± 1.24 D of astigmatism and 0.92 ± 2.15 D of anisometropia. Most conditions causing vision impairment such as albinism were associated with low amounts of hyperopia. Moderate myopia was observed in children with ROP. The relative frequency of ocular conditions causing vision impairment in children has changed since the 1970s. Children with vision impairment often have an associated ametropia suggesting that the emmetropization system is also impaired.
Zhang, Jia-Shu; Qu, Ling; Wang, Qun; Jin, Wei; Hou, Yuan-Zheng; Sun, Guo-Chen; Li, Fang-Ye; Yu, Xin-Guang; Xu, Ban-Nan; Chen, Xiao-Lei
2017-12-20
For stereotactic brain biopsy involving motor eloquent regions, the surgical objective is to enhance diagnostic yield and preserve neurological function. To achieve this aim, we implemented functional neuro-navigation and intraoperative magnetic resonance imaging (iMRI) into the biopsy procedure. The impact of this integrated technique on the surgical outcome and postoperative neurological function was investigated and evaluated. Thirty-nine patients with lesions involving motor eloquent structures underwent frameless stereotactic biopsy assisted by functional neuro-navigation and iMRI. Intraoperative visualisation was realised by integrating anatomical and functional information into a navigation framework to improve biopsy trajectories and preserve eloquent structures. iMRI was conducted to guarantee biopsy accuracy and detect intraoperative complications. The perioperative change of motor function and the biopsy error before and after iMRI were recorded, and the role of functional information in trajectory selection and the relationship between the distance from the sampling site to nearby eloquent structures and neurological deterioration were further analyzed. Functional neuro-navigation helped modify the original trajectories and sampling sites in 35.90% (16/39) of cases to avoid damage to eloquent structures. Even though all the lesions were at high risk of causing neurological deficits, no significant difference was found between preoperative and postoperative muscle strength. After data analysis, 3 mm was determined to be the safe distance for avoiding transient neurological deterioration. During surgery, the use of iMRI significantly reduced the biopsy errors (p = 0.042) and potentially increased the diagnostic yield from 84.62% (33/39) to 94.87% (37/39). Moreover, iMRI detected intraoperative haemorrhage in 5.13% (2/39) of patients, all of whom benefited from intraoperative strategies based on the iMRI findings. Intraoperative visualisation of functional structures could be a feasible, safe and effective technique. Combined with intraoperative high-field MRI, it contributed to enhanced biopsy accuracy and lower neurological complications in stereotactic brain biopsy involving motor eloquent areas.
NASA Astrophysics Data System (ADS)
Lu, Biao; Luo, Zhicai; Zhong, Bo; Zhou, Hao; Flechtner, Frank; Förste, Christoph; Barthelmes, Franz; Zhou, Rui
2017-11-01
Based on tensor theory, three invariants of the gravitational gradient tensor (IGGT) are independent of the gradiometer reference frame (GRF). Compared to traditional methods for the calculation of gravity field models based on gravity field and steady-state ocean circulation explorer (GOCE) data, which are affected by errors in the attitude indicator, using IGGT and the least squares method avoids the problem of inaccurate rotation matrices. The IGGT approach as studied in this paper is a quadratic function of the gravity field model's spherical harmonic coefficients. The linearized observation equations for the least squares method are obtained using a Taylor expansion, and the weighting equation is derived using the law of error propagation. We also investigate the linearization errors using existing gravity field models and find that this error can be ignored since the a-priori model EIGEN-5C used is sufficiently accurate. One problem when using this approach is that it needs all six independent gravitational gradients (GGs), but the components V_{xy} and V_{yz} of GOCE are of lower quality due to the non-sensitive axes of the GOCE gradiometer. Therefore, we use synthetic GGs, derived from the a-priori gravity field model EIGEN-5C, for these two inaccurate gravitational gradient components. Another problem is that the GOCE GGs are measured in a band-limited manner. Therefore, a forward and backward finite impulse response band-pass filter is applied to the data, which also eliminates the phase change caused by filtering. The spherical cap regularization approach (SCRA) and the Kaula rule are then applied to solve the polar gap problem caused by GOCE's inclination of 96.7°. With the techniques described above, a degree/order 240 gravity field model called IGGT_R1 is computed. Since the synthetic components of V_{xy} and V_{yz} are not band-pass filtered, the signals outside the measurement bandwidth are replaced by the a-priori model EIGEN-5C. Therefore, this model is practically a combined gravity field model which contains GOCE GG signals and long wavelength signals from the a-priori model EIGEN-5C. Finally, IGGT_R1's accuracy is evaluated by comparison with other gravity field models in terms of difference degree amplitudes, the geostrophic velocity in the Agulhas current area, and gravity anomaly differences, as well as by comparison to GNSS/leveling data.
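As a concrete illustration of why the invariants are frame-independent, the sketch below computes the three standard invariants of a symmetric gradient tensor (trace, the quadratic invariant, and the determinant) and checks numerically that expressing the tensor in a rotated reference frame leaves them unchanged. The tensor values and rotation angles are arbitrary test data, not GOCE measurements.

```python
# Numerical check of the IGGT property: the three invariants of a
# symmetric gravitational gradient tensor V are unchanged when V is
# expressed in a rotated gradiometer reference frame (V' = R V R^T).
import numpy as np

def invariants(V):
    I1 = np.trace(V)                                   # ~0 in vacuum (Laplace)
    I2 = 0.5 * (np.trace(V) ** 2 - np.trace(V @ V))    # quadratic invariant
    I3 = np.linalg.det(V)                              # cubic invariant
    return np.array([I1, I2, I3])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

V = np.array([[1.0, 0.2, 0.1],
              [0.2, 2.0, 0.3],
              [0.1, 0.3, -3.0]])            # symmetric, trace-free test tensor
R = rot_z(0.7) @ rot_x(1.1)                 # an arbitrary frame rotation
V_rot = R @ V @ R.T                         # tensor in the rotated frame

print(np.allclose(invariants(V), invariants(V_rot)))   # True
```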
NASA Astrophysics Data System (ADS)
Choi, J. H.; Kim, S. W.; Won, J. S.
2017-12-01
The objective of this study is monitoring and evaluating the stability of buildings in Seoul, Korea. This study includes both algorithm development and application to a case study. The development focuses on improving the PSI approach for discriminating various geophysical phase components and separating them from the target displacement phase. Thermal expansion is one of the key components that make precise displacement measurement difficult. The core idea is to optimize the thermal expansion factor using air temperature data and to model the corresponding phase by fitting the residual phase. We used TerraSAR-X SAR data acquired over two years from 2011 to 2013 in Seoul, Korea. The seasonal temperature fluctuation is considerably high in Seoul. Another problem is the highly developed skyscrapers in Seoul, which seriously contribute to DEM errors. To avoid a high computational burden and unstable solutions of the nonlinear equation due to the unknown parameters (a thermal expansion parameter as well as two conventional parameters: linear velocity and DEM error), we separate the phase model into two main steps as follows. First, multi-baseline pairs with very short time intervals, in which deformation components and thermal expansion are negligible, were used to estimate DEM errors. Second, single-baseline pairs were used to estimate the two remaining parameters, linear deformation rate and thermal expansion. The thermal expansion of buildings closely correlates with the seasonal temperature fluctuation. Figure 1 shows deformation patterns of two selected buildings in Seoul. In the left column of Figure 1, it is difficult to observe the true ground subsidence due to a large cyclic pattern caused by thermal dilation of the buildings. The thermal dilation often misleads the results into wrong conclusions. After correction by the proposed method, true ground subsidence could be precisely measured, as in the bottom right panel of Figure 1. The results demonstrate how the thermal expansion phase blinds the time-series measurement of ground motion and how well the proposed approach is able to remove the noise phases caused by thermal expansion and DEM errors. Some of the detected displacements matched well with pre-reported events, such as ground subsidence and sinkholes.
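The second estimation step amounts to a small linear regression per scatterer. A toy version is sketched below, where the displacement series is modeled as a linear rate plus a term proportional to mean-removed air temperature; the acquisition cadence, temperature curve, and parameter values are illustrative assumptions, not TerraSAR-X results.

```python
# Toy decomposition in the spirit of the two-step approach: fit a
# per-pixel displacement time series as linear motion plus a term
# proportional to air temperature (all values assumed for illustration).
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 2, 1 / 12)                  # 2 years of monthly scenes [yr]
temp = 15 + 12 * np.sin(2 * np.pi * t)       # seasonal air temperature [degC]
v_true, k_true = -8.0, 0.5                   # subsidence [mm/yr], thermal [mm/degC]
disp = v_true * t + k_true * (temp - temp.mean()) + rng.normal(0, 0.5, t.size)

# Least-squares separation of the linear motion and the thermal term
A = np.column_stack([t, temp - temp.mean(), np.ones_like(t)])
v_hat, k_hat, _ = np.linalg.lstsq(A, disp, rcond=None)[0]
print(f"linear rate ~ {v_hat:.1f} mm/yr, thermal factor ~ {k_hat:.2f} mm/degC")
```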
NASA Technical Reports Server (NTRS)
Landon, Lauren Blackwell; Vessey, William B.; Barrett, Jamie D.
2015-01-01
A team is defined as: "two or more individuals who interact socially and adaptively, have shared or common goals, and hold meaningful task interdependences; it is hierarchically structured and has a limited life span; in it expertise and roles are distributed; and it is embedded within an organization/environmental context that influences and is influenced by ongoing processes and performance outcomes" (Salas, Stagl, Burke, & Goodwin, 2007, p. 189). From the NASA perspective, a team is commonly understood to be a collection of individuals that is assigned to support and achieve a particular mission. Thus, depending on context, this definition can encompass both the spaceflight crew and the individuals and teams in the larger multi-team system who are assigned to support that crew during a mission. The Team Risk outcomes of interest are predominantly performance related, with a secondary emphasis on long-term health; this is somewhat unique in the NASA HRP in that most Risk areas are medically related and primarily focused on long-term health consequences. In many operational environments (e.g., aviation), performance is assessed as the avoidance of errors. However, the research on performance errors is ambiguous. It implies that actions may be dichotomized into "correct" or "incorrect" responses, where incorrect responses or errors are always undesirable. Researchers have argued that this dichotomy is a harmful oversimplification, and that it would be more productive to focus on the variability of human performance and how organizations can manage that variability (Hollnagel, Woods, & Leveson, 2006) (Category III). Two problems occur when focusing on performance errors: 1) the errors are infrequent and, therefore, difficult to observe and record; and 2) the errors do not directly correspond to failure. Research reveals that humans are fairly adept at correcting or compensating for performance errors before such errors result in recognizable or recordable failures. Astronauts are notably adept high performers. Most failures are recorded only when multiple, small errors occur and humans are unable to recognize and correct or compensate for these errors in time to prevent a failure (Dismukes, Berman, & Loukopoulos, 2007) (Category III). More commonly, observers record variability in levels of performance. Some teams commit no observable errors but fail to achieve performance objectives or perform only adequately, while other teams commit some errors but perform spectacularly. Successful performance, therefore, cannot be viewed as simply the absence of errors or the avoidance of failure (Johnson Space Center (JSC) Joint Leadership Team, 2008). While failure is commonly attributed to making a major error, focusing solely on the elimination of error(s) does not significantly reduce the risk of failure. Failure may also occur when performance is simply insufficient or an effort is incapable of adjusting sufficiently to a contextual change (e.g., changing levels of autonomy).
ATC operational error analysis.
DOT National Transportation Integrated Search
1972-01-01
The primary causes of operational errors are discussed and the effects of these errors on an ATC system's performance are described. No attempt is made to specify possible error models for the spectrum of blunders that can occur although previous res...
Quality of death notification forms in North West Bank/Palestine: a descriptive study.
Qaddumi, Jamal A S; Nazzal, Zaher; Yacoup, Allam R S; Mansour, Mahmoud
2017-04-11
The death notification forms (DNFs) are important documents. Thus, physicians' failure to complete them properly affects the national mortality report and, consequently, evidence-based decision making. Errors in completing DNFs are common all over the world and differ in type and cause. We aimed to evaluate the quality of DNFs in terms of completeness and the types of errors in the cause-of-death section. A descriptive study was conducted to review 2707 DNFs in the North West Bank/Palestine during the year 2012 using data abstraction sheets. SPSS 17.0 was used to show the frequency of major and minor errors committed in filling out the DNFs. Surprisingly, only 1% of the examined DNFs had their cause-of-death section filled in completely correctly. The immediate cause of death was correctly identified in 5.9% of all DNFs and the underlying cause of death was correctly reported in 55.4% of them. The sequence was incorrect in 41.5% of the DNFs. The most frequently documented minor error was the "not writing time intervals" error (97.0%). Almost all DNFs contained at least one minor or major error. This high percentage of errors may affect mortality and morbidity statistics, public health research and the process of providing evidence for health policy. Training workshops on DNF completion are recommended on a regular basis for newly recruited employees and at the beginning of the residency program. We also recommend reviewing the national DNF to simplify it and make it consistent with updated evidence-based guidelines and recommendations.
Porous plug for reducing orifice induced pressure error in airfoils
NASA Technical Reports Server (NTRS)
Plentovich, Elizabeth B. (Inventor); Gloss, Blair B. (Inventor); Eves, John W. (Inventor); Stack, John P. (Inventor)
1988-01-01
A porous plug is provided for the reduction or elimination of positive error caused by the orifice during static pressure measurements of airfoils. The porous plug is press fitted into the orifice, thereby preventing the error caused either by fluid flow turning into the exposed orifice or by the fluid flow stagnating at the downstream edge of the orifice. In addition, the porous plug is made flush with the outer surface of the airfoil, by filing and polishing, to provide a smooth surface which alleviates the error caused by imperfections in the orifice. The porous plug is preferably made of sintered metal, which allows air to pass through the pores, so that the static pressure measurements can be made by remote transducers.
Extensibility in local sensor based planning for hyper-redundant manipulators (robot snakes)
NASA Technical Reports Server (NTRS)
Choset, Howie; Burdick, Joel
1994-01-01
Partial Shape Modification (PSM) is a local sensor feedback method used for hyper-redundant robot manipulators, in which the redundancy is very large or infinite, such as in a robot snake. This redundancy enables local obstacle avoidance and end-effector placement in real time. Due to the large number of joints or actuators in a hyper-redundant manipulator, small displacement errors easily accumulate into large errors in the position of the tip relative to the base. The accuracy can be improved by a local sensor-based planning method in which sensors are distributed along the length of the hyper-redundant robot. This paper extends the local sensor-based planning strategy beyond the limitation of the manipulator's fixed length when its joint limits are met. This is achieved with an algorithm in which the length of the deforming part of the robot is variable. Thus, the robot's local avoidance of obstacles is improved through the enhancement of its extensibility.
[Building questions in forensic medicine and their logical basis].
Kovalev, D; Shmarov, K; Ten'kov, D
2015-01-01
The authors characterize in brief the requirements to the correct formulation of the questions posed to forensic medical experts with special reference to the mistakes made in building the questions and the ways to avoid them. This article actually continues the series of publications of the authors concerned with the major logical errors encountered in expert conclusions. Further publications will be dedicated to the results of the in-depth analysis of the logical errors contained in the questions posed to forensic medical experts and encountered in the expert conclusions.
System safety management: A new discipline
NASA Technical Reports Server (NTRS)
Pope, W. C.
1971-01-01
The systems theory is discussed in relation to safety management. It is suggested that systems safety management, as a new discipline, holds great promise for reducing operating errors, conserving labor resources, avoiding operating costs due to mistakes, and for improving managerial techniques. It is pointed out that managerial failures or system breakdowns are the basic reasons for human errors and condition defects. In this respect, a recommendation is made that safety engineers stop visualizing the problem only with the individual (supervisor or employee) and see the problem from the systems point of view.
Avoiding Misdiagnosis in Patients with Neurological Emergencies
Pope, Jennifer V.; Edlow, Jonathan A.
2012-01-01
Approximately 5% of patients presenting to emergency departments have neurological symptoms. The most common symptoms or diagnoses include headache, dizziness, back pain, weakness, and seizure disorder. Little is known about the actual misdiagnosis of these patients, which can have disastrous consequences for both the patients and the physicians. This paper reviews the existing literature about the misdiagnosis of neurological emergencies and analyzes the reason behind the misdiagnosis by specific presenting complaint. Our goal is to help emergency physicians and other providers reduce diagnostic error, understand how these errors are made, and improve patient care. PMID:22888439
[Mortality by avoidable causes in preschool children].
Lurán, Albenia; López, Elizabeth; Pinilla, Consuelo; Sierra, Pedro
2009-03-01
The mortality rate in children aged less than five is an indicator of the general state of health of a population and directly reflects the quality of life and the level of socio-economic development of a country. Avoidable mortality was assessed in preschool children as a reflection of Colombia's quality of life and socio-economic development. Mortality trends were analyzed in children aged less than five throughout Colombia during the 20-year period 1985-2004, focusing on mortality causes that were considered avoidable. This was a descriptive, retrospective study; the sources of information were Departamento Administrativo Nacional de Estadística records of deaths and population projections for 1985-2004. The mortality rate due to avoidable causes was the statistical indicator. In children aged less than one, mortality reducible by "early diagnosis and medical treatment" occupied first place among causes in every year of the study period and accounted for more than 50% of recorded deaths. In children aged 1 to 4, the category "other important reducible causes" was associated with 40% of recorded deaths, mainly due to respiratory diseases. Over the 20-year period, the avoidable mortality rate decreased by 34% in children aged less than one; in children aged 1-4, it decreased by 23%. Although mortality in preschool children was reduced, the proportion of deaths due to avoidable causes decreased only slightly, from 80% to 77%. The situation requires further analysis with respect to public health strategies, particularly concerning preventable diseases of infancy.
NASA Astrophysics Data System (ADS)
Jose, Tony; Narayanan, Vijayakumar
2018-03-01
Radio over fiber (RoF) systems use a large number of base stations (BSs) and a number of central stations (CSs), which are interlinked to form the network. RoF systems use multiple wavelengths for communication between CSs or between CSs and BSs to accommodate the huge amount of data traffic generated by multiple services for a large number of users. When erbium-doped fiber amplifiers (EDFAs) are used as amplifiers in such wavelength-division multiplexed systems, the nonuniform gain spectrum of EDFAs causes instability in some of the channels while providing faithful amplification to others. To avoid this inconsistency, the gain spectrum of the amplifier needs to be uniform across the whole usable range of wavelengths. A gain contouring technique is proposed to provide uniform gain to all channels irrespective of wavelength. Optical add/drop multiplexers (OADMs) and different lengths of erbium-doped fiber are used to create such a gain contouring mechanism in the optical domain itself. The effect of a cascade of nonuniform-gain amplifiers is studied, and the proposed system effectively mitigates the adverse effects caused by nonuniform-gain-induced channel instability.
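The motivation for gain contouring is easy to see numerically: per-amplifier gain ripple expressed in dB adds linearly along a cascade, so a small per-stage tilt becomes a large inter-channel power spread. The sketch below uses an invented sinusoidal ripple and amplifier count purely for illustration, not measured EDFA data.

```python
# Illustration of ripple accumulation in an EDFA cascade and its removal
# by a per-stage equalizing filter (all numbers are assumed).
import numpy as np

wavelengths = np.linspace(1530, 1560, 31)             # C-band channels [nm]
ripple = np.sin((wavelengths - 1530) / 30 * np.pi)    # 1 dB peak gain ripple
gain_per_amp = 20.0 + ripple                          # per-EDFA gain [dB]

n_amps = 10
print("channel spread, 1 amp  :", np.ptp(gain_per_amp), "dB")
print("channel spread, 10 amps:", np.ptp(n_amps * gain_per_amp), "dB")

# A per-stage equalizer (e.g., OADM-based contouring) with inverse ripple
equalized = gain_per_amp - ripple
print("spread with equalization:", np.ptp(n_amps * equalized), "dB")
```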
Evaluation the effect of energetic particles in solar flares on satellite's life time
NASA Astrophysics Data System (ADS)
Bagheri, Z.; Davoudifar, P.
2016-09-01
As satellites play multiple roles in human life, damage to them and the resulting logical failures of their segments cause problems and significant expense, so evaluating the different types of failures in their segments has a crucial role. Solar particles are one of the most important causes of segment damage (hard and soft), both during solar events and in quiet times. During a solar event these particles may cause extensive damage that can even be permanent (hard errors). To avoid these effects and design shielding, we need to know the SEP (solar energetic particle) flux and the MTTF (mean time to failure) of segments. In the present work, we calculated the SEP flux that strikes the satellite in quiet times, at different altitudes. OMERE software was used to determine the coordinates and specifications of a satellite which, in the simulations, has been launched into space. We then considered a common electronic computer part and calculated the MTTF for it. In the same way, the SEP fluxes were calculated during different solar flares of different solar cycles and the MTTFs were evaluated during the occurrence of solar flares. Thus a relation between solar flare energy and the lifetime of the satellite electronic part (hours) was obtained.
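A back-of-envelope version of the flux-to-lifetime step is sketched below: the single-event upset rate is the particle flux times the per-bit upset cross-section times the number of bits, and MTTF is its reciprocal. The flux levels and cross-section are placeholder values, not the OMERE outputs used in the study.

```python
# Rough SEU-rate sketch: rate = flux * cross-section * bits; MTTF = 1/rate.
flux_quiet = 1e2       # particles / (cm^2 h) reaching the part (assumed)
flux_flare = 1e5       # enhanced flux during a strong flare (assumed)
sigma_bit = 1e-14      # per-bit upset cross-section [cm^2] (assumed)
n_bits = 8 * 1024**3   # one gigabyte of memory

for name, flux in (("quiet", flux_quiet), ("flare", flux_flare)):
    rate = flux * sigma_bit * n_bits          # expected upsets per hour
    print(f"{name}: {rate:.3g} upsets/h, MTTF ~ {1 / rate:.3g} h")
```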
Multiple indicators, multiple causes measurement error models
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; ...
2014-06-25
Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this study are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. Finally, as a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure.
Influence of wheelchair front caster wheel on reverse directional stability.
Guo, Songfeng; Cooper, Rory A; Corfman, Tom; Ding, Dan; Grindle, Garrett
2003-01-01
The purpose of this research was to study directional stability during reversing of rear-wheel drive, electric powered wheelchairs (EPW) under different initial front caster orientations. Specifically, the weight distribution differences caused by certain initial caster orientations were examined as a possible mechanism for causing directional instability that could lead to accidents. Directional stability was quantified by measuring the drive direction error of the EPW by a motion analysis system. The ground reaction forces were collected to determine the load on the front casters, as well as back-emf data to attain the speed of the motors. The drive direction error was found to be different for various initial caster orientations. Drive direction error was greatest when both casters were oriented 90 degrees to the left or right, and least when both casters were oriented forward. The results show that drive direction error corresponds to the loading difference on the casters. The data indicates that loading differences may cause asymmetric drag on the casters, which in turn causes unbalanced torque load on the motors. This leads to a difference in motor speed and drive direction error.
Multiple Indicators, Multiple Causes Measurement Error Models
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.
2014-01-01
Multiple Indicators, Multiple Causes Models (MIMIC) are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model, (2) to develop likelihood based estimation methods for the MIMIC ME model, (3) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535
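To make the measurement-error issue concrete, the sketch below simulates the classical-error half of a MIMIC ME setup: a latent variable driven by an observed cause, an indicator of the latent variable, and a contaminated version of the cause. Regressing the indicator on the contaminated cause shows the familiar attenuation the model is designed to correct. This is a toy data-generating simulation under assumed parameter values, not the authors' likelihood-based estimator.

```python
# Toy MIMIC-with-classical-error simulation: the naive slope on the
# error-contaminated cause W is attenuated relative to the true slope.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
x = rng.normal(0, 1, n)                    # true cause (e.g., radiation dose)
latent = 0.8 * x + rng.normal(0, 0.5, n)   # latent variable (e.g., dyslipidemia)
y = latent + rng.normal(0, 0.5, n)         # one observed indicator
w = x + rng.normal(0, 1.0, n)              # cause with classical error, W = X + U

beta_true = np.polyfit(x, y, 1)[0]         # ~0.8
beta_naive = np.polyfit(w, y, 1)[0]        # attenuated: ~0.8 * 1/(1+1) = 0.4
print(beta_true, beta_naive)
```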
Multiple indicators, multiple causes measurement error models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.
Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this study are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. Finally, as a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure.
How Do Speakers Avoid Ambiguous Linguistic Expressions?
ERIC Educational Resources Information Center
Ferreira, V.S.; Slevc, L.R.; Rogers, E.S.
2005-01-01
Three experiments assessed how speakers avoid linguistically and nonlinguistically ambiguous expressions. Speakers described target objects (a flying mammal, bat) in contexts including foil objects that caused linguistic (a baseball bat) and nonlinguistic (a larger flying mammal) ambiguity. Speakers sometimes avoided linguistic ambiguity, and they…
Mean Bias in Seasonal Forecast Model and ENSO Prediction Error.
Kim, Seon Tae; Jeong, Hye-In; Jin, Fei-Fei
2017-07-20
This study uses retrospective forecasts made using an APEC Climate Center seasonal forecast model to investigate the cause of errors in predicting the amplitude of El Niño Southern Oscillation (ENSO)-driven sea surface temperature variability. When utilizing Bjerknes coupled stability (BJ) index analysis, enhanced errors in ENSO amplitude with forecast lead times are found to be well represented by those in the growth rate estimated by the BJ index. ENSO amplitude forecast errors are most strongly associated with the errors in both the thermocline slope response and surface wind response to forcing over the tropical Pacific, leading to errors in thermocline feedback. This study concludes that upper ocean temperature bias in the equatorial Pacific, which becomes more intense with increasing lead times, is a possible cause of forecast errors in the thermocline feedback and thus in ENSO amplitude.
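The link the abstract draws between growth-rate errors and amplitude errors can be seen in a one-line toy model: if an anomaly grows roughly as A(t) = A0·exp(σt), an underestimated growth rate σ produces an amplitude shortfall that compounds with forecast lead time. The rates below are illustrative placeholders, not BJ-index values from the paper.

```python
# Toy illustration: a low-biased growth rate yields an amplitude error
# that increases with forecast lead time (all values assumed).
import numpy as np

lead = np.arange(0, 10)                     # forecast lead [months]
sigma_true, sigma_model = 0.10, 0.05        # anomaly growth rates [1/month]
A0 = 1.0                                    # initial SST anomaly [K]

amp_true = A0 * np.exp(sigma_true * lead)
amp_model = A0 * np.exp(sigma_model * lead)
print(amp_true - amp_model)                 # amplitude error grows with lead
```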
Cohen, Michael R; Smetzer, Judy L
2014-07-01
These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.
Dynamic performance of an aero-assist spacecraft - AFE
NASA Technical Reports Server (NTRS)
Chang, Ho-Pen; French, Raymond A.
1992-01-01
Dynamic performance of the Aero-assist Flight Experiment (AFE) spacecraft was investigated using a high-fidelity 6-DOF simulation model. Baseline guidance logic, control logic, and the strapdown navigation system to be used on the AFE spacecraft are also modeled in the 6-DOF simulation. During the AFE mission, uncertainties in the environment and the spacecraft are described by an error space which includes both correlated and uncorrelated error sources. The principal error sources modeled in this study include navigation errors, initial state vector errors, atmospheric variations, aerodynamic uncertainties, center-of-gravity offsets, and weight uncertainties. The impact of the perturbations on the spacecraft performance is investigated using Monte Carlo repetitive statistical techniques. During the Solid Rocket Motor (SRM) deorbit phase, a target flight path angle of -4.76 deg at entry interface (EI) offers a very high probability of avoiding SRM casing skip-out from the atmosphere. Generally speaking, the baseline designs of the guidance, navigation, and control systems satisfy most of the science and mission requirements.
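A Monte Carlo dispersion study of the kind described reduces, in its simplest form, to drawing perturbed entry conditions and counting violations. The sketch below does this for the flight path angle alone; the -4.76 deg target comes from the abstract, while the dispersion and the skip-out threshold are illustrative assumptions, not AFE design numbers.

```python
# Monte Carlo dispersion sketch: probability that the achieved entry
# flight path angle is shallow enough to risk skip-out (toy thresholds).
import numpy as np

rng = np.random.default_rng(4)
target_fpa = -4.76     # deg at entry interface, from the abstract
sigma_fpa = 0.15       # 1-sigma dispersion of achieved angle (assumed)
skip_limit = -4.2      # shallower than this -> skip-out (assumed)

fpa = rng.normal(target_fpa, sigma_fpa, 100_000)   # Monte Carlo draws
print(f"estimated skip-out probability: {np.mean(fpa > skip_limit):.2%}")
```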
ISMP Medication Error Report Analysis
Cohen, Michael R.; Smetzer, Judy L.
2017-01-01
These medication errors have occurred in health care facilities at least once. They will happen again—perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications. PMID:28179735
ISMP Medication Error Report Analysis
Cohen, Michael R.; Smetzer, Judy L.
2017-01-01
These medication errors have occurred in health care facilities at least once. They will happen again—perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your in-service training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers’ names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters’ wishes as to the level of detail included in publications. PMID:29276260
NASA Astrophysics Data System (ADS)
Rr Chusnul, C.; Mardiyana, S., Dewi Retno
2017-12-01
Problem solving is the basis of mathematics learning. Problem solving teaches us to clarify an issue coherently in order to avoid misunderstanding information. Sometimes there may be mistakes in problem solving due to misunderstanding the issue, choosing a wrong concept, or misapplying a concept. The problem-solving test was carried out after students were taught using cooperative learning of the TTW type. The purpose of this study was to describe students' problem-solving errors after learning through cooperative learning of the TTW type. Newman stages were used to identify problem-solving errors in this study. The research used a descriptive method to find out students' problem-solving errors. The subjects in this study were 10th-grade students of a Vocational Senior High School (SMK). Tests and interviews were conducted for data collection. The results of this study present students' problem-solving errors, by Newman stage, after learning through cooperative learning of the TTW type.
ISMP Medication Error Report Analysis
Cohen, Michael R.; Smetzer, Judy L.
2016-01-01
These medication errors have occurred in health care facilities at least once. They will happen again—perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications. PMID:28057945
ISMP Medication Error Report Analysis
Cohen, Michael R.; Smetzer, Judy L.
2016-01-01
These medication errors have occurred in health care facilities at least once. They will happen again—perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications. PMID:27928183
Buschbaum, Jan; Fremd, Rainer; Pohlemann, Tim; Kristen, Alexander
2017-08-01
Reduction is a crucial step in the surgical treatment of bone fractures. Finding an optimal path for restoring anatomical alignment is considered technically demanding because collisions, as well as high forces caused by the surrounding soft tissues, can prevent the desired reduction movements. The repetition of reduction movements leads to a trial-and-error process which prolongs the duration of surgery. By planning an appropriate reduction path, an optimal sequence of target-directed movements, these problems should be overcome. For this purpose, a computer-based method has been developed. Using the example of simple femoral shaft fractures, 3D models are generated from CT images. A reposition algorithm aligns both fragments by reconstructing their broken edges. According to the criteria of a deduced planning strategy, a modified A* algorithm searches for a collision-free route of minimal force from the dislocated position into the computed target position. Muscular forces are considered using a musculoskeletal reduction model (OpenSim model), and bone collisions are detected by an appropriate method. Five femoral SYNBONE models were broken into different fracture classification types and were automatically reduced from ten randomly selected displaced positions. The highest mean translational and rotational errors for achieving target alignment are [Formula: see text] and [Formula: see text]. The mean value and standard deviation of the occurring forces are [Formula: see text] for M. tensor fasciae latae and [Formula: see text] for M. semitendinosus over all trials. These pathways are precise and collision-free, the required forces are minimized, and they are thus regarded as optimal paths. A novel method for planning reduction paths under consideration of collisions and muscular forces is introduced. The results deliver additional knowledge for an appropriate tactical reduction procedure and can provide a basis for further navigated or robot-assisted developments.
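To show the search idea in miniature, the sketch below runs a force-aware A* over a 2D grid standing in for the space of fragment poses: blocked cells model collisions and a per-cell penalty stands in for soft-tissue force, so the returned path is collision-free and minimizes combined distance and force cost. The grid, obstacle mask, and force map are toy stand-ins for the paper's 3D pose space and OpenSim force model.

```python
# Minimal A* sketch of the planning idea: find a collision-free path
# whose cost combines path length and a soft-tissue force penalty.
import heapq
import itertools
import numpy as np

def a_star(force, blocked, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible
    tie = itertools.count()                 # tie-breaker for the heap
    frontier = [(h(start), next(tie), 0.0, start, None)]
    parent, best_g = {}, {start: 0.0}
    while frontier:
        _, _, g, cur, par = heapq.heappop(frontier)
        if cur in parent:                   # already expanded
            continue
        parent[cur] = par
        if cur == goal:                     # reconstruct start -> goal path
            path = [cur]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if not (0 <= nxt[0] < force.shape[0]
                    and 0 <= nxt[1] < force.shape[1]) or blocked[nxt]:
                continue                    # off-grid or collision
            ng = g + 1.0 + force[nxt]       # step cost + force penalty
            if ng < best_g.get(nxt, np.inf):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None                             # no collision-free path exists

rng = np.random.default_rng(5)
force = rng.random((20, 20))                # synthetic muscle-force penalty
blocked = np.zeros((20, 20), dtype=bool)
blocked[5:15, 10] = True                    # a bony obstacle to route around
print(a_star(force, blocked, (0, 0), (19, 19)))
```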
Portable and Error-Free DNA-Based Data Storage.
Yazdi, S M Hossein Tabatabaei; Gabrys, Ryan; Milenkovic, Olgica
2017-07-10
DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.
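One concrete example of encoding data "to avoid costly synthesis and sequencing errors" is run-length-limited coding: each ternary digit selects one of the three bases that differ from the previous base, so homopolymer runs, a dominant nanopore and synthesis error source, never occur, and decoding simply inverts the choice. The rotation scheme below is a generic Goldman-style illustration, not this paper's actual code construction.

```python
# Ternary, run-length-limited bit-to-base mapping (illustrative scheme).
BASES = "ACGT"

def encode(trits, start="A"):
    """Map ternary digits (0-2) to bases, never repeating the previous base."""
    seq, prev = [], start
    for t in trits:
        choices = [b for b in BASES if b != prev]   # 3 legal next bases
        prev = choices[t]
        seq.append(prev)
    return "".join(seq)

def decode(seq, start="A"):
    """Invert encode() by recomputing the 3 legal choices at each step."""
    trits, prev = [], start
    for b in seq:
        choices = [c for c in BASES if c != prev]
        trits.append(choices.index(b))
        prev = b
    return trits

msg = [0, 2, 1, 1, 0, 2, 2, 0]
dna = encode(msg)
assert decode(dna) == msg
assert all(a != b for a, b in zip(dna, dna[1:]))    # no homopolymer runs
print(dna)
```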
Cohen, Michael R.; Smetzer, Judy L.
2015-01-01
These medication errors have occurred in health care facilities at least once. They will happen again—perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers’ names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters’ wishes as to the level of detail included in publications. PMID:26715797
Prevalence and causes of blindness in children in Vietnam.
Limburg, Hans; Gilbert, Clare; Hon, Do Nhu; Dung, Nguyen Chi; Hoang, Tran Huy
2012-02-01
To estimate the prevalence of blindness in children in Vietnam and to assess the major causes. A population-based study sampled children from 16 provinces across Vietnam. The second study examined children attending all blind schools in Vietnam. In 16 provinces, 28 800 children aged 0-15 were sampled. In 28 blind schools, 569 children aged 0-15 were examined. In children not seeing well according to the parents, presenting visual acuity (PVA) was assessed. If PVA was <3/60 in one or both eyes, the child was examined by an ophthalmologist. All children in blind schools were examined by a pediatric ophthalmologist. Blindness was defined as PVA <3/60 in the better eye. Causes of visual loss were classified using the World Health Organization classification. In the population-based study, 22 children had a PVA <3/60 in the better eye, a prevalence of 7.6/10 000 children (95% confidence interval [CI], 4.9-11.8/10 000). Fourteen children had a pinhole visual acuity <3/60 in the better eye, a prevalence of 4.9/10 000 (95% CI, 2.8-8.4/10 000). An estimated 16 400 (95% CI, 10 500-25 300) children were blind from all causes, with 36.4% blind from uncorrected refractive errors. In the blind schools, 411 children had a PVA <3/60 in the better eye and 55.5% were male. Conditions of the retina (24.6%) and cornea (24.0%) predominated. Retinopathy of prematurity (ROP) caused blindness in 32.6% of children younger than 10 years, but in only 6% of older children. The converse was true for corneal scarring and phthisis (14.0% and 27.3%, respectively). All other causes were similar between age groups (53.5% and 66.7%, respectively). More than half of all causes were avoidable. Vietnam is developing very rapidly, and this is impacting health indices. The under-5 mortality rate declined from 65 per 1000 live births in 1980 to 14 per 1000 in 2008. The findings of this study reflect these changes, because the childhood blindness prevalence was relatively low, and the causes show improved control of measles and vitamin A deficiency, as well as increased services for premature babies. Eye care services for children should now focus on refractive errors, cataract, and control of ROP. Copyright © 2012 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
What to use to express the variability of data: Standard deviation or standard error of mean?
Barde, Mohini P; Barde, Prajakt J
2012-07-01
Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. SEM quantifies uncertainty in the estimate of the mean, whereas SD indicates dispersion of the data from the mean. As readers are generally interested in knowing the variability within the sample, descriptive data should be precisely summarized with SD. Use of SEM should be limited to computing confidence intervals (CIs), which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
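The distinction is easy to verify numerically: the sketch below draws samples of increasing size from the same population and prints SD, SEM = SD/√n, and the resulting ~95% CI of the mean. SD stabilizes around the population spread while SEM shrinks with n. The normal population and its parameters are arbitrary illustration choices.

```python
# SD describes sample spread; SEM describes precision of the mean.
import numpy as np

rng = np.random.default_rng(6)
for n in (10, 100, 1000):
    x = rng.normal(loc=120, scale=15, size=n)     # e.g., blood pressure values
    sd = x.std(ddof=1)                            # spread of the sample
    sem = sd / np.sqrt(n)                         # precision of the mean
    lo, hi = x.mean() - 1.96 * sem, x.mean() + 1.96 * sem  # ~95% CI
    print(f"n={n:5d}  SD={sd:5.2f}  SEM={sem:5.2f}  CI=({lo:.1f}, {hi:.1f})")
```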
Common Scientific and Statistical Errors in Obesity Research
George, Brandon J.; Beasley, T. Mark; Brown, Andrew W.; Dawson, John; Dimova, Rositsa; Divers, Jasmin; Goldsby, TaShauna U.; Heo, Moonseong; Kaiser, Kathryn A.; Keith, Scott; Kim, Mimi Y.; Li, Peng; Mehta, Tapan; Oakes, J. Michael; Skinner, Asheley; Stuart, Elizabeth; Allison, David B.
2015-01-01
We identify 10 common errors and problems in the statistical analysis, design, interpretation, and reporting of obesity research and discuss how they can be avoided. The 10 topics are: 1) misinterpretation of statistical significance, 2) inappropriate testing against baseline values, 3) excessive and undisclosed multiple testing and “p-value hacking,” 4) mishandling of clustering in cluster randomized trials, 5) misconceptions about nonparametric tests, 6) mishandling of missing data, 7) miscalculation of effect sizes, 8) ignoring regression to the mean, 9) ignoring confirmation bias, and 10) insufficient statistical reporting. We hope that discussion of these errors can improve the quality of obesity research by helping researchers to implement proper statistical practice and to know when to seek the help of a statistician. PMID:27028280
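Item 3, undisclosed multiple testing, is simple to quantify in simulation: with 20 independent null tests at α = 0.05, the chance of at least one spurious "significant" result is 1 − 0.95^20 ≈ 64%, which a Bonferroni correction restores to roughly 5%. The sketch below is a generic demonstration, not an analysis from the paper.

```python
# Family-wise error inflation under multiple testing, with and without
# a Bonferroni correction (simulated null data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_sims, n_tests, alpha = 2_000, 20, 0.05
hit_raw = hit_bonf = 0
for _ in range(n_sims):
    # 20 two-sample t-tests in which both groups share the same mean
    p = np.array([stats.ttest_ind(rng.normal(size=30),
                                  rng.normal(size=30)).pvalue
                  for _ in range(n_tests)])
    hit_raw += (p < alpha).any()
    hit_bonf += (p < alpha / n_tests).any()

print("family-wise error, uncorrected:", hit_raw / n_sims)   # ~0.64
print("family-wise error, Bonferroni :", hit_bonf / n_sims)  # ~0.05
```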
The Relationship Between Work Commitment, Dynamic, and Medication Error.
Rezaiamin, Abdoolkarim; Pazokian, Marzieh; Zagheri Tafreshi, Mansoureh; Nasiri, Malihe
2017-05-01
The incidence of medication errors in the intensive care unit (ICU) can cause irreparable damage to ICU patients, so it is necessary to find the causes of medication errors in this setting. Work commitment and work dynamics might affect the incidence of medication errors in the ICU. To assess this hypothesis, we performed a descriptive-analytical study of 117 nurses working in the ICUs of educational hospitals in Tehran. The Minick et al., Salyer et al., and Wakefield et al. scales were used to gather data on work commitment, work dynamics, and medication errors, respectively. The findings revealed that high work commitment in ICU nurses was associated with a lower number of medication errors, both intravenous and nonintravenous; the effects of confounding variables were controlled in detecting this relationship. In contrast, no significant association was found between work dynamics and any type of medication error. Although the study did not observe a relationship between work dynamics and the rate of medication errors, training nurses or nursing students to create a dynamic environment in hospitals can increase their interest in the profession and their job satisfaction. They must also be able to cope with a dynamic workplace so that they do not become confused and distracted by frequent changes of orders, care plans, and procedures.
Anderson, Cheryl I; Nelson, Catherine S; Graham, Corey F; Mosher, Benjamin D; Gohil, Kartik N; Morrison, Chet A; Schneider, Paul D; Kepros, John P
2012-09-01
Performance improvement driven by the review of surgical morbidity and mortality is often limited to critiques of individual cases with a focus on individual errors. Little attention has been given to an analysis of why a decision seemed right at the time or to lower-level root causes. The application of scientific performance improvement has the potential to bring to light deeper levels of understanding of surgical decision-making, care processes, and physician psychology. A comprehensive retrospective chart review of previously discussed morbidity and mortality cases was performed with an attempt to identify areas where we could better understand or influence behavior or systems. We avoided focusing on traditional sources of human error such as lapses of vigilance or memory. An iterative process was used to refine the practical areas for possible intervention. Definitions were then created for the major categories and subcategories. Of a sample of 152 presented cases, the root cause for 96 (63%) patient-related events was identified as uni-factorial in origin, with 51 (34%) cases strictly related to patient disease with no other contributing causes. Fifty-six cases (37%) had multiple causes. The remaining 101 cases (66%) were categorized into two areas where the ability to influence outcomes appeared possible. Technical issues were found in 27 (18%) of these cases and 74 (74%) were related to disorganized care problems. Of the 74 cases identified with disorganized care, 42 (42%) were related to failures in critical thinking, 18 (18%) to undisciplined treatment strategies, 8 (8%) to structural failures, and 6 (6%) were related to failures in situational awareness. On a comprehensive review of cases presented at the morbidity and mortality conference, disorganized care played a large role in the cases presented and may have implications for future curriculum changes. The failure to think critically, to deliver disciplined treatment strategies, to recognize structural failures, and to achieve situational awareness contributed to the morbidities and mortalities. Future research may determine if focused training in these areas improves patient outcomes. Copyright © 2012 Elsevier Inc. All rights reserved.
From GCM grid cell to agricultural plot: scale issues affecting modelling of climate impact
Baron, Christian; Sultan, Benjamin; Balme, Maud; Sarr, Benoit; Traore, Seydou; Lebel, Thierry; Janicot, Serge; Dingkuhn, Michael
2005-01-01
General circulation models (GCM) are increasingly capable of making relevant predictions of seasonal and long-term climate variability, thus improving prospects of predicting impact on crop yields. This is particularly important for semi-arid West Africa where climate variability and drought threaten food security. Translating GCM outputs into attainable crop yields is difficult because GCM grid boxes are of larger scale than the processes governing yield, involving partitioning of rain among runoff, evaporation, transpiration, drainage and storage at plot scale. This study analyses the bias introduced to crop simulation when climatic data is aggregated spatially or in time, resulting in loss of relevant variation. A detailed case study was conducted using historical weather data for Senegal, applied to the crop model SARRA-H (version for millet). The study was then extended to a 10°N–17° N climatic gradient and a 31 year climate sequence to evaluate yield sensitivity to the variability of solar radiation and rainfall. Finally, a down-scaling model called LGO (Lebel–Guillot–Onibon), generating local rain patterns from grid cell means, was used to restore the variability lost by aggregation. Results indicate that forcing the crop model with spatially aggregated rainfall causes yield overestimations of 10–50% in dry latitudes, but nearly none in humid zones, due to a biased fraction of rainfall available for crop transpiration. Aggregation of solar radiation data caused significant bias in wetter zones where radiation was limiting yield. Where climatic gradients are steep, these two situations can occur within the same GCM grid cell. Disaggregation of grid cell means into a pattern of virtual synoptic stations having high-resolution rainfall distribution removed much of the bias caused by aggregation and gave realistic simulations of yield. It is concluded that coupling of GCM outputs with plot level crop models can cause large systematic errors due to scale incompatibility. These errors can be avoided by transforming GCM outputs, especially rainfall, to simulate the variability found at plot level. PMID:16433096
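The aggregation bias has a simple mechanical core: the plot-scale water balance is nonlinear in daily rainfall, so averaging rainfall before applying the balance is not the same as averaging the balance itself. The toy sketch below (my illustration using a threshold runoff rule, not the SARRA-H model; all parameters are hypothetical) reproduces the direction of the reported bias.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, n_plots = 120, 25
# hypothetical daily rainfall: mostly dry days with occasional heavy storms
rain = rng.gamma(shape=0.3, scale=20.0, size=(n_days, n_plots))

def infiltration(daily_rain, threshold=20.0):
    """Water entering the soil: rain above `threshold` mm/day runs off."""
    return np.minimum(daily_rain, threshold)

plot_level = infiltration(rain).sum(axis=0).mean()   # infiltrate, then average
aggregated = infiltration(rain.mean(axis=1)).sum()   # average, then infiltrate

print(f"mean infiltration at plot scale : {plot_level:.0f} mm")
print(f"infiltration of averaged rain   : {aggregated:.0f} mm  (overestimate)")
```

Averaging across plots smooths out the intense events that exceed the runoff threshold, so the aggregated series sheds almost no water and the simulated crop appears to receive more transpirable rain than any real plot does.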
Prediction of transmission distortion for wireless video communication: analysis.
Chen, Zhifeng; Wu, Dapeng
2012-03-01
Transmitting video over wireless is a challenging problem since video may be seriously distorted due to packet errors caused by wireless channels. The capability of predicting transmission distortion (i.e., video distortion caused by packet errors) can assist in designing video encoding and transmission schemes that achieve maximum video quality or minimum end-to-end video distortion. This paper is aimed at deriving formulas for predicting transmission distortion. The contribution of this paper is twofold. First, we identify the governing law that describes how the transmission distortion process evolves over time and analytically derive the transmission distortion formula as a closed-form function of video frame statistics, channel error statistics, and system parameters. Second, we identify, for the first time, two important properties of transmission distortion. The first property is that the clipping noise, which is produced by nonlinear clipping, causes decay of propagated error. The second property is that the correlation between motion-vector concealment error and propagated error is negative and has dominant impact on transmission distortion, compared with other correlations. Due to these two properties and elegant error/distortion decomposition, our formula provides not only more accurate prediction but also lower complexity than the existing methods.
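Although the paper's closed-form formula is derived from video frame statistics and channel error statistics, its qualitative behaviour can be sketched with a toy recursion in which each frame adds new packet-loss error and clipping attenuates the propagated error. All parameters below are hypothetical placeholders, not the paper's model.

```python
def predict_distortion(n_frames, per=0.02, d_new=40.0, alpha=0.85):
    """Toy per-frame distortion: new packet-loss error plus propagated error,
    with alpha < 1 standing in for the decay caused by clipping noise."""
    d_prop, trace = 0.0, []
    for _ in range(n_frames):
        total = per * d_new + d_prop  # error induced at + propagated into this frame
        trace.append(total)
        d_prop = alpha * total        # clipping attenuates what propagates onward
    return trace

for k, d in enumerate(predict_distortion(10), start=1):
    print(f"frame {k:2d}: predicted distortion {d:6.3f}")
```

Because alpha < 1, the accumulated distortion converges to a bounded level rather than growing without limit, which is the decay-of-propagated-error property the paper identifies.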
ERIC Educational Resources Information Center
Good, Geoff
1997-01-01
Safety qualifications for adventure education are not intended to prevent the enjoyment of adventure. Good training enables participants to avoid basic errors and tackle greater adventure sooner. Discusses the need to balance individual freedom with responsibility, and how the Lyme Bay canoeing deaths prompted increased concern in Great Britain…
Warenits, Alexandra-Maria; Sterz, Fritz; Schober, Andreas; Ettl, Florian; Magnet, Ingrid Anna Maria; Högler, Sandra; Teubenbacher, Ursula; Grassmann, Daniel; Wagner, Michael; Janata, Andreas; Weihs, Wolfgang
2016-12-01
Extracorporeal life support is a promising concept for selected patients in refractory cardiogenic shock and for advanced life support of persistent ventricular fibrillation cardiac arrest. Animal models of ventricular fibrillation cardiac arrest could help to investigate new treatment strategies for successful resuscitation, and the procedural pitfalls of establishing a rat model of extracorporeal life support resuscitation need to be reported in the spirit of replacement, refinement, and reduction. Anesthetized male Sprague-Dawley rats (350-600 g) (n = 126) underwent cardiac arrest induced with a pacing catheter placed into the right ventricle via a jugular cannula. Rats were resuscitated with extracorporeal life support, mechanical ventilation, defibrillation, and medication. Catheter and cannula explantation was performed if restoration of spontaneous circulation was achieved. All observed serious adverse events (SAEs) occurring in each of the experimental phases were analyzed. Restoration of spontaneous circulation could be achieved in 68 of 126 rats (54%); SAEs were observed in 76 (60%) experiments. Sixty-two (82%) SAEs were related to the experimental procedures and 14 (18%) were avoidable human errors. The most common serious adverse events were caused by insertion or explantation of the venous bypass cannula and resulted in lethal bleeding, cannula dislocation, or air embolism. Establishing an extracorporeal life support model in rats confronted us with technical challenges; even advancements in small-animal critical-care management over the years, delivered by an experienced team, and technical modifications could not entirely prevent such serious adverse events. Reports of serious adverse events that force study exclusions, framed in terms of replacement, refinement, and reduction to spare animal resources, are missing from the literature and are therefore presented here.
Model-based error diffusion for high fidelity lenticular screening.
Lau, Daniel; Smith, Trebor
2006-04-17
Digital halftoning is the process of converting a continuous-tone image into an arrangement of black and white dots for binary display devices such as digital ink-jet and electrophotographic printers. As printers achieve print resolutions exceeding 1,200 dots per inch, it is becoming increasingly important for halftoning algorithms to consider the variations and interactions in the size and shape of printed dots between neighboring pixels. In the case of lenticular screening, where statistically independent images are spatially multiplexed together, ignoring these variations and interactions, such as dot overlap, will result in poor lenticular image quality. To this end, we describe our use of model-based error diffusion for the lenticular screening problem, in which statistical independence between component images is achieved by restricting the diffusion of error to pixels of the same component image; to avoid instabilities, the proposed approach involves a novel error-clipping procedure.
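The restriction-plus-clipping idea can be sketched compactly. The toy implementation below reflects my reading of the abstract, not the authors' model-based algorithm: the Floyd-Steinberg weights, the two-view column interlacing, and the clip limit are all assumptions. It diffuses quantization error only to pixels belonging to the same lenticular component image and clips the stored error for stability.

```python
import numpy as np

def lenticular_diffuse(img, n_views=2, clip=0.5):
    """img: float array in [0, 1]; columns interlaced across n_views images."""
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
            err = np.clip(f[y, x] - out[y, x], -clip, clip)  # error clipping
            # diffuse only to neighbours in the same component image, i.e.
            # columns offset by multiples of n_views (Floyd-Steinberg weights)
            for dy, dx, wgt in ((0, n_views, 7/16), (1, -n_views, 3/16),
                                (1, 0, 5/16), (1, n_views, 1/16)):
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    f[yy, xx] += err * wgt
    return out

halftone = lenticular_diffuse(np.tile(np.linspace(0, 1, 64), (32, 1)))
```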
Random Weighting, Strong Tracking, and Unscented Kalman Filter for Soft Tissue Characterization.
Shin, Jaehyun; Zhong, Yongmin; Oetomo, Denny; Gu, Chengfan
2018-05-21
This paper presents a new nonlinear filtering method based on the Hunt-Crossley model for online nonlinear soft tissue characterization. The method overcomes the performance degradation of the unscented Kalman filter under contact model error. It adopts the concept of Mahalanobis distance to identify contact model error, and further incorporates a scaling factor in the predicted state covariance to compensate for the identified model error. This scaling factor is determined according to the principle of innovation orthogonality, avoiding the cumbersome computation of the Jacobian matrix, and the random weighting concept is adopted to improve the estimation accuracy of the innovation covariance. A master-slave robotic indentation system was developed to validate the performance of the proposed method. Simulation and experimental results, as well as comparison analyses, demonstrate the efficacy of the proposed method for online characterization of soft tissue parameters in the presence of contact model error.
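The detect-then-inflate step at the heart of this family of filters can be sketched as follows. Note the simplifications: this sketch uses a linear measurement model and a chi-square gate with a simple inflation rule in place of the paper's innovation-orthogonality and random-weighting machinery, so it illustrates the idea rather than the authors' method.

```python
import numpy as np
from scipy.stats import chi2

def adaptive_update(x_pred, P_pred, z, H, R, alpha=0.05):
    """One Kalman update with Mahalanobis-gated covariance inflation."""
    innov = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    d2 = innov @ np.linalg.solve(S, innov)        # squared Mahalanobis distance
    gate = chi2.ppf(1 - alpha, df=z.size)
    if d2 > gate:                                 # contact-model error detected
        lam = d2 / gate                           # hypothetical scaling rule
        P_pred = lam * P_pred                     # inflate predicted covariance
        S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ innov
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P
```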
Error management for musicians: an interdisciplinary conceptual framework
Kruse-Weber, Silke; Parncutt, Richard
2014-01-01
Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians’ generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly – or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and musicians at all levels. PMID:25120501
Self-calibration method without joint iteration for distributed small satellite SAR systems
NASA Astrophysics Data System (ADS)
Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan
2013-12-01
The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to the unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions under large position errors, since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one based on these two observations. In this modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, the phase error estimation can be performed well independent of position errors. Finally, position errors are estimated based on a Taylor-series expansion. Meanwhile, the joint iteration between gain-phase error estimation and position error estimation is not required. Therefore, the problem of suboptimal convergence that occurs in the conventional method is avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verify the effectiveness of the modified method.
A survey of community members' perceptions of medical errors in Oman
Al-Mandhari, Ahmed S; Al-Shafaee, Mohammed A; Al-Azri, Mohammed H; Al-Zakwani, Ibrahim S; Khan, Mushtaq; Al-Waily, Ahmed M; Rizvi, Syed
2008-01-01
Background Errors have been the concern of providers and consumers of health care services. However, consumers' perception of medical errors in developing countries is rarely explored. The aim of this study is to assess community members' perceptions about medical errors and to analyse the factors affecting this perception in one Middle East country, Oman. Methods Face to face interviews were conducted with heads of 212 households in two villages in North Al-Batinah region of Oman selected because of close proximity to the Sultan Qaboos University (SQU), Muscat, Oman. Participants' perceived knowledge about medical errors was assessed. Responses were coded and categorised. Analyses were performed using Pearson's χ2, Fisher's exact tests, and multivariate logistic regression model wherever appropriate. Results Seventy-eight percent (n = 165) of participants believed they knew what was meant by medical errors. Of these, 34% and 26.5% related medical errors to wrong medications or diagnoses, respectively. Understanding of medical errors was correlated inversely with age and positively with family income. Multivariate logistic regression revealed that a one-year increase in age was associated with a 4% reduction in perceived knowledge of medical errors (CI: 1% to 7%; p = 0.045). The study found that 49% of those who believed they knew the meaning of medical errors had experienced such errors. The most common consequence of the errors was severe pain (45%). Of the 165 informed participants, 49% felt that an uncaring health care professional was the main cause of medical errors. Younger participants were able to list more possible causes of medical errors than were older subjects (Incident Rate Ratio of 0.98; p < 0.001). Conclusion The majority of participants believed they knew the meaning of medical errors. Younger participants were more likely to be aware of such errors and could list one or more causes. PMID:18664245
A system dynamic simulation model for managing the human error in power tools industries
NASA Astrophysics Data System (ADS)
Jamil, Jastini Mohd; Shaharanee, Izwan Nizal Mohd
2017-10-01
In the modern, competitive world of today, every organization faces situations in which work does not proceed as planned and is delayed when problems occur, and human error is often cited as the culprit. Errors made by employees force them to spend additional time identifying and checking for the error, which in turn can affect the normal operations of the company as well as its reputation. Employees are a key element of the organization in running all of its activities; hence, their work performance is a crucial factor in organizational success. The purpose of this study is to identify the factors that cause the increase in errors made by employees in the organization, using a system dynamics approach. The target group of this study is employees of the Regional Material Field team in the purchasing department of power tools industries. Questionnaires were distributed to the respondents to obtain their perceptions of the root causes of errors made by employees in the company. A system dynamics model was developed to simulate the factors behind the increase in errors made by employees and its impact. The findings show that the increase in errors made by employees is generally caused by workload, work capacity, job stress, motivation, and employee performance, and that the problem could be alleviated by increasing the number of employees in the organization.
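The feedback structure the study points to (workload driving stress, stress degrading performance, errors returning as rework) can be captured in a few lines. The stock-and-flow sketch below is my own illustration with hypothetical coefficients, not the paper's model; it shows the overload spiral and how adding staff damps it.

```python
def simulate(weeks=52, staff=10, incoming_work=100.0):
    """Toy system-dynamics loop: backlog is the stock; stress, performance,
    error rate, and rework close the feedback loop."""
    backlog, error_log = 0.0, []
    for _ in range(weeks):
        capacity = staff * 10.0                       # work units per week
        workload = incoming_work + backlog
        stress = max(0.0, workload / capacity - 1.0)  # overload beyond capacity
        performance = 1.0 / (1.0 + stress)            # stress degrades performance
        error_rate = 0.02 + 0.10 * stress             # errors grow with stress
        done = min(workload, capacity * performance)
        rework = done * error_rate                    # errors return as rework
        backlog = workload - done + rework
        error_log.append(error_rate)
    return error_log

for staff in (10, 12):
    print(staff, "staff -> final error rate", round(simulate(staff=staff)[-1], 3))
```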
Avoiding Human Error in Mission Operations: Cassini Flight Experience
NASA Technical Reports Server (NTRS)
Burk, Thomas A.
2012-01-01
Operating spacecraft is a never-ending challenge and the risk of human error is ever-present. Many missions have been significantly affected by human error on the part of ground controllers. The Cassini mission at Saturn has not been immune to human error, but Cassini operations engineers use tools and follow processes that find and correct most human errors before they reach the spacecraft. What is needed are skilled engineers with good technical knowledge, good interpersonal communications, quality ground software, regular peer reviews, up-to-date procedures, as well as careful attention to detail and the discipline to test and verify all commands that will be sent to the spacecraft. Two areas of special concern are changes to flight software and response to in-flight anomalies. The Cassini team has a lot of practical experience in all these areas and they have found that well-trained engineers with good tools who follow clear procedures can catch most errors before they get into command sequences to be sent to the spacecraft. Finally, having a robust and fault-tolerant spacecraft that allows ground controllers excellent visibility of its condition is the most important way to ensure human error does not compromise the mission.
Fargen, Kyle M; Friedman, William A
2014-01-01
During the last 2 decades, there has been a shift in the U.S. health care system towards improving the quality of health care provided by enhancing patient safety and reducing medical errors. Unfortunately, surgical complications, patient harm events, and malpractice claims remain common in the field of neurosurgery. Many of these events are potentially avoidable. There are an increasing number of publications in the medical literature in which authors address cognitive errors in diagnosis and treatment and strategies for reducing such errors, but these are for the most part absent in the neurosurgical literature. The purpose of this article is to highlight the complexities of medical decision making to a neurosurgical audience, with the hope of providing insight into the biases that lead us towards error and strategies to overcome our innate cognitive deficiencies. To accomplish this goal, we review the current literature on medical errors and just culture, explain the dual process theory of cognition, identify common cognitive errors affecting neurosurgeons in practice, review cognitive debiasing strategies, and finally provide simple methods that can be easily assimilated into neurosurgical practice to improve clinical decision making. Copyright © 2014 Elsevier Inc. All rights reserved.
Khan, Anam M; Urquia, Marcelo; Kornas, Kathy; Henry, David; Cheng, Stephanie Y; Bornbaum, Catherine
2017-01-01
Background Immigrants have been shown to possess a health advantage, yet are also more likely to reside in arduous economic conditions. Little is known about if and how the socioeconomic gradient for all-cause, premature and avoidable mortality differs according to immigration status. Methods Using several linked population-based vital and demographic databases from Ontario, we examined a cohort of all deaths in the province between 2002 and 2012. We constructed count models, adjusted for relevant covariates, to attain age-adjusted mortality rates and rate ratios for all-cause, premature and avoidable mortality across income quintile in immigrants and long-term residents, stratified by sex. Results A downward gradient in age-adjusted all-cause mortality was observed with increasing income quintile, in immigrants (males: Q5: 13.32, Q1: 20.18; females: Q5: 9.88, Q1: 12.51) and long-term residents (males: Q5: 33.25, Q1: 57.67; females: Q5: 22.31, Q1: 36.76). Comparing the lowest and highest income quintiles, male and female immigrants had a 56% and 28% lower all-cause mortality rate, respectively. Similar trends were observed for premature and avoidable mortality. Although immigrants had consistently lower mortality rates compared with long-term residents, trends only differed statistically across immigration status for females (p<0.05). Conclusions This study illustrated the presence of income disparities as it pertains to all-cause, premature, and avoidable mortality, irrespective of immigration status. Additionally, the immigrant health advantage was observed and income disparities were less pronounced in immigrants compared with long-term residents. These findings support the need to examine the factors that drive inequalities in mortality within and across immigration status. PMID:28289039
NASA Astrophysics Data System (ADS)
Zhao, Chen-Guang; Tan, Jiu-Bin; Liu, Tao
2010-09-01
The mechanism by which a non-polarizing beam splitter (NPBS) with asymmetrical transfer coefficients causes rotation of the polarization direction is explained in principle, and the measurement nonlinearity caused by the NPBS is analyzed using Jones matrix theory. Theoretical calculations show that the nonlinear error changes periodically, and that the error period and peak values increase with the deviation between the transmissivities of the p-polarization and s-polarization states. When the transmissivity of the p-polarization is 53% and that of the s-polarization is 48%, the maximum error reaches 2.7 nm. The imperfection of the NPBS is one of the main error sources in a simultaneous phase-shifting polarization interferometer, and its influence cannot be neglected in nanoscale ultra-precision measurement.
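For intuition about the magnitude, a toy quadrature model suffices: if the beam-splitter asymmetry unbalances the gains of the cosine and sine interference channels in proportion to Tp and Ts, the recovered phase acquires a periodic nonlinearity. This is my simplification, not the paper's Jones-matrix derivation, but with the quoted transmissivities it lands near the reported 2.7 nm peak (the He-Ne wavelength is an assumption).

```python
import numpy as np

Tp, Ts = 0.53, 0.48               # transmissivities quoted in the abstract
lam = 632.8                       # assumed He-Ne wavelength, nm
phi = np.linspace(0, 2 * np.pi, 10000)

# unbalanced quadrature channels -> periodic phase nonlinearity
phi_meas = np.arctan2(Ts * np.sin(phi), Tp * np.cos(phi))
err_rad = (phi_meas - phi + np.pi) % (2 * np.pi) - np.pi
err_nm = err_rad * lam / (4 * np.pi)   # interferometer phase-to-length factor

print(f"error period: pi (twice per fringe); peak = {np.abs(err_nm).max():.1f} nm")
```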
Transient Faults in Computer Systems
NASA Technical Reports Server (NTRS)
Masson, Gerald M.
1993-01-01
A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.
NASA Technical Reports Server (NTRS)
Clinton, N. J. (Principal Investigator)
1980-01-01
Labeling errors made in the large area crop inventory experiment transition year estimates by Earth Observation Division image analysts are identified and quantified. The analysis was made from a subset of blind sites in six U.S. Great Plains states (Oklahoma, Kansas, Montana, Minnesota, North and South Dakota). The image interpretation was basically well done, resulting in a total omission error rate of 24 percent and a commission error rate of 4 percent. The largest amount of error was caused by factors beyond the control of the analysts, who were following the interpretation procedures. The odd signatures, the largest error cause group, occurred mostly in areas of moisture abnormality. Multicrop labeling was tabulated, showing the distribution of labeling for all crops.
Torres, Luis G.; Kuntz, Alan; Gilbert, Hunter B.; Swaney, Philip J.; Hendrick, Richard J.; Webster, Robert J.; Alterovitz, Ron
2015-01-01
Concentric tube robots are thin, tentacle-like devices that can move along curved paths and can potentially enable new, less invasive surgical procedures. Safe and effective operation of this type of robot requires that the robot’s shaft avoid sensitive anatomical structures (e.g., critical vessels and organs) while the surgeon teleoperates the robot’s tip. However, the robot’s unintuitive kinematics makes it difficult for a human user to manually ensure obstacle avoidance along the entire tentacle-like shape of the robot’s shaft. We present a motion planning approach for concentric tube robot teleoperation that enables the robot to interactively maneuver its tip to points selected by a user while automatically avoiding obstacles along its shaft. We achieve automatic collision avoidance by precomputing a roadmap of collision-free robot configurations based on a description of the anatomical obstacles, which are attainable via volumetric medical imaging. We also mitigate the effects of kinematic modeling error in reaching the goal positions by adjusting motions based on robot tip position sensing. We evaluate our motion planner on a teleoperated concentric tube robot and demonstrate its obstacle avoidance and accuracy in environments with tubular obstacles. PMID:26413381
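The precompute-then-teleoperate idea rests on a standard probabilistic roadmap. The sketch below is a placeholder illustration: it samples point configurations in a unit cube against spherical obstacles, whereas the real system plans over concentric tube kinematics with obstacles segmented from volumetric imaging; the sampler, dimensions, and connection rule are all assumptions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

def collision_free(q, obstacles, radius=0.1):
    """True if configuration q clears every (center, radius) obstacle."""
    return all(np.linalg.norm(q - c) > r + radius for c, r in obstacles)

def build_roadmap(n_samples, obstacles, connect_dist=0.3):
    nodes = [q for q in rng.uniform(0, 1, (n_samples, 3))
             if collision_free(q, obstacles)]
    edges = {i: [] for i in range(len(nodes))}
    for i, j in combinations(range(len(nodes)), 2):
        if np.linalg.norm(nodes[i] - nodes[j]) < connect_dist:
            edges[i].append(j)   # edge collision checks omitted for brevity
            edges[j].append(i)
    return nodes, edges

# hypothetical spherical obstacles (center, radius) from segmented imaging
anatomy = [(np.array([0.5, 0.5, 0.5]), 0.15)]
nodes, edges = build_roadmap(300, anatomy)
print(len(nodes), "collision-free configurations in the roadmap")
```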
Proximal antecedents and correlates of adopted error approach: a self-regulatory perspective.
Van Dyck, Cathy; Van Hooft, Edwin; De Gilder, Dick; Liesveld, Lillian
2010-01-01
The current study aims to further investigate earlier established advantages of an error mastery approach over an error aversion approach. The two main purposes of the study relate to (1) self-regulatory traits (i.e., goal orientation and action-state orientation) that may predict which error approach (mastery or aversion) is adopted, and (2) proximal, psychological processes (i.e., self-focused attention and failure attribution) that relate to adopted error approach. In the current study participants' goal orientation and action-state orientation were assessed, after which they worked on an error-prone task. Results show that learning goal orientation related to error mastery, while state orientation related to error aversion. Under a mastery approach, error occurrence did not result in cognitive resources "wasted" on self-consciousness. Rather, attention went to internal-unstable, thus controllable, improvement oriented causes of error. Participants that had adopted an aversion approach, in contrast, experienced heightened self-consciousness and attributed failure to internal-stable or external causes. These results imply that when working on an error-prone task, people should be stimulated to take on a mastery rather than an aversion approach towards errors.
A digital ISO expansion technique for digital cameras
NASA Astrophysics Data System (ADS)
Yoo, Youngjin; Lee, Kangeui; Choe, Wonhee; Park, SungChan; Lee, Seong-Deok; Kim, Chang-Yong
2010-01-01
Market demand for digital cameras with higher sensitivity under low-light conditions is increasing remarkably, and the digital camera market has become a tough race to provide higher ISO capability. In this paper, we explore an approach for increasing the maximum ISO capability of digital cameras without changing any structure of the image sensor or CFA. Our method is applied directly to the raw Bayer-pattern CFA image to avoid the non-linearity and noise amplification that are usually introduced after the ISP (Image Signal Processor) of digital cameras. The proposed method fuses multiple short-exposure images which are noisy, but less blurred. Our approach is designed to avoid the ghost artifact caused by hand-shake and object motion. In order to achieve the desired ISO image quality, both the low-frequency chromatic noise and the fine-grain noise that usually appear in high-ISO images are removed, and we then modify the different layers created by a two-scale non-linear decomposition of the image. Once our approach is performed on an input Bayer-pattern CFA image, the resultant Bayer image is further processed by the ISP to obtain a fully processed RGB image. The performance of our proposed approach is evaluated by comparing SNR (Signal to Noise Ratio), MTF50 (Modulation Transfer Function), color error ΔE*ab and visual quality with reference images whose exposure times are properly extended to a variety of target sensitivities.
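The core fusion step can be sketched simply. The toy code below is far simpler than the paper's pipeline (no two-scale decomposition or chromatic-noise removal; the rejection threshold and test data are hypothetical): it averages several short-exposure raw frames to cut noise while rejecting samples that disagree with a reference frame, which is the basic defence against ghosting from hand-shake or object motion.

```python
import numpy as np

def fuse_bayer_frames(frames, ghost_thresh=12.0):
    """frames: list of raw Bayer arrays (float); first frame is the reference."""
    ref = frames[0]
    acc = ref.copy()
    cnt = np.ones_like(ref)
    for f in frames[1:]:
        keep = np.abs(f - ref) < ghost_thresh   # motion/ghost rejection mask
        acc[keep] += f[keep]
        cnt[keep] += 1
    return acc / cnt                            # averaged where consistent

rng = np.random.default_rng(3)
scene = rng.uniform(0, 255, (8, 8))                          # hypothetical scene
shots = [scene + rng.normal(0, 8, scene.shape) for _ in range(4)]
fused = fuse_bayer_frames(shots)
print("noise std, single frame:", np.std(shots[0] - scene).round(1),
      "| fused:", np.std(fused - scene).round(1))
```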
Development of a Precise Polarization Modulator for UV Spectropolarimetry
NASA Astrophysics Data System (ADS)
Ishikawa, S.; Shimizu, T.; Kano, R.; Bando, T.; Ishikawa, R.; Giono, G.; Tsuneta, S.; Nakayama, S.; Tajima, T.
2015-10-01
We developed a polarization modulation unit (PMU) to rotate a waveplate continuously in order to observe solar magnetic fields by spectropolarimetry. The non-uniformity of the PMU rotation may cause errors in the measurement of the degree of linear polarization (scale error) and its angle (crosstalk between Stokes-Q and -U), although it does not cause an artificial linear polarization signal (spurious polarization). We rotated a waveplate with the PMU to obtain a polarization modulation curve and estimated the scale error and crosstalk caused by the rotation non-uniformity. The estimated scale error and crosstalk were both < 0.01%. This PMU will be used as a waveplate motor for the Chromospheric Lyman-Alpha SpectroPolarimeter (CLASP) rocket experiment. We confirm that the PMU performs and functions sufficiently well for CLASP.
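The error budget above can be reproduced numerically. The sketch below is my illustration (the jitter model and polarization values are hypothetical): it modulates a 10% linearly polarized signal at four times the waveplate angle, demodulates against the ideal rotation angle, and reads off the scale error and Q-to-U crosstalk. Setting the input polarization to zero shows that rotation jitter alone produces no spurious polarization.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4096
theta = np.linspace(0, 8 * np.pi, n, endpoint=False)  # ideal rotation, 4 turns
jitter = np.cumsum(rng.normal(0, 2e-3, n))            # hypothetical non-uniformity
Q_true, U_true = 0.10, 0.0                            # 10% linear polarization

# modulation curve produced by the (jittered) rotating waveplate
I = 0.5 * (1 + Q_true * np.cos(4 * (theta + jitter))
             + U_true * np.sin(4 * (theta + jitter)))

# demodulate assuming the *ideal* angle
Q = 4 * np.mean(I * np.cos(4 * theta))
U = 4 * np.mean(I * np.sin(4 * theta))
print(f"scale error: {(Q - Q_true) / Q_true:+.4%}, crosstalk Q->U: {U / Q_true:+.4%}")
```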
Blood transfusion sampling and a greater role for error recovery.
Oldham, Jane
Patient identification errors in pre-transfusion blood sampling ('wrong blood in tube') are a persistent area of risk. These errors can potentially result in life-threatening complications. Current measures to address root causes of incidents and near misses have not resolved this problem and there is a need to look afresh at this issue. PROJECT PURPOSE: This narrative review of the literature is part of a wider system-improvement project designed to explore and seek a better understanding of the factors that contribute to transfusion sampling error as a prerequisite to examining current and potential approaches to error reduction. A broad search of the literature was undertaken to identify themes relating to this phenomenon. KEY DISCOVERIES: Two key themes emerged from the literature. Firstly, despite multi-faceted causes of error, the consistent element is the ever-present potential for human error. Secondly, current focus on error prevention could potentially be augmented with greater attention to error recovery. Exploring ways in which clinical staff taking samples might learn how to better identify their own errors is proposed to add to current safety initiatives.
Sheehan, David V; Giddens, Jennifer M; Sheehan, Kathy Harnett
2014-09-01
Standard international classification criteria require that classification categories be comprehensive to avoid type II error. Categories should be mutually exclusive and definitions should be clear and unambiguous (to avoid type I and type II errors). In addition, the classification system should be robust enough to last over time and provide comparability between data collections. This article was designed to evaluate the extent to which the classification system contained in the United States Food and Drug Administration 2012 Draft Guidance for the prospective assessment and classification of suicidal ideation and behavior in clinical trials meets these criteria. A critical review is used to assess the extent to which the proposed categories contained in the Food and Drug Administration 2012 Draft Guidance are comprehensive, unambiguous, and robust. Assumptions that underlie the classification system are also explored. The Food and Drug Administration classification system contained in the 2012 Draft Guidance does not capture the full range of suicidal ideation and behavior (type II error). Definitions, moreover, are frequently ambiguous (susceptible to multiple interpretations), and the potential for misclassification (type I and type II errors) is compounded by frequent mismatches in category titles and definitions. These issues have the potential to compromise data comparability within clinical trial sites, across sites, and over time. These problems need to be remedied because of the potential for flawed data output and consequent threats to public health, to research on the safety of medications, and to the search for effective medication treatments for suicidality.
Identification of driver errors : overview and recommendations
DOT National Transportation Integrated Search
2002-08-01
Driver error is cited as a contributing factor in most automobile crashes, and although estimates vary by source, driver error is cited as the principal cause of from 45 to 75 percent of crashes. However, the specific errors that lead to crashes, and...
Dupuis, O; Dupont, C; Gaucherand, P; Rudigoz, R-C; Fernandez, M P; Peigne, E; Labaune, J M
2007-09-01
To determine the frequency of avoidable neonatal neurological damage. We carried out a retrospective study from January 1st to December 31st 2003, including all children transferred from a level I or II maternity unit for suspected neurological damage (SND). Only cases confirmed by a persistent abnormality on clinical examination, EEG, transfontanelle ultrasound scan, CT scan or cerebral MRI were retained. Each case was studied in detail by an expert committee and classified as "avoidable", "unavoidable" or "of indeterminate avoidability." The management of "avoidable" cases was analysed to identify potentially avoidable factors (PAFs): not taking into account a major risk factor (PAF1), diagnostic errors (PAF2), suboptimal decision to delivery interval (PAF3) and mechanical complications (PAF4). In total, 77 children were transferred for SND; two cases were excluded (inaccessible medical files). Forty of the 75 cases of SND included were confirmed: 29 were "avoidable", 8 were "unavoidable" and 3 were "of indeterminate avoidability". Analysis of the 29 avoidable cases identified 39 PAFs: 18 PAF1, 5 PAF2, 10 PAF3 and 6 PAF4. Five had no classifiable PAF (0 death), 11 children had one type of PAF (one death), 11 children had two types of PAF (3 deaths), 2 had three types of PAF (2 deaths). Three quarters of the confirmed cases of neurological damage occurring in levels I and II maternity units of the Aurore network in 2003 were avoidable. Five out of six cases resulting in early death involved several potentially avoidable factors.
Errors in imaging patients in the emergency setting.
Pinto, Antonio; Reginelli, Alfonso; Pinto, Fabio; Lo Re, Giuseppe; Midiri, Federico; Muzj, Carlo; Romano, Luigia; Brunese, Luca
2016-01-01
Emergency and trauma care produces a "perfect storm" for radiological errors: uncooperative patients, inadequate histories, time-critical decisions, concurrent tasks and often junior personnel working after hours in busy emergency departments. The main cause of diagnostic errors in the emergency department is the failure to correctly interpret radiographs, and the majority of diagnoses missed on radiographs are fractures. Missed diagnoses potentially have important consequences for patients, clinicians and radiologists. Radiologists play a pivotal role in the diagnostic assessment of polytrauma patients and of patients with non-traumatic craniothoracoabdominal emergencies, and key elements to reduce errors in the emergency setting are knowledge, experience and the correct application of imaging protocols. This article aims to highlight the definition and classification of errors in radiology, the causes of errors in emergency radiology and the spectrum of diagnostic errors in radiography, ultrasonography and CT in the emergency setting.
Summary of avoidable cancers in the Nordic countries.
Olsen, J H; Andersen, A; Dreyer, L; Pukkala, E; Tryggvadottir, L; Gerhardsson de Verdier, M; Winther, J F
1997-01-01
An overview is given of the most important known causes of cancer in the five Nordic countries and the resulting number of cancers that are potentially avoidable. The main causes include active and passive smoking, alcohol consumption, exposure to asbestos and other occupational carcinogens, solar and ionizing radiation, obesity, human papillomavirus infection in the female genital tract and infection with Helicobacter pylori. The organs most commonly affected are those of the respiratory system, the upper digestive tract and stomach, skin, the lower urinary tract and the uterine cervix. Annually, more than 18,000 cancers in men and 11,000 in women in the Nordic populations could be avoided by eliminating exposure to known carcinogens which is equivalent to 33% and 20% of all cancers arising in men and women, respectively, around the year 2000. Smoking habits account for a little more than half of these avoidable cases. Estimates of avoidable cancers are given for each Nordic country, separately.
Improving AIRS Radiance Spectra in High Contrast Scenes Using MODIS
NASA Technical Reports Server (NTRS)
Pagano, Thomas S.; Aumann, Hartmut H.; Manning, Evan M.; Elliott, Denis A.; Broberg, Steven E.
2015-01-01
The Atmospheric Infrared Sounder (AIRS) on the EOS Aqua Spacecraft was launched on May 4, 2002. AIRS acquires hyperspectral infrared radiances in 2378 channels ranging in wavelength from 3.7-15.4 microns, with a spectral resolving power (λ/Δλ) of better than 1200, and a spatial resolution of 13.5 km with global daily coverage. The AIRS is designed to measure temperature and water vapor profiles for improvement in weather forecast accuracy and improved understanding of climate processes. As with most instruments, the AIRS Point Spread Functions (PSFs) are not the same for all detectors. When viewing a non-uniform scene, this causes a significant radiometric error in some channels that is scene dependent and cannot be removed without knowledge of the underlying scene. The magnitude of the error depends on the combination of non-uniformity of the AIRS spatial response for a given channel and the non-uniformity of the scene, but is typically only noticeable in about 1% of the scenes and about 10% of the channels. The current solution is to avoid those channels when performing geophysical retrievals. In this effort we use data from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument to provide information on the scene uniformity that is used to correct the AIRS data. For the vast majority of channels and footprints the technique works extremely well when compared to a Principal Component (PC) reconstruction of the AIRS channels. In some cases where the scene has high inhomogeneity in an irregular pattern, and in some channels, the method can actually degrade the spectrum. Most of the degraded channels appear to be slightly affected by random noise introduced in the process, but those with larger degradation may be affected by alignment errors in the AIRS relative to MODIS or uncertainties in the PSF. Despite these errors, the methodology shows the ability to correct AIRS radiances in non-uniform scenes under some of the worst case conditions and improves the ability to match AIRS and MODIS radiances in non-uniform scenes.
Barnett, Adrian G; Zardo, Pauline; Graves, Nicholas
2018-01-01
The "publish or perish" incentive drives many researchers to increase the quantity of their papers at the cost of quality. Lowering quality increases the number of false positive errors which is a key cause of the reproducibility crisis. We adapted a previously published simulation of the research world where labs that produce many papers are more likely to have "child" labs that inherit their characteristics. This selection creates a competitive spiral that favours quantity over quality. To try to halt the competitive spiral we added random audits that could detect and remove labs with a high proportion of false positives, and also improved the behaviour of "child" and "parent" labs who increased their effort and so lowered their probability of making a false positive error. Without auditing, only 0.2% of simulations did not experience the competitive spiral, defined by a convergence to the highest possible false positive probability. Auditing 1.35% of papers avoided the competitive spiral in 71% of simulations, and auditing 1.94% of papers in 95% of simulations. Audits worked best when they were only applied to established labs with 50 or more papers compared with labs with 25 or more papers. Adding a ±20% random error to the number of false positives to simulate peer reviewer error did not reduce the audits' efficacy. The main benefit of the audits was via the increase in effort in "child" and "parent" labs. Audits improved the literature by reducing the number of false positives from 30.2 per 100 papers to 12.3 per 100 papers. Auditing 1.94% of papers would cost an estimated $15.9 million per year if applied to papers produced by National Institutes of Health funding. Our simulation greatly simplifies the research world and there are many unanswered questions about if and how audits would work that can only be addressed by a trial of an audit.
Distortion of Digital Image Correlation (DIC) Displacements and Strains from Heat Waves
Jones, E. M. C.; Reu, P. L.
2017-11-28
“Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. There are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements, to motivate the investigation of heat waves in more detail. The theoretical background of how light is refracted through heat waves is then presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. Eliminating or mitigating the effects of heat sources in a DIC experiment is therefore the best solution for minimizing errors caused by heat waves.
Singh, Nakul; Eeda, Shiva Shankar; Gudapati, Bala Krishna; Reddy, Srinivasa; Kanade, Pushkar; Shantha, Ghanshyam Palamaner Subash; Rani, Padmaja Kumari; Chakrabarti, Subhabrata; Khanna, Rohit C
2014-01-01
To assess the prevalence of blindness and visual impairment (VI), their associated causes and underlying risk factors in three tribal areas of Andhra Pradesh, India and compare this data in conjunction with data from other countries with low and middle income settings. Using a validated Rapid Assessment of Avoidable Blindness methodology, a two stage sampling survey was performed in these areas involving probability proportionate to size sampling and compact segment sampling methods. Blindness, VI and severe visual impairment (SVI) were defined as per the WHO guidelines and Indian definitions. Based on a prior enumeration, 7281 (97.1%) subjects were enrolled (mean age = 61.0+/-7.9 years). Based on the presenting visual acuity (PVA), the prevalences of VI, SVI and blindness were 16.9% (95% CI: 15.7-18.1), 2.9% (95% CI: 2.5-3.4), and 2.3% (95% CI: 1.9-2.7), respectively. When based on the Pinhole corrected visual acuity (PCVA), the prevalences were lower in VI (6.2%, 95% CI: 5.4-6.9), SVI (1.5%, 95% CI: 1.2-1.9) and blindness (2.1%, 95% CI: 1.7-2.5). Refractive error was the major cause of VI (71.4%), whereas, cataract was the major cause of SVI and blindness (70.3%). Based on the PVA, the odds ratio (OR) of blindness increased in the age groups of 60-69 years (OR = 3.8, 95% CI: 2.8, 5.1), 70-79 years (OR = 10.6, 95% CI: 7.2, 15.5) and 80 years and above (OR = 30.7, 95% CI: 19.2, 49). The ORs were relatively higher in females (OR = 1.3, 95% CI: 1.0, 1.6) and illiterate subjects (OR = 4.3, 95% CI: 2.2, 8.5), but lower in those wearing glasses (OR = 0.2, 95% CI: 0.1, 0.4). This is perhaps the first study to assess the prevalence of blindness and VI in these tribal regions and the majority of the causes of blindness and SVI were avoidable (88.5%). These findings may be useful for planning eye care services in these underserved regions.
Risk-Aware Planetary Rover Operation: Autonomous Terrain Classification and Path Planning
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Fuchs, Thomas J.; Steffy, Amanda; Maimone, Mark; Yen, Jeng
2015-01-01
Identifying and avoiding terrain hazards (e.g., soft soil and pointy embedded rocks) are crucial for the safety of planetary rovers. This paper presents a newly developed ground-based Mars rover operation tool that mitigates terrain risks by automatically identifying hazards on the terrain, evaluating their risks, and suggesting to operators safe path options that avoid potential risks while achieving specified goals. The tool benefits rover operations by reducing operation cost, reducing the cognitive load of rover operators, preventing human errors, and, most importantly, significantly reducing the risk of losing a rover.
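The abstract does not describe the tool's algorithm in detail, but the core idea of trading path length against terrain risk can be sketched as a risk-weighted graph search. In the sketch below, the grid, hazard values, and the weight `lam` are illustrative assumptions, not the tool's actual terrain classifier or cost model.

```python
# Risk-weighted Dijkstra search on a 2D grid: each step costs its
# length plus a penalty proportional to the destination cell's risk.
import heapq

def plan(risk, start, goal, lam=5.0):
    """risk: 2D list of hazard scores; returns a start-to-goal path."""
    rows, cols = len(risk), len(risk[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 + lam * risk[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal                 # assumes the goal is reachable
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# 0.0 = benign terrain, 1.0 = hazardous (e.g., soft soil, embedded rock)
risk = [[0.0, 0.0, 1.0, 0.0],
        [0.0, 1.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0]]
print(plan(risk, (0, 0), (0, 3)))
```

With the assumed weight, the planner detours around hazardous cells whenever the extra distance costs less than the risk penalty it avoids; raising `lam` makes the plan more risk-averse.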
Underlying Cause(s) of Letter Perseveration Errors
Fischer-Baum, Simon; Rapp, Brenda
2011-01-01
Perseverations, the inappropriate intrusion of elements from a previous response into a current response, are commonly observed in individuals with acquired deficits. This study specifically investigates the contribution of failure-to-activate and failure-to-inhibit deficits to the generation of letter perseveration errors in acquired dysgraphia. We provide evidence from the performance of 12 dysgraphic individuals indicating that a failure to activate the graphemes of a target word gives rise to letter perseveration errors. We also provide evidence that, in some individuals, a failure-to-inhibit deficit may additionally contribute to the production of perseveration errors. PMID:22178232
Measuring quality in anatomic pathology.
Raab, Stephen S; Grzybicki, Dana Marie
2008-06-01
This article focuses mainly on diagnostic accuracy in measuring quality in anatomic pathology, noting that measuring any quality metric is complex and demanding. The authors discuss standardization and its variability within and across areas of care delivery, as well as efforts to define and measure error in pursuit of pathology quality and patient safety. They propose that data linking error to patient outcome are critical for developing quality improvement initiatives that target errors causing patient harm, and that methods of root cause analysis beyond those traditionally used in cytologic-histologic correlation should be applied when developing error reduction and quality improvement plans.
33 CFR 210.2 - Notice of award.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 210.2 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE PROCUREMENT ACTIVITIES OF THE CORPS OF ENGINEERS § 210.2 Notice of award. The successful bidder... accompany the contract papers which are forwarded for execution. To avoid error, or confusing the notice of...
33 CFR 210.2 - Notice of award.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 210.2 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE PROCUREMENT ACTIVITIES OF THE CORPS OF ENGINEERS § 210.2 Notice of award. The successful bidder... accompany the contract papers which are forwarded for execution. To avoid error, or confusing the notice of...
33 CFR 210.2 - Notice of award.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 210.2 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE PROCUREMENT ACTIVITIES OF THE CORPS OF ENGINEERS § 210.2 Notice of award. The successful bidder... accompany the contract papers which are forwarded for execution. To avoid error, or confusing the notice of...
33 CFR 210.2 - Notice of award.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 210.2 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE PROCUREMENT ACTIVITIES OF THE CORPS OF ENGINEERS § 210.2 Notice of award. The successful bidder... accompany the contract papers which are forwarded for execution. To avoid error, or confusing the notice of...
Conflict Monitoring in Dual Process Theories of Thinking
ERIC Educational Resources Information Center
De Neys, Wim; Glumicic, Tamara
2008-01-01
Popular dual process theories characterize human thinking as an interplay between an intuitive-heuristic and a demanding-analytic reasoning process. Although monitoring the output of the two systems for conflict is crucial to avoiding decision-making errors, there are widely different views on the efficiency of this monitoring process. Kahneman…
Code of Federal Regulations, 2013 CFR
2013-01-01
...) Credit means the right granted by a creditor to an applicant to defer payment of a debt, incur debt and defer its payment, or purchase property or services and defer payment therefor. (k) Credit card means... notwithstanding the maintenance of procedures reasonably adapted to avoid such errors. (t) Judgmental system of...
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Credit means the right granted by a creditor to an applicant to defer payment of a debt, incur debt and defer its payment, or purchase property or services and defer payment therefor. (k) Credit card means... notwithstanding the maintenance of procedures reasonably adapted to avoid such errors. (t) Judgmental system of...
Code of Federal Regulations, 2014 CFR
2014-01-01
...) Credit means the right granted by a creditor to an applicant to defer payment of a debt, incur debt and defer its payment, or purchase property or services and defer payment therefor. (k) Credit card means... notwithstanding the maintenance of procedures reasonably adapted to avoid such errors. (t) Judgmental system of...
Sampling command generator corrects for noise and dropouts in recorded data
NASA Technical Reports Server (NTRS)
Anderson, T. O.
1973-01-01
Generator measures the period between zero crossings of a reference signal and accepts as correct timing points only those zero crossings which occur acceptably close to the nominal time predicted from the last accepted command. Unidirectional crossover points are used exclusively, so errors from analog nonsymmetry of the crossover detector are avoided.
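The acceptance logic described in this brief can be sketched roughly as follows. The sampling rate, nominal period, and tolerance are illustrative assumptions, and predicting an integer number of periods ahead is one plausible reading of how the scheme rides through dropouts; it is not taken from the original design.

```python
# Sketch: keep only positive-going zero crossings that fall within a
# tolerance window around the time predicted from the last accepted
# crossing; spurious crossings (noise) and missing ones (dropouts)
# are thereby tolerated.
def accept_crossings(samples, dt, nominal_period, tol):
    """samples: signal values at spacing dt; returns accepted crossing times."""
    accepted = []
    for i in range(1, len(samples)):
        if samples[i - 1] < 0.0 <= samples[i]:   # unidirectional: positive-going
            t = i * dt
            if not accepted:
                accepted.append(t)               # first crossing seeds the timing
                continue
            # Assumed dropout handling: the true crossing may lie an
            # integer number of nominal periods after the last accepted one.
            k = round((t - accepted[-1]) / nominal_period)
            if k >= 1 and abs(t - (accepted[-1] + k * nominal_period)) <= tol:
                accepted.append(t)               # else: rejected as noise

    return accepted

# e.g., a 1 Hz reference sampled at 100 Hz with an injected glitch
import math
sig = [math.sin(2 * math.pi * n / 100.0) for n in range(300)]
sig[175] = 0.1   # glitch creating a spurious positive-going crossing
print(accept_crossings(sig, 0.01, 1.0, 0.05))   # glitch at t=1.75 is rejected
```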
La parole, vue et prise par les étudiants (Speech as Seen and Understood by Students).
ERIC Educational Resources Information Center
Gajo, Laurent, Ed.; Jeanneret, Fabrice, Ed.
1998-01-01
Articles on speech and second language learning include: "Les séquences de correction en classe de langue seconde: évitement du 'non' explicite" ("Error Correction Sequences in Second Language Class: Avoidance of the Explicit 'No'") (Anne-Lise de Bosset); "Analyse hiérarchique et fonctionnelle du discours: conversations…