Arendrup, Maiken Cavling; Garcia-Effron, Guillermo; Lass-Flörl, Cornelia; Lopez, Alicia Gomez; Rodriguez-Tudela, Juan-Luis; Cuenca-Estrella, Manuel; Perlin, David S.
2010-01-01
This study compared nine susceptibility testing methods and 12 endpoints for anidulafungin, caspofungin, and micafungin with the same collection of blinded FKS hot spot mutant (n = 29) and wild-type isolates (n = 94). The susceptibility tests included EUCAST Edef 7.1, agar dilution, Etest, and disk diffusion with RPMI-1640 plus 2% glucose (2G) and IsoSensitest-2G media and CLSI M27A-3. Microdilution plates were read after 24 and 48 h. The following test parameters were evaluated: fks hot spot mutants overlapping the wild-type distribution, distance between the two populations, number of very major errors (VMEs; fks mutants misclassified as susceptible), and major errors (MEs; wild-type isolates classified as resistant) using a wild-type upper-limit value (WT-UL; two twofold dilutions higher than the MIC50) as the susceptibility breakpoint. The methods with the lowest number of errors (given as VMEs/MEs) across the three echinocandins were CLSI (12%/1%), agar dilution with RPMI-2G medium (14%/0%), and Etest with RPMI-2G medium (8%/3%). The fewest errors overall were observed for anidulafungin (4%/1% for EUCAST, 4%/3% for CLSI, and 3%/9% for Etest with RPMI-2G). For micafungin, VME rates of 10 to 71% were observed. For caspofungin, agar dilution with either medium was superior (VMEs/MEs of 0%/1%), while CLSI, EUCAST with IsoSensitest-2G medium, and Etest were less optimal (VMEs of 7%, 10%, and 10%, respectively). Applying the CLSI breakpoint (S ≤ 2 μg/ml) to the CLSI results, 89.2% of fks hot spot mutants were classified as anidulafungin susceptible, 60.7% as caspofungin susceptible, and 92.9% as micafungin susceptible. In conclusion, no test was perfect, but anidulafungin susceptibility testing using the WT-UL to define susceptibility reliably identified fks hot spot mutants. PMID:19884370
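The WT-UL endpoint used above is easy to compute from a wild-type MIC distribution. A minimal Python sketch with hypothetical MIC values on a twofold dilution scale (none of the numbers below come from the study):

```python
import numpy as np

def wt_ul(wild_type_mics):
    """Wild-type upper limit: two twofold dilutions above the MIC50,
    used as the susceptibility breakpoint in the study above."""
    mic50 = np.median(wild_type_mics)
    return mic50 * 4  # two doubling-dilution steps

def vme_me_rates(mutant_mics, wild_type_mics, breakpoint):
    """VME: fks mutant at or below the breakpoint (called susceptible);
    ME: wild-type isolate above the breakpoint (called resistant)."""
    vme = np.mean(np.asarray(mutant_mics) <= breakpoint)
    me = np.mean(np.asarray(wild_type_mics) > breakpoint)
    return vme, me

# Hypothetical anidulafungin MICs (ug/ml), for illustration only
wt = [0.03, 0.06, 0.06, 0.12, 0.12, 0.12, 0.25]
mut = [0.12, 0.5, 1.0, 2.0]
bp = wt_ul(wt)
vme, me = vme_me_rates(mut, wt, bp)
print(f"WT-UL = {bp} ug/ml, VME rate = {vme:.0%}, ME rate = {me:.0%}")
```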
Bobenchik, April M.; Hindler, Janet A.; Giltner, Carmen L.; Saeki, Sandra
2014-01-01
Vitek 2 (bioMérieux, Inc., Durham, NC) is a widely used commercial antimicrobial susceptibility testing system. We compared MIC results obtained by Vitek 2 to those obtained by the Clinical and Laboratory Standards Institute (CLSI) broth microdilution (BMD) reference method for 134 staphylococcal and 84 enterococcal clinical isolates. Nineteen agents were evaluated, including all those available on Vitek 2 for testing staphylococci and enterococci. The resistance phenotypes tested included methicillin-resistant Staphylococcus aureus (MRSA) (n = 58), S. aureus with inducible clindamycin resistance (ICR) (n = 30), trimethoprim-sulfamethoxazole-resistant MRSA (n = 10), vancomycin-resistant Enterococcus (n = 37), high-level gentamicin-resistant Enterococcus (n = 15), linezolid-resistant Enterococcus (n = 5), and daptomycin-nonsusceptible Enterococcus faecalis (n = 6). For the staphylococci, there was 98.9% categorical agreement (CA). There was one very major error (VME) for gentamicin in a Staphylococcus hominis isolate, six VMEs for inducible clindamycin resistance in S. aureus isolates, and two major errors (MEs) for daptomycin, in an S. aureus and a Staphylococcus epidermidis isolate. For the enterococci, there was 97.3% CA. Two VMEs were observed for daptomycin in isolates of E. faecalis, and two MEs, one for high-level gentamicin resistance and one for nitrofurantoin, in E. faecium isolates. Overall, there was 98.3% CA and 99% essential agreement for the testing of staphylococci and enterococci by the Vitek 2. With the exception of detecting ICR in S. aureus, Vitek 2 performed reliably for antimicrobial susceptibility testing of staphylococci and enterococci. PMID:24478467
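Essential and categorical agreement, the two summary statistics reported throughout these method comparisons, reduce to simple computations over paired MICs. A minimal Python sketch with hypothetical data (the S/R breakpoints below are placeholders, not CLSI values):

```python
import math

def essential_agreement(test_mics, ref_mics):
    """EA: test MIC within one doubling dilution of the reference MIC."""
    hits = sum(abs(math.log2(t / r)) <= 1 for t, r in zip(test_mics, ref_mics))
    return hits / len(ref_mics)

def categorize(mic, s_bp, r_bp):
    """S/I/R call from susceptible and resistant breakpoints."""
    return "S" if mic <= s_bp else ("R" if mic >= r_bp else "I")

def categorical_agreement(test_mics, ref_mics, s_bp, r_bp):
    """CA: both methods yield the same S/I/R category."""
    hits = sum(categorize(t, s_bp, r_bp) == categorize(r, s_bp, r_bp)
               for t, r in zip(test_mics, ref_mics))
    return hits / len(ref_mics)

# Hypothetical paired MICs (ug/ml); placeholder breakpoints S <= 2, R >= 8
vitek = [0.5, 1, 2, 16, 16]
bmd = [0.5, 2, 2, 4, 16]
print(essential_agreement(vitek, bmd))                    # 0.8
print(categorical_agreement(vitek, bmd, s_bp=2, r_bp=8))  # 0.8
```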
Idelevich, Evgeny A.; Grunewald, Camilla M.; Wüllenweber, Jörg; Becker, Karsten
2014-01-01
Fungaemia is associated with high mortality rates, and early appropriate antifungal therapy is essential for patient management. However, the classical diagnostic workflow takes up to several days due to the slow growth of yeasts. Therefore, an approach for direct species identification and direct antifungal susceptibility testing (AFST) without prior time-consuming sub-culturing of yeasts from positive blood cultures (BCs) is urgently needed. Yeast cell pellets prepared using the Sepsityper kit were used for direct identification by MALDI-TOF mass spectrometry (MS) and for direct inoculation of the Vitek 2 AST-YS07 card for AFST. For comparison, MALDI-TOF MS and Vitek 2 testing were performed from yeast subculture. A total of twenty-four positive BCs, including twelve C. glabrata, nine C. albicans, two C. dubliniensis and one C. krusei isolate, were processed. Applying modified thresholds for species identification (score ≥1.5 with two identical consecutive propositions), 62.5% of BCs were identified by direct MALDI-TOF MS. AFST results were generated for 72.7% of BCs directly tested by Vitek 2 and for 100% of standardized suspensions from 24 h cultures. Thus, AFST comparison was possible for 70 isolate-antifungal combinations. Essential agreement (minimum inhibitory concentration difference ≤1 double dilution step) was 88.6%. Very major errors (VMEs; false-susceptibility), major errors (false-resistance) and minor errors (false categorization involving an intermediate result) amounted to 33.3% (of resistant isolates), 1.9% (of susceptible isolates) and 1.4%, respectively, giving 90.0% categorical agreement. All VMEs were due to fluconazole or voriconazole. The direct method saved on average 23.5 h for identification and 15.1 h for AFST, compared to routine procedures. However, performance for azole susceptibility testing was suboptimal, and testing from subculture remains indispensable to validate the direct finding. PMID:25489741
Mittman, Scott A.; Huard, Richard C.; Della-Latta, Phyllis; Whittier, Susan
2009-01-01
The performance of the BD Phoenix Automated Microbiology System (BD Diagnostic Systems) was compared to those of the Vitek 2 (bioMérieux), the MicroScan MICroSTREP plus (Siemens), and Etest (bioMérieux) for antibiotic susceptibility tests (AST) of 311 clinical isolates of Streptococcus pneumoniae. The overall essential agreement (EA) between each test system and the broth microdilution reference method for S. pneumoniae AST results was >95%. For Phoenix, the EAs of individual antimicrobial agents ranged from 90.4% (clindamycin) to 100% (vancomycin and gatifloxacin). The categorical agreements (CA) of Phoenix, Vitek 2, MicroScan, and Etest for penicillin were 95.5%, 94.2%, 98.7%, and 97.7%, respectively. The overall CA for Phoenix was 99.3% (1 very major error [VME] and 29 minor errors [mEs]), that for Vitek 2 was 98.8% (7 VMEs and 28 mEs), and those for MicroScan and Etest were 99.5% each (19 and 13 mEs, respectively). The average times to results for Phoenix, Vitek 2, and the manual methods were 12.1 h, 9.8 h, and 24 h, respectively. From these data, the Phoenix AST results demonstrated a high degree of agreement with all systems evaluated, although fewer VMEs were observed with the Phoenix than with the Vitek 2. Overall, both automated systems provided reliable AST results for the S. pneumoniae-antibiotic combinations in half the time required for the manual methods, rendering them more suitable for the demands of expedited reporting in the clinical setting. PMID:19741088
Acidic pH modulation of Na+ channels in trigeminal mesencephalic nucleus neurons.
Kang, In-Sik; Cho, Jin-Hwa; Choi, In-Sun; Kim, Do-Yeon; Jang, Il-Sung
2016-12-07
Cell bodies of trigeminal mesencephalic nucleus (Vmes) neurons are located within the central nervous system, and therefore peripheral as well as central acidosis can modulate the excitability of Vmes neurons. Here, we report the effect of acidic pH on voltage-gated Na+ channels in acutely isolated rat Vmes neurons, studied using a conventional whole-cell patch clamp technique. Acidic pH (pH 6.0) slightly but significantly shifted both the activation and steady-state fast inactivation relationships toward depolarized potentials. However, acidic pH (pH 6.0) had only a minor effect on the inactivation kinetics of voltage-gated Na+ channels. This low sensitivity of voltage-gated Na+ channels to acidic pH may allow Vmes neurons to transduce precise proprioceptive information even under acidic conditions.
75 FR 1285 - Vehicle-Mounted Earth Stations (VMES)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-11
... Earth Stations (VMES) AGENCY: Federal Communications Commission. ACTION: Final Rule; announcement of...-Mounted Earth Stations in Certain Frequency Bands Allocated to the Fixed-Satellite Service, IB Docket No...(i), 4(j), 7(a), 301, 303(c), 303(f), 303(g), 303(r), 303(y) and 308 of the Communications Act of...
Inhibition of muscle spindle afferent activity during masseter muscle fatigue in the rat.
Brunetti, Orazio; Della Torre, Giovannella; Lucchi, Maria Luisa; Chiocchetti, Roberto; Bortolami, Ruggero; Pettorossi, Vito Enrico
2003-09-01
The influence of muscle fatigue on the jaw-closing muscle spindle activity has been investigated by analyzing: (1) the field potentials evoked in the trigeminal motor nucleus (Vmot) by trigeminal mesencephalic nucleus (Vmes) stimulation, (2) the orthodromic and antidromic responses evoked in the Vmes by stimulation of the peripheral and central axons of the muscle proprioceptive afferents, and (3) the extracellular unitary discharge of masseter muscle spindles recorded in the Vmes. The masseter muscle was fatigued by prolonged tetanic masseter nerve electrical stimulation. Pre- and postsynaptic components of the potentials evoked in the Vmot showed a significant reduction in amplitude following muscle fatigue. Orthodromic and antidromic potentials recorded in the Vmes also showed a similar amplitude decrease. Furthermore, muscle fatigue caused a decrease of the discharge frequency of masseter muscle spindle afferents in most of the examined units. The inhibition of the potential amplitude and discharge frequency was strictly correlated with the extent of muscle fatigue and was mediated by the group III and IV afferent muscle fibers activated by fatigue. In fact, the inhibitory effect was abolished by capsaicin injection in the masseter muscle that provokes selective degeneration of small afferent muscle fibers containing neurokinins. We concluded that fatigue signals originating from the muscle and traveling through capsaicin-sensitive fibers are able to diminish the proprioceptive input by a central presynaptic influence. In the second part of the study, we examined the central projection of the masseter small afferents sensitive to capsaicin at the electron-microscopic level. Fiber degeneration was induced by injecting capsaicin into the masseter muscle. Degenerating terminals were found on the soma and stem process in Vmes and on the dendritic tree of neurons in Vmot. This suggests that small muscle afferents may influence the muscle spindle activity through direct synapses on somata in Vmes and on dendrites of neurons in Vmot.
Arendrup, Maiken Cavling; Park, Steven; Brown, Steven; Pfaller, Michael; Perlin, David S.
2011-01-01
Disk diffusion testing has recently been standardized by the CLSI, and susceptibility breakpoints have been established for several antifungal compounds. For caspofungin, 5-μg disks are approved, and for micafungin, 10-μg disks are under evaluation. We evaluated the performances of caspofungin and micafungin disk testing using a panel of Candida isolates with and without known FKS echinocandin resistance mechanisms. Disk diffusion and microdilution assays were performed strictly according to CLSI documents M44-A2 and M27-A3. Eighty-nine clinical Candida isolates were included: Candida albicans (20 isolates/10 mutants), C. glabrata (19 isolates/10 mutants), C. dubliniensis (2 isolates/1 mutant), C. krusei (16 isolates/3 mutants), C. parapsilosis (14 isolates/0 mutants), and C. tropicalis (18 isolates/4 mutants). Quality control strains were C. parapsilosis ATCC 22019 and C. krusei ATCC 6258. The correlations between zone diameters and MIC results were good for both compounds, with identical susceptibility classifications for 93.3% of the isolates by applying the current CLSI breakpoints. However, the numbers of fks hot spot mutant isolates misclassified as being susceptible (S) (very major errors [VMEs]) were high (61% for caspofungin [S, ≥11 mm] and 93% for micafungin [S, ≥14 mm]). Changing the disk diffusion breakpoint to S at ≥22 mm significantly improved the discrimination. For caspofungin, 1 VME was detected (a C. tropicalis isolate with an F76S substitution) (3.5%), and for micafungin, 10 VMEs were detected, the majority of which were for C. glabrata (8/10). The broadest separation between zone diameter ranges for wild-type (WT) and mutant isolates was seen for caspofungin (6 to 12 mm versus −4 to 7 mm). In conclusion, caspofungin disk diffusion testing with a modified breakpoint led to excellent separation between WT and mutant isolates for all Candida species. PMID:21357293
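Re-evaluating a disk diffusion breakpoint, as done above when moving the caspofungin susceptibility cutoff from ≥11 mm to ≥22 mm, amounts to re-counting the mutants whose zones fall on the susceptible side. A minimal Python sketch with hypothetical zone diameters (note that for disk diffusion, larger zones mean more susceptible, the opposite orientation to MICs):

```python
def vme_rate(mutant_zones_mm, s_breakpoint_mm):
    """VME: an fks mutant whose inhibition zone meets the susceptibility
    breakpoint, i.e. the mutant is misclassified as susceptible."""
    vmes = [z for z in mutant_zones_mm if z >= s_breakpoint_mm]
    return len(vmes) / len(mutant_zones_mm)

# Hypothetical caspofungin zones (mm) for fks hot spot mutants
mutant_zones = [6, 12, 15, 18, 23]
for bp in (11, 22):  # original vs modified susceptibility breakpoint
    print(f"S >= {bp} mm: VME rate = {vme_rate(mutant_zones, bp):.0%}")
```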
Penney, Andrew J.; Guinotte, John M.
2013-01-01
United Nations General Assembly Resolution 61/105 on sustainable fisheries (UNGA 2007) establishes three difficult questions for participants in high-seas bottom fisheries to answer: 1) Where are vulnerable marine ecosystems (VMEs) likely to occur?; 2) What is the likelihood of fisheries interaction with these VMEs?; and 3) What might qualify as adequate conservation and management measures to prevent significant adverse impacts? This paper develops an approach to answering these questions for bottom trawling activities in the Convention Area of the South Pacific Regional Fisheries Management Organisation (SPRFMO) within a quantitative risk assessment and cost:benefit analysis framework. The predicted distribution of deep-sea corals from habitat suitability models is used to answer the first question. Distribution of historical bottom trawl effort is used to answer the second, with estimates of seabed areas swept by bottom trawlers being used to develop discounting factors for reduced biodiversity in previously fished areas. These are used in a quantitative ecological risk assessment approach to guide spatial protection planning to address the third question. The coral VME likelihood (average, discounted, predicted coral habitat suitability) of existing spatial closures implemented by New Zealand within the SPRFMO area is evaluated. Historical catch is used as a measure of cost to industry in a cost:benefit analysis of alternative spatial closure scenarios. Results indicate that current closures within the New Zealand SPRFMO area bottom trawl footprint are suboptimal for protection of VMEs. Examples of alternative trawl closure scenarios are provided to illustrate how the approach could be used to optimise protection of VMEs under chosen management objectives, balancing protection of VMEs against economic loss to commercial fishers from closure of historically fished areas. PMID:24358162
Bobenchik, April M.; Deak, Eszter; Hindler, Janet A.; Charlton, Carmen L.
2014-01-01
Vitek 2 (bioMérieux Inc., Durham, NC) is a widely used commercial antimicrobial susceptibility test system. We compared the MIC results obtained using the Vitek 2 AST-GN69 and AST-XN06 cards to those obtained by CLSI broth microdilution (BMD) for 255 isolates of Enterobacteriaceae, including 25 isolates of carbapenem-resistant Enterobacteriaceae. In total, 25 antimicrobial agents were examined. For 10 agents, the MIC data were evaluated using two sets of breakpoints: (i) the Vitek 2 breakpoints, which utilized the 2009 FDA breakpoints at the time of the study and are equivalent to the 2009 CLSI M100-S19 breakpoints, and (ii) the 2014 CLSI M100-S24 breakpoints. There was an overall essential agreement (EA) of 98.7%. The categorical agreement (CA) was 95.5% using the Vitek 2 breakpoints and 95.7% using the CLSI breakpoints. There was 1 very major error (VME) (0.05%) observed using the Vitek 2 breakpoints (cefazolin) and 8 VMEs (0.5%) using the CLSI breakpoints (2 each for aztreonam, cefepime, and ceftriaxone, and 1 each for cefazolin and ceftazidime). Fifteen major errors (MEs) (0.4%) were noted using the Vitek 2 breakpoints and 8 (0.5%) using the CLSI breakpoints. Overall, the Vitek 2 performance was comparable to that of BMD for testing a limited number of Enterobacteriaceae commonly isolated by clinical laboratories. Ongoing studies are warranted to assess performance in isolates with emerging resistance. PMID:25540403
1989-03-01
[garbled OCR fragment of a device netlist omitted] ...research on this project had two distinct but overlapping phases: consolidation of work done during the previous two years and developing new...diagnosis when VMES notices a diagnostic short-cut from the dual device model is present; this will be discussed in the section on the "Dual Device Model"
Kenchington, Ellen; Murillo, Francisco Javier; Lirette, Camille; Sacau, Mar; Koen-Alonso, Mariano; Kenny, Andrew; Ollerhead, Neil; Wareham, Vonda; Beazley, Lindsay
2014-01-01
The United Nations General Assembly Resolution 61/105, concerning sustainable fisheries in the marine ecosystem, calls for the protection of vulnerable marine ecosystems (VME) from destructive fishing practices. Subsequently, the Food and Agriculture Organization (FAO) produced guidelines for identification of VME indicator species/taxa to assist in the implementation of the resolution, but recommended the development of case-specific operational definitions for their application. We applied kernel density estimation (KDE) to research vessel trawl survey data from inside the fishing footprint of the Northwest Atlantic Fisheries Organization (NAFO) Regulatory Area in the high seas of the northwest Atlantic to create biomass density surfaces for four VME indicator taxa: large-sized sponges, sea pens, small and large gorgonian corals. These VME indicator taxa were identified previously by NAFO using the fragility, life history characteristics and structural complexity criteria presented by FAO, along with an evaluation of their recovery trajectories. KDE, a non-parametric neighbour-based smoothing function, has been used previously in ecology to identify hotspots, that is, areas of relatively high biomass/abundance. We present a novel approach of examining relative changes in area under polygons created from encircling successive biomass categories on the KDE surface to identify “significant concentrations” of biomass, which we equate to VMEs. This allows identification of the VMEs from the broader distribution of the species in the study area. We provide independent assessments of the VMEs so identified using underwater images, benthic sampling with other gear types (dredges, cores), and/or published species distribution models of probability of occurrence, as available. For each VME indicator taxon we provide a brief review of their ecological function which will be important in future assessments of significant adverse impact on these habitats here and elsewhere. PMID:25289667
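The KDE-and-polygon procedure described above can be prototyped in a few lines. A minimal Python sketch with synthetic tow positions and biomass weights (scipy's Gaussian kernel with its default bandwidth stands in for whatever kernel NAFO actually used; all numbers are made up):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic survey data: tow positions (km) and sponge biomass (kg) per tow
xy = rng.uniform(0, 100, size=(2, 300))
biomass = rng.lognormal(mean=1.0, sigma=1.5, size=300)

# Biomass-weighted kernel density surface on a regular grid
kde = gaussian_kde(xy, weights=biomass)
gx, gy = np.mgrid[0:100:200j, 0:100:200j]
surface = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
cell_area = (100 / 200) ** 2  # km^2 per grid cell

# Area enclosed by successively higher density categories; an abrupt
# drop in enclosed area flags a "significant concentration" of biomass
for q in (0.50, 0.70, 0.90, 0.95):
    threshold = np.quantile(surface, q)
    area = (surface >= threshold).sum() * cell_area
    print(f"area above the {q:.0%} density quantile: {area:.0f} km^2")
```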
NASA Astrophysics Data System (ADS)
Clark, M. R.; Gardner, J.; Holland, L.; Zeng, C.; Hamilton, J. S.; Rowden, A. A.
2016-02-01
In the New Zealand region vulnerable marine ecosystems (VMEs) are at risk from commercial fishing activity and future seabed mining. Understanding connectivity among VMEs is important for the design of effective spatial management strategies, i.e. a network of protected areas. To date, however, genetic connectivity in the New Zealand region has rarely been documented. As part of a project developing habitat suitability models and spatial management options for VMEs, we used DNA sequence data and microsatellite genotyping to assess genetic connectivity for a range of VME indicator taxa, including the coral Desmophyllum dianthus and the sponges Poecilastra laminaris and Penares palmatoclada. Overall, patterns of connectivity were inconsistent among taxa. Nonetheless, genetic data from each taxon were relevant to inform management at a variety of spatial scales. D. dianthus populations in the Kermadec volcanic arc and the Louisville Seamount Chain were indistinguishable, highlighting the importance of considering source-sink dynamics between populations beyond the EEZ in conservation planning. Poecilastra laminaris populations showed significant divergence across the Chatham Rise, in contrast to P. palmatoclada, which had a uniform haplotypic distribution. However, both sponge species exhibited the highest genetic diversity on the Chatham Rise, suggesting that this area is a genetic hotspot. The spatial heterogeneity of genetic patterns of structure suggests that inclusion of several taxa is necessary to facilitate understanding of regional connectivity patterns, variation in which may be attributed to alternate life history strategies, local hydrodynamic regimes, or in some cases, suboptimal sample sizes. Our findings provide important information for use by environmental managers, including summary maps of genetic diversity and barriers to gene flow, which will be used in spatial management decision-support tools.
NASA Astrophysics Data System (ADS)
Rowden, A. A.; Lundquist, C. J.; Clark, M. R.; Anderson, O. F.; Guinotte, J. M.; Baird, S. J.; Roux, M. J.; Wadhwa, S.
2016-02-01
The South Pacific Regional Fisheries Management Organisation (SPRFMO) Convention includes specific provisions to protect vulnerable marine ecosystems (VMEs). The SPRFMO Commission has determined that the interim measures put in place to protect VMEs would be replaced by an improved system of fishable and closed areas. These closures would effectively represent a preliminary spatial management plan, whereby conservation and management measures are implemented that will result in sustainable fisheries and benthic protection. We used the conservation planning tool Zonation to develop spatial management options that balance the protection of VMEs with utilisation of high-value areas for fishing. Input data included habitat suitability maps, and uncertainties associated with these model predictions, for eleven VME indicator taxa (4 scleractinian coral species; 3 other cnidarian groups (Family Stylasteridae, Order Antipatharia, Order Pennatulacea); 2 classes of sponges (Demospongiae, Hexactinellida); and 2 echinoderm groups (Crinoidea and Brisingida)) at bathyal depths across the entire SPRFMO area (divided into 1 km2 grid cells); New Zealand fishing catch data (for two different time periods and trawl types); naturalness (represented by a proxy variable using the number of trawl tows); and a bioregionalisation scheme. Running various scenario models for spatial planning allowed the cost to fishing to be determined, in terms of the amount of the trawl catch footprint lost if high-priority areas for VME indicator taxa are protected. Generally, the cost to fishing was low given the relatively high proportion of suitable habitat for VME indicator taxa protected. The main outcome of the present study is a demonstration of the practical utility of using available data, including modelled data, and the Zonation conservation planning software tool to develop options for the spatial management of the SPRFMO area.
Lauria, V; Garofalo, G; Fiorentino, F; Massi, D; Milisenda, G; Piraino, S; Russo, T; Gristina, M
2017-08-14
Deep-sea coral assemblages are key components of marine ecosystems that generate habitats for fish and invertebrate communities and act as marine biodiversity hot spots. Because of their life history traits, deep-sea corals are highly vulnerable to human impacts such as fishing. They are an indicator of vulnerable marine ecosystems (VMEs); therefore, their conservation is essential to preserve marine biodiversity. In the Mediterranean Sea, deep-sea coral habitats are associated with commercially important crustaceans; consequently, their abundance has declined dramatically due to the effects of trawling. Marine spatial planning is required to ensure that the conservation of these habitats is achieved. Species distribution models were used to investigate the distribution of two critically endangered octocorals (Funiculina quadrangularis and Isidella elongata) in the central Mediterranean as a function of environmental and fisheries variables. Results show that both species exhibit species-specific habitat preferences and spatial patterns in response to environmental variables, but the impact of trawling on their distribution differed. In particular, F. quadrangularis can overlap with fishing activities, whereas I. elongata occurs exclusively where fishing is low or absent. This study represents the first attempt to identify key areas for the protection of soft and compact mud VMEs in the central Mediterranean Sea.
A systematic approach towards the identification and protection of vulnerable marine ecosystems
Ardron, Jeff A.; Clark, Malcolm R.; Penney, Andrew J.; Hourigan, Thomas F.; Rowden, Ashley A.; Dunstan, Piers K.; Watling, Les; Shank, Timothy M.; Tracey, Di M.; Dunn, Matthew R.; Parker, Steven J.
2014-01-01
The United Nations General Assembly in 2006 and 2009 adopted resolutions that call for the identification and protection of vulnerable marine ecosystems (VMEs) from significant adverse impacts of bottom fishing. While general criteria have been produced, there are no guidelines or protocols that elaborate on the process from initial identification through to the protection of VMEs. Here, based upon an expert review of existing practices, a 10-step framework is proposed: (1) Comparatively assess potential VME indicator taxa and habitats in a region; (2) determine VME thresholds; (3) consider areas already known for their ecological importance; (4) compile information on the distributions of likely VME taxa and habitats, as well as related environmental data; (5) develop predictive distribution models for VME indicator taxa and habitats; (6) compile known or likely fishing impacts; (7) produce a predicted VME naturalness distribution (areas of low cumulative impacts); (8) identify areas of higher value to user groups; (9) conduct management strategy evaluations to produce trade-off scenarios; (10) review and re-iterate, until spatial management scenarios are developed that fulfil international obligations and regional conservation and management objectives. To date, regional progress has been piecemeal and incremental. The proposed 10-step framework combines these various experiences into a systematic approach.
47 CFR 25.218 - Off-axis EIRP envelopes for FSS earth station operations.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) ESV and VMES applications, (2) Analog video earth station applications, (3) Applications for feeder... [flattened table of off-axis EIRP density limits (dBW/4 kHz) over successive ranges of off-axis angle θ omitted as unrecoverable] ...where θ is defined in...
1990-12-01
expected values. However, because the same good/bad output pattern of a device always gives rise to the same initial ordering, the method has its limitation...For any device and good/bad output pattern, it is easy to come up with an example on which the method does poorly in the sense that the actual...submodule is less likely to be faulty if it is connected to more good primary outputs. Initially, candidates are ordered according to their relationships with
Ro, J Y; Capra, N F
2001-05-01
Transient noxious chemical stimulation of small diameter muscle afferents modulates jaw movement-related responses of caudal brainstem neurons. While it is likely that the effect is mediated from the spindle afferents in the mesencephalic nucleus (Vmes) via the caudally projecting Probst's tract, the mechanisms of pain-induced modulation of jaw muscle spindle afferents are not known. In the present study, we tested the hypothesis that jaw muscle nociceptors gain access to muscle spindle afferents in the same muscle via central mechanisms and alter their sensitivity. Thirty-five neurons recorded from the Vmes were characterized as muscle spindle afferents based on their responses to passive jaw movements, muscle palpation, and electrical stimulation of the masseter nerve. Each cell was tested by injecting a small volume (250 microl) of either 5% hypertonic and/or isotonic saline into the receptor-bearing muscle. Twenty-nine units were tested with 5% hypertonic saline, of which 79% (23/29) showed significant modulation of mean firing rates (MFRs) during one or more phases of ramp-and-hold movements. Among the muscle spindle primary-like units (n = 12), MFRs of four units were facilitated, five were reduced, two showed mixed responses, and one was unchanged. Among the secondary-like units (n = 17), MFRs of nine were facilitated, three were reduced, and five were unchanged. Thirteen units were tested with isotonic saline, of which 77% showed no significant changes in MFRs. Further analysis revealed that the hypertonic saline not only affected the overall output of muscle spindle afferents, but also increased the variability of firing and altered the relationship between afferent signal and muscle length. These results demonstrate that activation of muscle nociceptors significantly affects the proprioceptive properties of jaw muscle spindles via central neural mechanisms. The changes can have deleterious effects on oral motor function as well as kinesthetic sensibility.
Marine litter in submarine canyons of the Bay of Biscay
NASA Astrophysics Data System (ADS)
van den Beld, Inge M. J.; Guillaumont, Brigitte; Menot, Lénaïck; Bayle, Christophe; Arnaud-Haond, Sophie; Bourillet, Jean-François
2017-11-01
Marine litter is a matter of increasing concern worldwide, from shallow seas to the open ocean and from beaches to the deep-seafloor. Indeed, the deep sea may be the ultimate repository of a large proportion of litter in the ocean. We used footage acquired with a Remotely Operated Vehicle (ROV) and a towed camera to investigate the distribution and composition of litter in the submarine canyons of the Bay of Biscay. This bay contains many submarine canyons housing Vulnerable Marine Ecosystems (VMEs) such as scleractinian coral habitats. VMEs are considered to be important for fish and they increase the local biodiversity. The objectives of the study were to investigate and discuss: (i) litter density, (ii) the principal sources of litter, (iii) the influence of environmental factors on the distribution of litter, and (iv) the impact of litter on benthic communities. Litter was found in all 15 canyons and at three sites on the edge of the continental shelf/canyon, in 25 of 29 dives. The Belle-île and Arcachon Canyons contained the largest amounts of litter, up to 12.6 and 9.5 items per 100 images respectively. Plastic items were the most abundant (42%), followed by fishing-related items (16%). The litter had both a maritime and a terrestrial origin. The main sources could be linked to fishing activities, major shipping lanes and river discharges. Litter appeared to accumulate at water depths of 801-1100 m and 1401-1700 m. In the deeper of these two depth ranges, litter accumulated on a geologically structured area, accounting for its high frequency at this depth. A larger number of images taken in areas of coral in the shallower of these two depth ranges may account for the high frequency of litter detection at this depth. A larger number of litter items, including plastic objects in particular, were observed on geological structures and in coral areas than on areas of bare substratum. The distribution of fishing-related items was similar for the various types of relief. Litter items were mostly colonised by scleractinian corals and hydroids. Several fish species and a lithodid crab seemed to associate with the accumulated litter. This extensive study showed litter to be widely distributed in the submarine canyons of the Bay of Biscay. These findings increase our understanding of the distribution of litter, its composition and accumulation and its impact on benthic communities.
The global distribution of deep-water Antipatharia habitat
NASA Astrophysics Data System (ADS)
Yesson, Chris; Bedford, Faye; Rogers, Alex D.; Taylor, Michelle L.
2017-11-01
Antipatharia are a diverse group of corals with many species found in deep water. Many Antipatharia are habitat for associates, have extreme longevity, and some species can occur beyond 8500 m depth. As they are major constituents of 'coral gardens', which are Vulnerable Marine Ecosystems (VMEs), knowledge of their distribution and environmental requirements is an important pre-requisite for informed conservation planning, particularly where the expense and difficulty of deep-sea sampling prohibits comprehensive surveys. This study uses a global database of Antipatharia distribution data to perform habitat suitability modelling using the Maxent methodology to estimate the global extent of black coral habitat suitability. The model of habitat suitability is driven by temperature, but there is notable influence from other variables of topography, surface productivity and oxygen levels. This model can be used to predict areas of suitable habitat, which can be useful for conservation planning. The global distribution of Antipatharia habitat suitability shows a marked contrast with the distribution of specimen observations, indicating that many potentially suitable areas have not been sampled and that sampling effort has been disproportionately concentrated in shallow, accessible areas inside marine protected areas (MPAs). Although 25% of Antipatharia observations are located in MPAs, only 7-8% of predicted suitable habitat is protected, which falls short of the Convention on Biological Diversity target to protect 10% of ocean habitats by 2020.
Tfelt-Hansen, Peer
2015-03-01
There are two types of errors in the use of references in the scientific literature: citation errors and quotation errors; in reviews, these errors have mainly been evaluated quantitatively. Quotation errors are the major problem, and one review reported 6% major quotation errors. The objective of this listing of quotation errors is to illustrate, by qualitative analysis of 10 different major quotation errors, how and possibly why authors misquote references. The author selected for review the first 10 different consecutive major quotation errors encountered in his reading of the headache literature. The characteristics of the 10 quotation errors varied considerably. Thus, in a review of migraine therapy in a very prestigious medical journal, the superiority of a new treatment (sumatriptan) vs an old treatment (aspirin plus metoclopramide) was claimed despite no significant difference for the primary efficacy measure in the trial. One author, in a scientific debate, referred to the lack of dilation of the middle meningeal artery in spontaneous migraine despite the fact that only 1 migraine attack was studied. The possibilities for creative major quotation errors in the medical literature are most likely infinite. Qualitative evaluations of major quotation errors, such as the present one, will hopefully result in more general awareness of quotation problems in the medical literature. Even if the final responsibility for correct use of quotations lies with the authors, the referees, the experts with the knowledge needed to spot quotation errors, should be more involved in ensuring correct and fair use of references. Finally, this paper suggests that major misleading quotations, if pointed out by readers of the journal, should, as a rule, be corrected by way of an erratum statement. © 2015 American Headache Society.
Neural evidence for enhanced error detection in major depressive disorder.
Chiu, Pearl H; Deldin, Patricia J
2007-04-01
Anomalies in error processing have been implicated in the etiology and maintenance of major depressive disorder. In particular, depressed individuals exhibit heightened sensitivity to error-related information and negative environmental cues, along with reduced responsivity to positive reinforcers. The authors examined the neural activation associated with error processing in individuals diagnosed with and without major depression and the sensitivity of these processes to modulation by monetary task contingencies. The error-related negativity and error positivity components of the event-related potential were used to characterize error monitoring in individuals with major depressive disorder and the degree to which these processes are sensitive to modulation by monetary reinforcement. Nondepressed comparison subjects (N=17) and depressed individuals (N=18) performed a flanker task under two external motivation conditions (i.e., monetary reward for correct responses and monetary loss for incorrect responses) and a nonmonetary condition. After each response, accuracy feedback was provided. The error-related negativity component assessed the degree of anomaly in initial error detection, and the error positivity component indexed recognition of errors. Across all conditions, the depressed participants exhibited greater amplitude of the error-related negativity component, relative to the comparison subjects, and equivalent error positivity amplitude. In addition, the two groups showed differential modulation by task incentives in both components. These data implicate exaggerated early error-detection processes in the etiology and maintenance of major depressive disorder. Such processes may then recruit excessive neural and cognitive resources that manifest as symptoms of depression.
What errors do peer reviewers detect, and does training improve their ability to detect them?
Schroter, Sara; Black, Nick; Evans, Stephen; Godlee, Fiona; Osorio, Lyda; Smith, Richard
2008-10-01
To analyse data from a trial and report the frequencies with which major and minor errors are detected at a general medical journal, the types of errors missed, and the impact of training on error detection. 607 peer reviewers at the BMJ were randomized to two intervention groups receiving different types of training (face-to-face training or a self-taught package) and a control group. Each reviewer was sent the same three test papers over the study period, each of which had nine major and five minor methodological errors inserted. BMJ peer reviewers. The quality of review, assessed using a validated instrument, and the number and type of errors detected before and after training. The number of major errors detected varied over the three papers. The interventions had small effects. At baseline (Paper 1), reviewers found an average of 2.58 of the nine major errors, with no notable difference between the groups. The mean number of errors reported was similar for the second and third papers, 2.71 and 3.0, respectively. Biased randomization was the error detected most frequently in all three papers, with over 60% of the reviewers who rejected the papers identifying this error. Reviewers who did not reject the papers found fewer errors, and the proportion finding biased randomization was less than 40% for each paper. Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of the study. Short training packages have only a slight impact on improving error detection.
Bulik, Catharine C.; Fauntleroy, Kathy A.; Jenkins, Stephen G.; Abuali, Mayssa; LaBombardi, Vincent J.; Nicolau, David P.; Kuti, Joseph L.
2010-01-01
We describe the levels of agreement between broth microdilution, Etest, Vitek 2, Sensititre, and MicroScan methods to accurately define the meropenem MIC and categorical interpretation of susceptibility against carbapenemase-producing Klebsiella pneumoniae (KPC). A total of 46 clinical K. pneumoniae isolates with KPC genotypes, all modified Hodge test and blaKPC positive, collected from two hospitals in NY were included. Results obtained by each method were compared with those from broth microdilution (the reference method), and agreement was assessed based on MICs and Clinical Laboratory Standards Institute (CLSI) interpretative criteria using 2010 susceptibility breakpoints. Based on broth microdilution, 0%, 2.2%, and 97.8% of the KPC isolates were classified as susceptible, intermediate, and resistant to meropenem, respectively. Results from MicroScan demonstrated the most agreement with those from broth microdilution, with 95.6% agreement based on the MIC and 2.2% classified as minor errors, and no major or very major errors. Etest demonstrated 82.6% agreement with broth microdilution MICs, a very major error rate of 2.2%, and a minor error rate of 2.2%. Vitek 2 MIC agreement was 30.4%, with a 23.9% very major error rate and a 39.1% minor error rate. Sensititre demonstrated MIC agreement for 26.1% of isolates, with a 3% very major error rate and a 26.1% minor error rate. Application of FDA breakpoints had little effect on minor error rates but increased very major error rates to 58.7% for Vitek 2 and Sensititre. Meropenem MIC results and categorical interpretations for carbapenemase-producing K. pneumoniae differ by methodology. Confirmation of testing results is encouraged when an accurate MIC is required for antibiotic dosing optimization. PMID:20484603
Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.
1999-01-01
Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809
Diffraction analysis of sidelobe characteristics of optical elements with ripple error
NASA Astrophysics Data System (ADS)
Zhao, Lei; Luo, Yupeng; Bai, Jian; Zhou, Xiangdong; Du, Juan; Liu, Qun; Luo, Yujie
2018-03-01
The ripple errors of a lens can lead to optical damage in high-energy laser systems. Analysis of the sidelobe on the focal plane caused by ripple error provides a reference for evaluating the error and the imaging quality. In this paper, we analyze the diffraction characteristics of the sidelobe of optical elements with ripple errors. First, we analyze the characteristics of ripple error and build the relationship between ripple error and sidelobe: the sidelobe results from the diffraction of the ripple error, and the ripple error tends to be periodic due to the fabrication method used on the optical surface. Simulated experiments are carried out based on the angular spectrum method by characterizing the ripple error as rotationally symmetric periodic structures. The influence of two major parameters of the ripple error, spatial frequency and peak-to-valley value, on the sidelobe is discussed. The results indicate that spatial frequency and peak-to-valley value both affect the sidelobe at the image plane: the peak-to-valley value is the major factor affecting the energy proportion of the sidelobe, while the spatial frequency is the major factor affecting the distribution of the sidelobe at the image plane.
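A minimal Python sketch of the kind of simulation described: a plane wave passes a pupil carrying a rotationally symmetric sinusoidal ripple, an ideal thin lens is applied, and the field at the focal plane is obtained by FFT-based angular spectrum propagation. All parameters (wavelength, grid, ripple frequency and height, focal length) are illustrative assumptions, not values from the paper:

```python
import numpy as np

wavelength = 1.053e-6            # m (assumed)
n, width = 1024, 20e-3           # grid points, simulation window (m)
dx = width / n
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)

# Rotationally symmetric periodic ripple: spatial frequency f_rip (1/m)
# and peak-to-valley height pv (m) produce a phase error on the beam
f_rip, pv = 2e3, 50e-9
phase = (2 * np.pi / wavelength) * (pv / 2) * np.sin(2 * np.pi * f_rip * r)
pupil = (r < 8e-3) * np.exp(1j * phase)   # clear aperture with ripple

# Ideal thin lens, then angular-spectrum propagation to its focal plane
flens = 2.0                               # m
k = 2 * np.pi / wavelength
lens = np.exp(-1j * k * r**2 / (2 * flens))
fx = np.fft.fftfreq(n, dx)
FX, FY = np.meshgrid(fx, fx)
kz = 2 * np.pi * np.sqrt(np.maximum(0.0, wavelength**-2 - FX**2 - FY**2))
focus = np.fft.ifft2(np.fft.fft2(pupil * lens) * np.exp(1j * kz * flens))
intensity = np.abs(focus) ** 2

# Energy fraction outside the central lobe: a crude sidelobe metric; the
# ripple throws diffraction orders to radius ~ wavelength * flens * f_rip
main_lobe = r < 1e-3
print("sidelobe fraction:", intensity[~main_lobe].sum() / intensity.sum())
```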
2011-10-01
Phoenix, and Vitek 2 systems). Discordant results were categorized as very major errors (VME), major errors (ME), and minor errors (mE). DNA sequences... [report-form header residue omitted; report dated 01 OCT 2011, titled "Carbapenem Susceptibility Testing Errors Using Three Automated..."] ...FDA standards required for device approval (11). The Vitek 2 method was the only automated susceptibility method in our study that satisfied FDA
Samsiah, A; Othman, Noordin; Jamshed, Shazia; Hassali, Mohamed Azmi; Wan-Mohaina, W M
2016-12-01
Reporting and analysing data on medication errors (MEs) is important and contributes to a better understanding of the error-prone environment. This study aims to examine the characteristics of errors submitted to the National Medication Error Reporting System (MERS) in Malaysia. A retrospective review of reports received from 1 January 2009 to 31 December 2012 was undertaken. Descriptive statistics were applied. A total of 17,357 reported MEs were reviewed. The majority of errors were from public-funded hospitals. Near misses were classified in 86.3% of the errors. The majority of errors (98.1%) had no harmful effects on the patients. Prescribing contributed more than three-quarters of the overall errors (76.1%). Pharmacists detected and reported the majority of errors (92.1%). Cases of erroneous dosage or strength of medicine (30.75%) were the leading type of error, whilst cardiovascular drugs (25.4%) were the most common category of drug involved. MERS provides rich information on the characteristics of reported MEs. The low contribution to reporting from healthcare facilities other than government hospitals, and from non-pharmacists, requires further investigation. Thus, a feasible approach to promote MERS among healthcare providers in both the public and private sectors needs to be formulated and strengthened. Preventive measures to minimise MEs should be directed at improving prescribing competency among the fallible prescribers identified.
Death Certification Errors and the Effect on Mortality Statistics.
McGivern, Lauri; Shulman, Leanne; Carney, Jan K; Shapiro, Steven; Bundock, Elizabeth
Errors in cause and manner of death on death certificates are common and affect families, mortality statistics, and public health research. The primary objective of this study was to characterize errors in the cause and manner of death on death certificates completed by non-Medical Examiners. A secondary objective was to determine the effects of errors on national mortality statistics. We retrospectively compared 601 death certificates completed between July 1, 2015, and January 31, 2016, from the Vermont Electronic Death Registration System with clinical summaries from medical records. Medical Examiners, blinded to the original certificates, reviewed the summaries, generated mock certificates, and compared the mock certificates with the originals. They then graded errors using a scale from 1 to 4 (higher numbers indicating greater impact on interpretation of the cause) to determine the prevalence of minor and major errors. They also compared International Classification of Diseases, 10th Revision (ICD-10) codes on original certificates with those on mock certificates. Of 601 original death certificates, 319 (53%) had errors; 305 (51%) had major errors; and 59 (10%) had minor errors. We found no significant differences by certifier type (physician vs nonphysician). We did find significant differences in major errors by place of death (P < .001). Certificates for deaths occurring in hospitals were more likely to have major errors than certificates for deaths occurring at a private residence (59% vs 39%, P < .001). A total of 580 (93%) death certificates had a change in ICD-10 codes between the original and mock certificates, of which 348 (60%) had a change in the underlying cause-of-death code. Error rates on death certificates in Vermont are high and extend to ICD-10 coding, thereby affecting national mortality statistics. Surveillance and certifier education must expand beyond local and state efforts. Simplifying and standardizing the underlying literal text for cause of death may improve accuracy, decrease coding errors, and improve national mortality statistics.
Danielson, Patrick; Yang, Limin; Jin, Suming; Homer, Collin G.; Napton, Darrell
2016-01-01
We developed a method that analyzes the quality of the cultivated cropland class mapped in the USA National Land Cover Database (NLCD) 2006. The method integrates multiple geospatial datasets and a Multi Index Integrated Change Analysis (MIICA) change detection method that captures spectral changes to identify the spatial distribution and magnitude of potential commission and omission errors for the cultivated cropland class in NLCD 2006. The majority of the commission and omission errors in NLCD 2006 are in areas where cultivated cropland is not the most dominant land cover type. The errors are primarily attributed to the less accurate training dataset derived from the National Agricultural Statistics Service Cropland Data Layer dataset. In contrast, error rates are low in areas where cultivated cropland is the dominant land cover. Agreement between model-identified commission errors and independently interpreted reference data was high (79%). Agreement was low (40%) for omission error comparison. The majority of the commission errors in the NLCD 2006 cultivated crops were confused with low-intensity developed classes, while the majority of omission errors were from herbaceous and shrub classes. Some errors were caused by inaccurate land cover change from misclassification in NLCD 2001 and the subsequent land cover post-classification process.
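Commission and omission errors for a single mapped class, as analyzed above, reduce to a two-way comparison of mapped labels against reference labels. A minimal Python sketch with synthetic labels (not NLCD data):

```python
import numpy as np

def commission_omission(mapped, reference, target="crop"):
    """Commission: pixel mapped as the target class but reference disagrees.
    Omission: reference is the target class but the map missed it."""
    is_mapped = np.asarray(mapped) == target
    is_ref = np.asarray(reference) == target
    commission = (is_mapped & ~is_ref).sum() / is_mapped.sum()
    omission = (~is_mapped & is_ref).sum() / is_ref.sum()
    return commission, omission

# Synthetic per-pixel labels, for illustration only
mapped    = ["crop", "crop", "crop", "shrub", "developed", "crop"]
reference = ["crop", "shrub", "crop", "crop", "developed", "crop"]
c, o = commission_omission(mapped, reference)
print(f"commission error = {c:.0%}, omission error = {o:.0%}")
```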
Disclosing Medical Errors to Patients: Attitudes and Practices of Physicians and Trainees
Jones, Elizabeth W.; Wu, Barry J.; Forman-Hoffman, Valerie L.; Levi, Benjamin H.; Rosenthal, Gary E.
2007-01-01
BACKGROUND Disclosing errors to patients is an important part of patient care, but the prevalence of disclosure, and factors affecting it, are poorly understood. OBJECTIVE To survey physicians and trainees about their practices and attitudes regarding error disclosure to patients. DESIGN AND PARTICIPANTS Survey of faculty physicians, resident physicians, and medical students in Midwest, Mid-Atlantic, and Northeast regions of the United States. MEASUREMENTS Actual error disclosure; hypothetical error disclosure; attitudes toward disclosure; demographic factors. RESULTS Responses were received from 538 participants (response rate = 77%). Almost all faculty and residents responded that they would disclose a hypothetical error resulting in minor (97%) or major (93%) harm to a patient. However, only 41% of faculty and residents had disclosed an actual minor error (resulting in prolonged treatment or discomfort), and only 5% had disclosed an actual major error (resulting in disability or death). Moreover, 19% acknowledged not disclosing an actual minor error and 4% acknowledged not disclosing an actual major error. Experience with malpractice litigation was not associated with less actual or hypothetical error disclosure. Faculty were more likely than residents and students to disclose a hypothetical error and less concerned about possible negative consequences of disclosure. Several attitudes were associated with greater likelihood of hypothetical disclosure, including the belief that disclosure is right even if it comes at a significant personal cost. CONCLUSIONS There appears to be a gap between physicians’ attitudes and practices regarding error disclosure. Willingness to disclose errors was associated with higher training level and a variety of patient-centered attitudes, and it was not lessened by previous exposure to malpractice litigation. PMID:17473944
Majority-voted logic fail-sense circuit
NASA Technical Reports Server (NTRS)
Mclyman, W. T.
1977-01-01
The fail-sense circuit has a majority-voted logic component that receives three error voltage signals sensed at a single point by three error amplifiers. If a transistor shorts, only one signal is required for the circuit to operate; if a transistor opens, two signals are required.
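The two-of-three voting rule the abstract describes is easy to state in code. A minimal sketch of the logic only (the flight hardware implements this with redundant circuitry, not software):

def majority_vote(a: bool, b: bool, c: bool) -> bool:
    # Output is asserted when at least two of the three error-sense signals agree.
    return (a and b) or (a and c) or (b and c)

assert majority_vote(True, True, False)       # two signals suffice (open-transistor case)
assert not majority_vote(True, False, False)  # a single signal is outvoted

The open-transistor case needs two independent signals, matching the two-of-three rule; how the shorted-transistor case operates with a single signal depends on circuit details the abstract does not give.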
Sources of error in the retracted scientific literature.
Casadevall, Arturo; Steen, R Grant; Fang, Ferric C
2014-09-01
Retraction of flawed articles is an important mechanism for correction of the scientific literature. We recently reported that the majority of retractions are associated with scientific misconduct. In the current study, we focused on the subset of retractions for which no misconduct was identified, in order to identify the major causes of error. Analysis of the retraction notices for 423 articles indexed in PubMed revealed that the most common causes of error-related retraction are laboratory errors, analytical errors, and irreproducible results. The most common laboratory errors are contamination and problems relating to molecular biology procedures (e.g., sequencing, cloning). Retractions due to contamination were more common in the past, whereas analytical errors are now increasing in frequency. A number of publications that have not been retracted despite being shown to contain significant errors suggest that barriers to retraction may impede correction of the literature. In particular, few cases of retraction due to cell line contamination were found despite recognition that this problem has affected numerous publications. An understanding of the errors leading to retraction can guide practices to improve laboratory research and the integrity of the scientific literature. Perhaps most important, our analysis has identified major problems in the mechanisms used to rectify the scientific literature and suggests a need for action by the scientific community to adopt protocols that ensure the integrity of the publication process. © FASEB.
Speech errors of amnesic H.M.: unlike everyday slips-of-the-tongue.
MacKay, Donald G; James, Lori E; Hadley, Christopher B; Fogler, Kethera A
2011-03-01
Three language production studies indicate that amnesic H.M. produces speech errors unlike everyday slips-of-the-tongue. Study 1 was a naturalistic task: H.M. and six controls closely matched for age, education, background and IQ described what makes captioned cartoons funny. Nine judges rated the descriptions blind to speaker identity and gave reliably more negative ratings for coherence, vagueness, comprehensibility, grammaticality, and adequacy of humor-description for H.M. than the controls. Study 2 examined "major errors", a novel type of speech error that is uncorrected and reduces the coherence, grammaticality, accuracy and/or comprehensibility of an utterance. The results indicated that H.M. produced seven types of major errors reliably more often than controls: substitutions, omissions, additions, transpositions, reading errors, free associations, and accuracy errors. These results contradict recent claims that H.M. retains unconscious or implicit language abilities and produces spoken discourse that is "sophisticated," "intact" and "without major errors." Study 3 examined whether three classical types of errors (omissions, additions, and substitutions of words and phrases) differed for H.M. versus controls in basic nature and relative frequency by error type. The results indicated that omissions, and especially multi-word omissions, were relatively more common for H.M. than the controls; and substitutions violated the syntactic class regularity (whereby, e.g., nouns substitute with nouns but not verbs) relatively more often for H.M. than the controls. These results suggest that H.M.'s medial temporal lobe damage impaired his ability to rapidly form new connections between units in the cortex, a process necessary to form complete and coherent internal representations for novel sentence-level plans. In short, different brain mechanisms underlie H.M.'s major errors (which reflect incomplete and incoherent sentence-level plans) versus everyday slips-of-the-tongue (which reflect errors in activating pre-planned units in fully intact sentence-level plans). Implications of the results of Studies 1-3 are discussed for systems theory, binding theory and relational memory theories. Copyright © 2010 Elsevier Srl. All rights reserved.
2009-01-01
Background Increasing reports of carbapenem resistant Acinetobacter baumannii infections are of serious concern. Reliable susceptibility testing results remain a critical issue for the clinical outcome. Automated systems are increasingly used for species identification and susceptibility testing. This study was organized to evaluate the accuracies of three widely used automated susceptibility testing methods for testing the imipenem susceptibilities of A. baumannii isolates, by comparing them with validated test methods. Methods One hundred twelve selected clinical isolates of A. baumannii collected between January 2003 and May 2006 were tested to confirm imipenem susceptibility results. Strains were tested against imipenem by the reference broth microdilution (BMD), disk diffusion (DD), Etest, BD Phoenix, MicroScan WalkAway and Vitek 2 automated systems. Data were analysed by comparing the results from each test method to those produced by the reference BMD test. Results MicroScan performed true identification of all A. baumannii strains, while Vitek 2 failed to identify one strain and Phoenix failed to identify two strains and misidentified two others. Eighty-seven of the strains (78%) were resistant to imipenem by BMD. Etest, Vitek 2 and BD Phoenix produced acceptable error rates when tested against imipenem. Etest showed the best performance with only two minor errors (1.8%). Vitek 2 produced eight minor errors (7.2%). BD Phoenix produced three major errors (2.8%). DD produced two very major errors (1.8%; 0.3% above the acceptable limit) and three major errors (2.7%). MicroScan showed the worst performance in susceptibility testing with unacceptable error rates: 28 very major errors (25%) and 50 minor errors (44.6%). Conclusion Reporting errors for A. baumannii against imipenem do exist in susceptibility testing systems. We suggest that clinical laboratories using the MicroScan system for routine testing should consider using a second, independent antimicrobial susceptibility testing method to validate imipenem susceptibility. Etest, wherever available, may be used as an easy method to confirm imipenem susceptibility. PMID:19291298
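Very major and major error rates of this kind are tallied by comparing each isolate's interpretive category from the test method against the reference BMD call. A minimal Python sketch with invented category pairs (note that denominators vary by convention: VME rates are often reported over resistant isolates only, whereas all isolates are used here for brevity):

# Each pair is (reference BMD category, test method category), coded S/I/R.
pairs = [("R", "S"), ("S", "S"), ("R", "R"), ("S", "R"), ("R", "I"), ("S", "S")]

very_major = sum(1 for ref, test in pairs if ref == "R" and test == "S")  # R called S
major = sum(1 for ref, test in pairs if ref == "S" and test == "R")       # S called R
minor = sum(1 for ref, test in pairs if ref != test and "I" in (ref, test))

n = len(pairs)
print(f"VME {very_major / n:.1%}, ME {major / n:.1%}, minor {minor / n:.1%}")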
Relating physician's workload with errors during radiation therapy planning.
Mazur, Lukasz M; Mosaly, Prithima R; Hoyle, Lesley M; Jones, Ellen L; Chera, Bhishamjit S; Marks, Lawrence B
2014-01-01
To relate subjective workload (WL) levels to errors for routine clinical tasks. Nine physicians (4 faculty and 5 residents) each performed 3 radiation therapy planning cases. The WL levels were subjectively assessed using the National Aeronautics and Space Administration Task Load Index (NASA-TLX). Individual performance was assessed objectively based on the severity grade of errors. The relationship between WL and performance was assessed via ordinal logistic regression. There was an increased rate of severity grade of errors with increasing WL (P value = .02). As the majority of the higher NASA-TLX scores and of the performance errors were among the residents, our findings are likely most pertinent to radiation oncology centers with training programs. WL levels may be an important factor contributing to errors during radiation therapy planning tasks. Published by Elsevier Inc.
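The workload-to-severity relationship described above is an ordinal regression problem, since error severity grades are ordered categories. A minimal sketch of a proportional-odds fit, assuming statsmodels 0.13+ and using entirely invented NASA-TLX scores and severity grades (not the study's data):

import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.DataFrame({
    "tlx": [22, 35, 41, 48, 55, 60, 67, 73, 80, 88],  # hypothetical workload scores
    "severity": [0, 0, 1, 1, 1, 2, 2, 2, 3, 3],       # hypothetical ordinal error grades
})
df["severity"] = pd.Categorical(df["severity"], ordered=True)

model = OrderedModel(df["severity"], df[["tlx"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.params)  # a positive tlx coefficient means higher workload, worse grades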
Spelling Errors of Dyslexic Children in Bosnian Language With Transparent Orthography.
Duranović, Mirela
The purpose of this study was to explore the nature of spelling errors made by children with dyslexia in Bosnian, a language with transparent orthography. Three main error categories were distinguished: phonological, orthographic, and grammatical errors. An analysis of error type showed 86% phonological errors, 10% orthographic errors, and 4% grammatical errors. Furthermore, the majority of errors were omissions and substitutions, followed by insertions, omission of the rules of assimilation by voicing, and errors in the use of suffixes. We can conclude that phonological errors were dominant in children with dyslexia at all grade levels.
Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources
NASA Technical Reports Server (NTRS)
Olson, Corwin; Long, Anne; Carpenter, J. Russell
2011-01-01
The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO) with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.
Neural mechanisms of reinforcement learning in unmedicated patients with major depressive disorder.
Rothkirch, Marcus; Tonn, Jonas; Köhler, Stephan; Sterzer, Philipp
2017-04-01
According to current concepts, major depressive disorder is strongly related to dysfunctional neural processing of motivational information, entailing impairments in reinforcement learning. While computational modelling can reveal the precise nature of neural learning signals, it has not been used to study learning-related neural dysfunctions in unmedicated patients with major depressive disorder so far. We thus aimed at comparing the neural coding of reward and punishment prediction errors, representing indicators of neural learning-related processes, between unmedicated patients with major depressive disorder and healthy participants. To this end, a group of unmedicated patients with major depressive disorder (n = 28) and a group of age- and sex-matched healthy control participants (n = 30) completed an instrumental learning task involving monetary gains and losses during functional magnetic resonance imaging. The two groups did not differ in their learning performance. Patients and control participants showed the same level of prediction error-related activity in the ventral striatum and the anterior insula. In contrast, neural coding of reward prediction errors in the medial orbitofrontal cortex was reduced in patients. Moreover, neural reward prediction error signals in the medial orbitofrontal cortex and ventral striatum showed negative correlations with anhedonia severity. Using a standard instrumental learning paradigm we found no evidence for an overall impairment of reinforcement learning in medication-free patients with major depressive disorder. Importantly, however, the attenuated neural coding of reward in the medial orbitofrontal cortex and the relation between anhedonia and reduced reward prediction error-signalling in the medial orbitofrontal cortex and ventral striatum likely reflect an impairment in experiencing pleasure from rewarding events as a key mechanism of anhedonia in major depressive disorder. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
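The trial-wise reward prediction errors that such fMRI analyses regress against are produced by a simple delta-rule learner. A minimal sketch of the idea, with an invented reward sequence and learning rate (not the study's model or data):

def delta_rule(rewards, alpha=0.1):
    # V tracks the expected reward; delta = r - V is the reward prediction error
    # that model-based fMRI analyses use as a trial-wise regressor.
    V, deltas = 0.0, []
    for r in rewards:
        delta = r - V
        V += alpha * delta
        deltas.append(delta)
    return V, deltas

V, deltas = delta_rule([1, 0, 1, 1, 0, 1])
print(round(V, 3), [round(d, 2) for d in deltas])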
A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.
Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema
2016-01-01
A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are defining, measuring, analyzing, improving, and controlling (DMAIC). Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the preanalytic, analytic and postanalytical phases was analysed. Improvement strategies were put forward in the monthly intradepartmental meetings, and the units with high error rates were brought under closer control. Fifty-six (52.4%) of 107 recorded errors in total were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory achieved a reduction of the error rates mainly in the pre-analytic and analytic phases.
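The sigma-level arithmetic behind the 3.4-errors-per-million target can be reproduced with the standard normal quantile function and the conventional 1.5-sigma shift. A minimal sketch (the per-million figures are from the abstract; whether the laboratory computed sigma levels this way is an assumption):

from statistics import NormalDist

def sigma_level(dpmo: float) -> float:
    # Convert defects per million opportunities to a short-term sigma level,
    # applying the conventional 1.5-sigma long-term shift.
    return NormalDist().inv_cdf(1 - dpmo / 1e6) + 1.5

print(round(sigma_level(3.4), 2))  # ~6.0, the classic Six Sigma target
print(round(sigma_level(6.8), 2))  # first-half error rate from the abstract
print(round(sigma_level(1.3), 2))  # second-half error rate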
Effects of Contextual Sight-Singing and Aural Skills Training on Error-Detection Abilities.
ERIC Educational Resources Information Center
Sheldon, Deborah A.
1998-01-01
Examines the effects of contextual sight-singing and ear training on pitch and rhythm error detection abilities among undergraduate instrumental music education majors. Shows that additional training produced better error detection, particularly with rhythm errors and in one-part examples. Maintains that differences attributable to texture were…
Joint Schemes for Physical Layer Security and Error Correction
ERIC Educational Resources Information Center
Adamo, Oluwayomi
2011-01-01
The major challenges facing resource constraint wireless devices are error resilience, security and speed. Three joint schemes are presented in this research which could be broadly divided into error correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and Nordstrom Robinson code. A…
Understanding EFL Students' Errors in Writing
ERIC Educational Resources Information Center
Phuket, Pimpisa Rattanadilok Na; Othman, Normah Binti
2015-01-01
Writing is the most difficult skill in English, so most EFL students tend to make errors in writing. In assisting the learners to successfully acquire writing skill, the analysis of errors and the understanding of their sources are necessary. This study attempts to explore the major sources of the errors that occur in the writing of EFL students. It…
Association of resident fatigue and distress with perceived medical errors.
West, Colin P; Tan, Angelina D; Habermann, Thomas M; Sloan, Jeff A; Shanafelt, Tait D
2009-09-23
Fatigue and distress have been separately shown to be associated with medical errors. The contribution of each factor when assessed simultaneously is unknown. To determine the association of fatigue and distress with self-perceived major medical errors among resident physicians using validated metrics. Prospective longitudinal cohort study of categorical and preliminary internal medicine residents at Mayo Clinic, Rochester, Minnesota. Data were provided by 380 of 430 eligible residents (88.3%). Participants began training from 2003 to 2008 and completed surveys quarterly through February 2009. Surveys included self-assessment of medical errors, linear analog self-assessment of overall quality of life (QOL) and fatigue, the Maslach Burnout Inventory, the PRIME-MD depression screening instrument, and the Epworth Sleepiness Scale. Frequency of self-perceived, self-defined major medical errors was recorded. Associations of fatigue, QOL, burnout, and symptoms of depression with a subsequently reported major medical error were determined using generalized estimating equations for repeated measures. The mean response rate to individual surveys was 67.5%. Of the 356 participants providing error data (93.7%), 139 (39%) reported making at least 1 major medical error during the study period. In univariate analyses, there was an association of subsequent self-reported error with the Epworth Sleepiness Scale score (odds ratio [OR], 1.10 per unit increase; 95% confidence interval [CI], 1.03-1.16; P = .002) and fatigue score (OR, 1.14 per unit increase; 95% CI, 1.08-1.21; P < .001). Subsequent error was also associated with burnout (ORs per 1-unit change: depersonalization OR, 1.09; 95% CI, 1.05-1.12; P < .001; emotional exhaustion OR, 1.06; 95% CI, 1.04-1.08; P < .001; lower personal accomplishment OR, 0.94; 95% CI, 0.92-0.97; P < .001), a positive depression screen (OR, 2.56; 95% CI, 1.76-3.72; P < .001), and overall QOL (OR, 0.84 per unit increase; 95% CI, 0.79-0.91; P < .001). Fatigue and distress variables remained statistically significant when modeled together with little change in the point estimates of effect. Sleepiness and distress, when modeled together, showed little change in point estimates of effect, but sleepiness no longer had a statistically significant association with errors when adjusted for burnout or depression. Among internal medicine residents, higher levels of fatigue and distress are independently associated with self-perceived medical errors.
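Because the reported associations are odds ratios per unit of each scale, their effect compounds multiplicatively over larger score changes. A small illustration using the fatigue odds ratio of 1.14 per unit from the abstract (the 5-unit change is an arbitrary example):

def odds_ratio_for_change(or_per_unit: float, delta: float) -> float:
    # Logistic-model odds ratios are multiplicative: a delta-unit increase
    # multiplies the odds by OR**delta.
    return or_per_unit ** delta

print(round(odds_ratio_for_change(1.14, 5), 2))  # ~1.93x the odds of a reported error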
de Cueto, Marina; Ceballos, Esther; Martinez-Martinez, Luis; Perea, Evelio J.; Pascual, Alvaro
2004-01-01
In order to further decrease the time lapse between initial inoculation of blood culture media and the reporting of results of identification and antimicrobial susceptibility tests for microorganisms causing bacteremia, we performed a prospective study in which specially processed fluid from positive blood culture bottles from Bactec 9240 (Becton Dickinson, Cockeysville, Md.) containing aerobic media was directly inoculated into Vitek 2 system cards (bio-Mérieux, France). Organism identification and susceptibility results were compared with those obtained from cards inoculated with a standardized bacterial suspension obtained following subculture to agar; 100 consecutive positive monomicrobic blood cultures, consisting of 50 gram-negative rods and 50 gram-positive cocci, were included in the study. For gram-negative organisms, 31 of the 50 (62%) showed complete agreement with the standard method for species identification, while none of the 50 gram-positive cocci were correctly identified by the direct method. For gram-negative rods, there was 50% categorical agreement between the direct and standard methods for all drugs tested. The very major error rate was 2.4%, and the major error rate was 0.6%. The overall error rate for gram-negatives was 6.6%. Complete agreement in clinical categories of all antimicrobial agents evaluated was obtained for 19 of 50 (38%) gram-positive cocci evaluated; the overall error rate was 8.4%, with 2.8% minor errors, 2.4% major errors, and 3.2% very major errors. These findings suggest that the Vitek 2 cards inoculated directly from positive Bactec 9240 bottles do not provide acceptable bacterial identification or susceptibility testing in comparison with corresponding cards tested by a standard method. PMID:15297523
Pan, Hong-Wei; Li, Wei; Li, Rong-Guo; Li, Yong; Zhang, Yi; Sun, En-Hua
2018-01-01
Rapid identification and determination of the antibiotic susceptibility profiles of the infectious agents in patients with bloodstream infections are critical steps in choosing an effective targeted antibiotic for treatment. However, there has been minimal effort focused on developing combined methods for the simultaneous direct identification and antibiotic susceptibility determination of bacteria in positive blood cultures. In this study, we constructed a lysis-centrifugation-wash procedure to prepare a bacterial pellet from positive blood cultures, which can be used directly for identification by matrix-assisted laser desorption/ionization-time-of-flight mass spectrometry (MALDI-TOF MS) and antibiotic susceptibility testing by the Vitek 2 system. The method was evaluated using a total of 129 clinical bacteria-positive blood cultures. The whole sample preparation process could be completed in <15 min. The correct rate of direct MALDI-TOF MS identification was 96.49% for gram-negative bacteria and 97.22% for gram-positive bacteria. Vitek 2 antimicrobial susceptibility testing of gram-negative bacteria showed an agreement rate of antimicrobial categories of 96.89% with a minor error, major error, and very major error rate of 2.63, 0.24, and 0.24%, respectively. Category agreement of antimicrobials against gram-positive bacteria was 92.81%, with a minor error, major error, and very major error rate of 4.51, 1.22, and 1.46%, respectively. These results indicated that our direct antibiotic susceptibility analysis method worked well compared to the conventional culture-dependent laboratory method. Overall, this fast, easy, and accurate method can facilitate the direct identification and antibiotic susceptibility testing of bacteria in positive blood cultures.
Regenbogen, Scott E; Greenberg, Caprice C; Studdert, David M; Lipsitz, Stuart R; Zinner, Michael J; Gawande, Atul A
2007-11-01
To identify the most prevalent patterns of technical errors in surgery, and evaluate commonly recommended interventions in light of these patterns. The majority of surgical adverse events involve technical errors, but little is known about the nature and causes of these events. We examined characteristics of technical errors and common contributing factors among closed surgical malpractice claims. Surgeon reviewers analyzed 444 randomly sampled surgical malpractice claims from four liability insurers. Among 258 claims in which injuries due to error were detected, 52% (n = 133) involved technical errors. These technical errors were further analyzed with a structured review instrument designed by qualitative content analysis. Forty-nine percent of the technical errors caused permanent disability; an additional 16% resulted in death. Two-thirds (65%) of the technical errors were linked to manual error, 9% to errors in judgment, and 26% to both manual and judgment error. A minority of technical errors involved advanced procedures requiring special training ("index operations"; 16%), surgeons inexperienced with the task (14%), or poorly supervised residents (9%). The majority involved experienced surgeons (73%), and occurred in routine, rather than index, operations (84%). Patient-related complexities, including emergencies, difficult or unexpected anatomy, and previous surgery, contributed to 61% of technical errors, and technology or systems failures contributed to 21%. Most technical errors occur in routine operations with experienced surgeons, under conditions of increased patient complexity or systems failure. Commonly recommended interventions, including restricting high-complexity operations to experienced surgeons, additional training for inexperienced surgeons, and stricter supervision of trainees, are likely to address only a minority of technical errors. Surgical safety research should instead focus on improving decision-making and performance in routine operations for complex patients and circumstances.
An Analysis of Errors in Written English Sentences: A Case Study of Thai EFL Students
ERIC Educational Resources Information Center
Sermsook, Kanyakorn; Liamnimit, Jiraporn; Pochakorn, Rattaneekorn
2017-01-01
The purposes of the present study were to examine the language errors in the writing of English major students in a Thai university and to explore the sources of the errors. This study focused mainly on sentences because the researcher found that errors in Thai EFL students' sentence construction may lead to miscommunication. 104 pieces of writing…
A Typology of Errors and Myths Perpetuated in Educational Research Textbooks
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Leech, Nancy L.
2005-01-01
This paper identifies major errors and myths perpetuated by educational research textbooks. The most pervasive errors and myths advanced by methodology textbooks at the following eight phases of the educational research process are described: (a) formulating a research problem/objective; (b) reviewing the literature; (c) developing the research…
Grammatical Errors Produced by English Majors: The Translation Task
ERIC Educational Resources Information Center
Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad
2011-01-01
This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…
CHROMagar Candida Medium for Direct Susceptibility Testing of Yeast from Blood Cultures
Tan, Grace L.; Peterson, Ellena M.
2005-01-01
An evaluation was performed on 95 blood cultures positive for Candida spp. to determine the correlation of direct susceptibility testing of fluconazole versus both standardized disk diffusion and MIC methods. For direct testing, an aliquot taken from BD BACTEC Plus and/or BD BACTEC Lytic/10 bottles (Becton Dickinson [BD], Sparks, MD) positive by gram stain for yeast was subcultured to CHROMagar Candida (BD), and a 25-μg fluconazole disk (BD) was placed on the plate. The area of growth inhibition surrounding the disk was measured at 24 and 48 h. In addition, a subculture of the isolate was tested by a microdilution MIC using YeastOne (TREK Diagnostics Systems Inc., OH) and disk diffusion (NCCLS M44-A) using a standardized inoculum plated onto CHROMagar Candida as well as Mueller-Hinton agar to which 2% glucose and 0.5 μg/ml methylene blue dye was added (MH-GMB). The categorical interpretation derived from the MIC was used as the reference to which the disk diffusion results were compared. There were a total of 41 Candida albicans, 23 Candida glabrata, 20 Candida parapsilosis, 9 Candida tropicalis, and 1 each of Candida krusei and Candida lusitaniae tested. At 24 h there was full agreement among the methods for all C. albicans, C. tropicalis, C. lusitaniae, and C. krusei isolates. For the C. parapsilosis isolates at 24 h there was one very major discrepancy using the direct CHROMagar and one major error with the standardized MH-GMB. The majority of the errors were seen at 24 h with the C. glabrata isolates. Of the 23 C. glabrata isolates at 24 h by direct CHROMagar, there were 10 minor and 1 very major error; by MH-GMB there were 12 minor and 2 very major errors; and by standardized CHROMagar Candida there were 13 minor and 2 major errors. There were no very major errors with C. glabrata when all plates were read at 48 h. At 24 h by the direct and standardized CHROMagar the majority of C. glabrata isolates were more resistant, whereas by MH-GMB they were more susceptible than the reference MIC interpretation. In summary, subculturing yeast directly from blood cultures onto CHROMagar to which a fluconazole disk has been added may provide a presumptive identification at 24 h and, with the exception of C. glabrata, was able to predict the susceptibility to fluconazole with the majority of Candida isolates examined in this evaluation. PMID:15814992
Wang, Peng; Bowler, Sarah L; Kantz, Serena F; Mettus, Roberta T; Guo, Yan; McElheny, Christi L; Doi, Yohei
2016-12-01
Treatment options for infections due to carbapenem-resistant Acinetobacter baumannii are extremely limited. Minocycline is a semisynthetic tetracycline derivative with activity against this pathogen. This study compared susceptibility testing methods that are used in clinical microbiology laboratories (Etest, disk diffusion, and Sensititre broth microdilution methods) for testing of minocycline, tigecycline, and doxycycline against 107 carbapenem-resistant A. baumannii clinical isolates. Susceptibility rates determined with the standard broth microdilution method using cation-adjusted Mueller-Hinton (MH) broth were 77.6% for minocycline and 29% for doxycycline, and 92.5% of isolates had tigecycline MICs of ≤2 μg/ml. Using MH agar from BD and Oxoid, susceptibility rates determined with the Etest method were 67.3% and 52.3% for minocycline, 21.5% and 18.7% for doxycycline, and 71% and 29.9% for tigecycline, respectively. With the disk diffusion method using MH agar from BD and Oxoid, susceptibility rates were 82.2% and 72.9% for minocycline and 34.6% and 34.6% for doxycycline, respectively, and rates of MICs of ≤2 μg/ml were 46.7% and 23.4% for tigecycline. In comparison with the standard broth microdilution results, very major error rates were low (∼2.8%) for all three drugs across the methods, but major error rates were higher (∼5.6%), especially with the Etest method. For minocycline, minor error rates ranged from 14% to 37.4%. For tigecycline, minor error rates ranged from 6.5% to 69.2%. The majority of minor errors were due to susceptible results being reported as intermediate. For minocycline susceptibility testing of carbapenem-resistant A. baumannii strains, very major errors are rare, but major and minor errors overcalling strains as intermediate or resistant occur frequently with susceptibility testing methods that are feasible in clinical laboratories. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
ERIC Educational Resources Information Center
Nuruzzaman, Mohammed; Islam, A. B. M. Shafiqul; Shuchi, Israt Jahan
2018-01-01
The present study investigates the writing errors of ninety Saudi non-English major undergraduate students of different proficiency levels from three faculties, who studied English as a foundation course at the English Language Center in the College of Languages & Translation at King Khalid University, Saudi Arabia in the academic year 2016-17.…
Differences in Error Detection Skills by Band and Choral Preservice Teachers
ERIC Educational Resources Information Center
Stambaugh, Laura A.
2016-01-01
Band and choral preservice teachers (N = 44) studied band and choral scores, listened to recordings of school ensembles, and identified errors in the recordings. Results indicated that preservice teachers identified significantly more errors when listening to recordings of their primary area (band majors listening to band, p = 0.045; choral majors…
Avoiding Substantive Errors in Individualized Education Program Development
ERIC Educational Resources Information Center
Yell, Mitchell L.; Katsiyannis, Antonis; Ennis, Robin Parks; Losinski, Mickey; Christle, Christine A.
2016-01-01
The purpose of this article is to discuss major substantive errors that school personnel may make when developing students' Individualized Education Programs (IEPs). School IEP team members need to understand the importance of the procedural and substantive requirements of the IEP, have an awareness of the five serious substantive errors that IEP…
Pattern of refractive errors among the Nepalese population: a retrospective study.
Shrestha, S P; Bhat, K S; Binu, V S; Barthakur, R; Natarajan, M; Subba, S H
2010-01-01
Refractive errors are a major cause of visual impairment in the population. To find the pattern of refractive errors among patients evaluated in a tertiary care hospital in the western region of Nepal. The present hospital-based retrospective study was conducted in the Department of Ophthalmology of the Manipal Teaching Hospital, situated in Pokhara, Nepal. Patients who had a refractive error of at least 0.5 D (dioptre) were included in the study. During the study period, 15,410 patients attended the outpatient department and 10.8% of the patients were identified as having refractive error. The age of the patients in the present study ranged between 5 and 90 years. Myopia was the commonest refractive error followed by hypermetropia. There was no difference in the frequency of the type of refractive errors when they were defined using the right eye, the left eye, or both eyes. Males predominated among myopics and females predominated among hypermetropics. The majority of spherical errors were less than or equal to 2 D. Astigmatic power above 1 D was rarely seen with hypermetropic astigmatism and was seen in around 13% with myopic astigmatism. "Astigmatism against the rule" was more common than "astigmatism with the rule", irrespective of age. Refractive errors progressively shift toward myopia up to the third decade and toward hypermetropia until the seventh decade. The hyperopic shift in refractive error in young adults should be well noted when planning any refractive surgery in younger patients with myopia. © Nepal Ophthalmic Society.
Farzandipour, Mehrdad; Sheikhtaheri, Abbas
2009-01-01
To evaluate the accuracy of procedural coding and the factors that influence it, 246 records were randomly selected from four teaching hospitals in Kashan, Iran. “Recodes” were assigned blindly and then compared to the original codes. Furthermore, the coders' professional behaviors were carefully observed during the coding process. Coding errors were classified as major or minor. The relations between coding accuracy and potentially influential factors were analyzed by χ2 or Fisher exact tests as well as the odds ratio (OR) and the 95 percent confidence interval for the OR. The results showed that using a tabular index for rechecking codes reduces errors (83 percent vs. 72 percent accuracy). Further, more thorough documentation by the clinician positively affected coding accuracy, though this relation was not significant. Readability of records decreased errors overall (p = .003), including major ones (p = .012). Moreover, records with no abbreviations had fewer major errors (p = .021). In conclusion, not using abbreviations, ensuring more readable documentation, and paying more attention to available information increased coding accuracy and the quality of procedure databases. PMID:19471647
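The odds ratios and 95 percent confidence intervals used in this kind of analysis can be computed from a 2x2 table with the standard log-based (Woolf) method. A minimal sketch with invented counts, not the study's data:

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # Hypothetical 2x2 layout: a, b = accurate/inaccurate codes in readable records;
    # c, d = accurate/inaccurate codes in hard-to-read records.
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(90, 30, 60, 60))  # OR = 3.0, 95% CI roughly (1.74, 5.19)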
Aljasmi, Fatema; Almalood, Fatema
2018-01-01
Background One of the important activities that physicians – particularly general practitioners – perform is prescribing. It occurs in most health care facilities and especially in primary health care (PHC) settings. Objectives This study aims to determine what types of prescribing errors are made in PHC at Bahrain Defence Force (BDF) Hospital, and how common they are. Methods This was a retrospective study of data from PHC at BDF Hospital. The data consisted of 379 prescriptions randomly selected from the pharmacy between March and May 2013, and errors in the prescriptions were classified into five types: major omission, minor omission, commission, integration, and skill-related errors. Results Of the total prescriptions, 54.4% (N=206) were given to male patients and 45.6% (N=173) to female patients; 24.8% were given to patients under the age of 10 years. On average, there were 2.6 drugs per prescription. In the prescriptions, 8.7% of drugs were prescribed by their generic names, and 28% (N=106) of prescriptions included an antibiotic. Out of the 379 prescriptions, 228 had an error, and 44.3% (N=439) of the 992 prescribed drugs contained errors. The proportions of errors were as follows: 9.9% (N=38) were minor omission errors; 73.6% (N=323) were major omission errors; 9.3% (N=41) were commission errors; and 17.1% (N=75) were skill-related errors. Conclusion This study provides awareness of the presence of prescription errors and frequency of the different types of errors that exist in this hospital. Understanding the different types of errors could help future studies explore the causes of specific errors and develop interventions to reduce them. Further research should be conducted to understand the causes of these errors and demonstrate whether the introduction of electronic prescriptions has an effect on patient outcomes. PMID:29445304
ERIC Educational Resources Information Center
Birjandi, Parviz; Siyyari, Masood
2016-01-01
This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…
The Effect of Divided Attention on Inhibiting the Gravity Error
ERIC Educational Resources Information Center
Hood, Bruce M.; Wilson, Alice; Dyson, Sally
2006-01-01
Children who could overcome the gravity error on Hood's (1995) tubes task were tested in a condition where they had to monitor two falling balls. This condition significantly impaired search performance with the majority of mistakes being gravity errors. In a second experiment, the effect of monitoring two balls was compared in the tubes task and…
Boyanova, Lyudmila; Ilieva, Juliana; Gergova, Galina; Mitov, Ivan
2016-01-01
We compared a levofloxacin (1 μg/disk) disk diffusion method with the Etest against 212 Helicobacter pylori strains. Using diameter breakpoints for susceptibility (≥15 mm) and resistance (≤9 mm), the very major error rate, major error rate, and categorical agreement were 0.0%, 0.6%, and 93.9%, respectively. The method may be useful in low-resource laboratories. Copyright © 2016 Elsevier Inc. All rights reserved.
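The diameter breakpoints quoted above translate directly into an interpretive rule. A minimal sketch using the abstract's breakpoints (the function name is ours):

def interpret_levofloxacin_zone(diameter_mm: float) -> str:
    # Susceptible at >= 15 mm, resistant at <= 9 mm, intermediate in between.
    if diameter_mm >= 15:
        return "S"
    if diameter_mm <= 9:
        return "R"
    return "I"

assert interpret_levofloxacin_zone(20) == "S"
assert interpret_levofloxacin_zone(12) == "I"
assert interpret_levofloxacin_zone(7) == "R"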
A Sensitivity Analysis of Circular Error Probable Approximation Techniques
1992-03-01
SENSITIVITY ANALYSIS OF CIRCULAR ERROR PROBABLE APPROXIMATION TECHNIQUES THESIS Presented to the Faculty of the School of Engineering of the Air Force...programming skills. Major Paul Auclair patiently advised me in this endeavor, and Major Andy Howell added numerous insightful contributions. I thank my...techniques. The two most accurate techniques require numerical integration and can take several hours to run on a personal computer [2:1-2,4-6]. Some
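For context, two widely quoted closed-form CEP approximations are easy to state; the exact value for elliptical error distributions requires the numerical integration the excerpt mentions. A sketch under the usual bivariate-normal assumptions (the 0.562/0.615 coefficients are a commonly cited approximation, not necessarily among the techniques compared in this thesis):

import math

def cep_circular(sigma: float) -> float:
    # Exact for a circular bivariate normal: the median miss radius
    # is sigma * sqrt(2 ln 2) ~= 1.1774 * sigma.
    return sigma * math.sqrt(2 * math.log(2))

def cep_elliptical_approx(sigma_x: float, sigma_y: float) -> float:
    # Common closed-form approximation, reasonable when the smaller sigma
    # is at least ~0.3 of the larger one.
    hi, lo = max(sigma_x, sigma_y), min(sigma_x, sigma_y)
    return 0.562 * hi + 0.615 * lo

print(round(cep_circular(10.0), 2))                # ~11.77
print(round(cep_elliptical_approx(10.0, 6.0), 2))  # ~9.31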
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terezakis, Stephanie A., E-mail: stereza1@jhmi.edu; Harris, Kendra M.; Ford, Eric
Purpose: Systems to ensure patient safety are of critical importance. The electronic incident reporting systems (IRS) of 2 large academic radiation oncology departments were evaluated for events that may be suitable for submission to a national reporting system (NRS). Methods and Materials: All events recorded in the combined IRS were evaluated from 2007 through 2010. Incidents were graded for potential severity using the validated French Nuclear Safety Authority (ASN) 5-point scale. These incidents were categorized into 7 groups: (1) human error, (2) software error, (3) hardware error, (4) error in communication between 2 humans, (5) error at the human-software interface, (6) error at the software-hardware interface, and (7) error at the human-hardware interface. Results: Between the 2 systems, 4407 incidents were reported. Of these events, 1507 (34%) were considered to have the potential for clinical consequences. Of these 1507 events, 149 (10%) were rated as having a potential severity of ≥2. Of these 149 events, the committee determined that 79 (53%) of these events would be submittable to a NRS, of which the majority was related to human error or to the human-software interface. Conclusions: A significant number of incidents were identified in this analysis. The majority of events in this study were related to human error and to the human-software interface, further supporting the need for a NRS to facilitate field-wide learning and system improvement.
Mogull, Scott A
2017-01-01
Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly major errors (64.8%; 56.1% to 73.5% at a 95% confidence interval), that is, cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, account for 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval). PMID:28910404
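The confidence intervals quoted above are of the standard normal-approximation form for a proportion. A minimal sketch with invented counts chosen only to land near the abstract's headline rate:

import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96):
    # Wald 95% interval: p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n).
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Hypothetical: 42 quotation errors among 290 quotations examined.
lo, hi = proportion_ci(42 / 290, 290)
print(f"{42 / 290:.1%} ({lo:.1%} to {hi:.1%})")  # 14.5% (10.4% to 18.5%)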
Detection of Common Errors in Turkish EFL Students' Writing through a Corpus Analytic Approach
ERIC Educational Resources Information Center
Demirel, Elif Tokdemir
2017-01-01
The present study aims to explore Turkish EFL students' major writing difficulties by analyzing the frequent writing errors in academic essays. Accordingly, the study examined errors in a corpus of 150 academic essays written by Turkish EFL students studying at the Department of English Language and Literature at a public university in Turkey. The…
The Effects of Non-Normality on Type III Error for Comparing Independent Means
ERIC Educational Resources Information Center
Mendes, Mehmet
2007-01-01
The major objective of this study was to investigate the effects of non-normality on Type III error rates for the ANOVA F test and its three commonly recommended parametric counterparts, namely the Welch, Brown-Forsythe, and Alexander-Govern tests. Therefore these tests were compared in terms of Type III error rates across a variety of population distributions,…
Quantification of the Uncertainties for the Ares I A106 Ascent Aerodynamic Database
NASA Technical Reports Server (NTRS)
Houlden, Heather P.; Favaregh, Amber L.
2010-01-01
A detailed description of the quantification of uncertainties for the Ares I ascent aero 6-DOF wind tunnel database is presented. The database was constructed from wind tunnel test data and CFD results. The experimental data came from tests conducted in the Boeing Polysonic Wind Tunnel in St. Louis and the Unitary Plan Wind Tunnel at NASA Langley Research Center. The major sources of error for this database were: experimental error (repeatability), database modeling errors, and database interpolation errors.
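When independent error sources like these are rolled up into a single database uncertainty, a root-sum-square combination is the usual approach. A minimal sketch with invented magnitudes (whether the Ares I database combined its terms this way is an assumption):

import math

def rss(*components: float) -> float:
    # Root-sum-square of independent 1-sigma error components.
    return math.sqrt(sum(c * c for c in components))

experimental, modeling, interpolation = 0.010, 0.020, 0.005  # hypothetical values
print(round(rss(experimental, modeling, interpolation), 4))  # 0.0229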
Patient safety in otolaryngology: a descriptive review.
Danino, Julian; Muzaffar, Jameel; Metcalfe, Chris; Coulson, Chris
2017-03-01
Human evaluation and judgement may include errors that can have disastrous results. Within medicine and healthcare there has been slow progress towards major changes in safety. Healthcare lags behind other specialised industries, such as aviation and nuclear power, where there have been significant improvements in overall safety, especially in reducing the risk of errors. Following several high-profile cases in the USA during the 1990s, a report titled "To Err Is Human: Building a Safer Health System" was published. The report extrapolated that in the USA approximately 50,000 to 100,000 patients may die each year as a result of medical errors. Traditionally, otolaryngology has always been regarded as a "safe specialty". A study in the USA in 2004 inferred that there may be 2600 cases of major morbidity and 165 deaths within the specialty. MEDLINE was searched via the PubMed interface for English-language articles published between 2000 and 2012. Each search combined two or three of the keywords noted earlier. Limitations relate to several generic topics within patient safety in otolaryngology. Other areas covered are current relevant topics of recent interest or reflecting new advances in technology. There has been a heightened awareness within the healthcare community of patient safety; it has become a major priority. Focus has shifted from apportioning blame to prevention of errors and implementation of patient safety mechanisms in healthcare delivery. Types of errors can be divided into errors of action and errors of knowledge or planning. In healthcare there are several factors that may influence adverse events and patient safety. Although technology may improve patient safety, it also introduces new sources of error. The ability to work well with others allows for greater safety netting. Team working has been shown to have a beneficial effect on patient safety. Any field of work involving human decision-making will always have a risk of error. Within otolaryngology, although patient safety has evolved along themes similar to those of other surgical specialties, there are several specific high-risk areas. Medical error is a common problem and its human cost is of immense importance. Steps to reduce such errors require the identification of high-risk practice within a complex healthcare system. The commitment to patient safety and quality improvement in medicine depends on personal responsibility and professional accountability.
A study for systematic errors of the GLA forecast model in tropical regions
NASA Technical Reports Server (NTRS)
Chen, Tsing-Chang; Baker, Wayman E.; Pfaendtner, James; Corrigan, Martin
1988-01-01
From the sensitivity studies performed with the Goddard Laboratory for Atmospheres (GLA) analysis/forecast system, it was revealed that the forecast errors in the tropics affect the ability to forecast midlatitude weather in some cases. Apparently, the forecast errors occurring in the tropics can propagate to midlatitudes. Therefore, the systematic error analysis of the GLA forecast system becomes a necessary step in improving the model's forecast performance. The major effort of this study is to examine the possible impact of the hydrological-cycle forecast error on dynamical fields in the GLA forecast system.
Accuracy of references and quotations in veterinary journals.
Hinchcliff, K W; Bruce, N J; Powers, J D; Kipp, M L
1993-02-01
The accuracy of references and quotations used to substantiate statements of fact in articles published in 6 frequently cited veterinary journals was examined. Three hundred references were randomly selected, and the accuracy of each citation was examined. A subset of 100 references was examined for quotational accuracy; ie, the accuracy with which authors represented the work or assertions of the author being cited. Of the 300 references selected, 295 were located, and 125 major errors were found in 88 (29.8%) of them. Sixty-seven (53.6%) major errors were found involving authors, 12 (9.6%) involved the article title, 14 (11.2%) involved the book or journal title, and 32 (25.6%) involved the volume number, date, or page numbers. Sixty-eight minor errors were detected. The accuracy of 111 quotations from 95 citations in 65 articles was examined. Nine quotations were technical and not classified, 86 (84.3%) were classified as correct, 2 (1.9%) contained minor misquotations, and 14 (13.7%) contained major misquotations. We concluded that misquotations and errors in citations occur frequently in veterinary journals, but at a rate similar to that reported for other biomedical journals.
Atmospheric density determination using high-accuracy satellite GPS data
NASA Astrophysics Data System (ADS)
Tingling, R.; Miao, J.; Liu, S.
2017-12-01
Atmospheric drag is the main error source in the orbit determination and prediction of low Earth orbit (LEO) satellites; however, empirical models which are used to account for the atmosphere often exhibit density errors of around 15-30%. Atmospheric density determination has thus become an important topic for atmospheric researchers. Based on the relation between the atmospheric drag force and the decay of the orbital semi-major axis, we derived atmospheric density along the trajectory of CHAMP with its Rapid Science Orbit (RSO) data. Three primary parameters are calculated, including the ratio of cross-sectional area to mass, the drag coefficient, and the decay of the semi-major axis caused by atmospheric drag. We also analyzed the sources of error and made a comparison between GPS-derived and reference density. Results for 2 Dec 2008 show that the mean error of the GPS-derived density can decrease from 29.21% to 9.20% when the time span adopted in the computation increases from 10 min to 50 min. Results for the whole of December indicate that when the time span meets the condition that the amplitude of the decay of the semi-major axis is much greater than its standard deviation, a density precision of 10% can be achieved.
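For a near-circular orbit, drag-only decay of the semi-major axis obeys da/dt = -sqrt(mu*a) * rho * (Cd*A/m), which can be inverted for the mean density along an arc. A minimal sketch of that inversion with rough CHAMP-like numbers (all values invented; this shows the general idea, not the paper's exact estimator):

import math

MU = 3.986004418e14  # Earth's GM, m^3/s^2

def density_from_decay(a: float, da_dt: float, cd_a_over_m: float) -> float:
    # Invert da/dt = -sqrt(MU * a) * rho * (Cd * A / m) for rho.
    return -da_dt / (math.sqrt(MU * a) * cd_a_over_m)

a = 6.740e6              # semi-major axis, m (~360 km altitude)
da_dt = -25.0 / 86400.0  # decay of 25 m/day expressed in m/s
cd_a_over_m = 0.006      # Cd * A / m, m^2/kg
print(density_from_decay(a, da_dt, cd_a_over_m))  # ~9e-13 kg/m^3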
Cassidy, Nicola; Duggan, Edel; Williams, David J P; Tracey, Joseph A
2011-07-01
Medication errors are widely reported for hospitalised patients, but limited data are available for medication errors that occur in community-based and clinical settings. Epidemiological data from poisons information centres enable characterisation of trends in medication errors occurring across the healthcare spectrum. The objective of this study was to characterise the epidemiology and type of medication errors reported to the National Poisons Information Centre (NPIC) of Ireland. A 3-year prospective study on medication errors reported to the NPIC was conducted from 1 January 2007 to 31 December 2009 inclusive. Data on patient demographics, enquiry source, location, pharmaceutical agent(s), type of medication error, and treatment advice were collated from standardised call report forms. Medication errors were categorised as (i) prescribing error (i.e. physician error), (ii) dispensing error (i.e. pharmacy error), and (iii) administration error involving the wrong medication, the wrong dose, wrong route, or the wrong time. Medication errors were reported for 2348 individuals, representing 9.56% of total enquiries to the NPIC over 3 years. In total, 1220 children and adolescents under 18 years of age and 1128 adults (≥ 18 years old) experienced a medication error. The majority of enquiries were received from healthcare professionals, but members of the public accounted for 31.3% (n = 736) of enquiries. Most medication errors occurred in a domestic setting (n = 2135), but a small number occurred in healthcare facilities: nursing homes (n = 110, 4.68%), hospitals (n = 53, 2.26%), and general practitioner surgeries (n = 32, 1.36%). In children, medication errors with non-prescription pharmaceuticals predominated (n = 722) and anti-pyretics and non-opioid analgesics, anti-bacterials, and cough and cold preparations were the main pharmaceutical classes involved. Medication errors with prescription medication predominated for adults (n = 866) and the major medication classes included anti-pyretics and non-opioid analgesics, psychoanaleptics, and psycholeptic agents. Approximately 97% (n = 2279) of medication errors were as a result of drug administration errors (comprising a double dose [n = 1040], wrong dose [n = 395], wrong medication [n = 597], wrong route [n = 133], and wrong time [n = 110]). Prescribing and dispensing errors accounted for 0.68% (n = 16) and 2.26% (n = 53) of errors, respectively. Empirical data from poisons information centres facilitate the characterisation of medication errors occurring in the community and across the healthcare spectrum. Poison centre data facilitate the detection of subtle trends in medication errors and can contribute to pharmacovigilance. Collaboration between pharmaceutical manufacturers, consumers, medical, and regulatory communities is needed to advance patient safety and reduce medication errors.
Method of estimating natural recharge to the Edwards Aquifer in the San Antonio area, Texas
Puente, Celso
1978-01-01
The principal errors in the estimates of annual recharge are related to errors in estimating runoff in ungaged areas, which represent about 30 percent of the infiltration area. The estimated long-term average annual recharge in each basin, however, is probably representative of the actual recharge because the averaging procedure tends to cancel out the major errors.
Analysis of Errors and Misconceptions in the Learning of Calculus by Undergraduate Students
ERIC Educational Resources Information Center
Muzangwa, Jonatan; Chifamba, Peter
2012-01-01
This paper analyses errors and misconceptions in an undergraduate course in Calculus. The study is based on a group of 10 BEd. Mathematics students at Great Zimbabwe University. Data were gathered through the use of two exercises on Calculus 1 and 2. The analysis of the results from the tests showed that a majority of the errors were due…
ERIC Educational Resources Information Center
Alhaisoni, Eid M.; Al-Zuoud, Khalid M.; Gaudel, Daya Ram
2015-01-01
This study reports the types of spelling errors made by the beginner learners of English in the EFL context as well as the major sources underpinning such errors in contextual writing composition tasks. Data were collected from written samples of 122 EFL students (male and female) enrolled in the intensive English language programme during the…
Uncorrected and corrected refractive error experiences of Nepalese adults: a qualitative study.
Kandel, Himal; Khadka, Jyoti; Shrestha, Mohan Krishna; Sharma, Sadhana; Neupane Kandel, Sandhya; Dhungana, Purushottam; Pradhan, Kishore; Nepal, Bhagavat P; Thapa, Suman; Pesudovs, Konrad
2018-04-01
The aim of this study was to explore the impact of corrected and uncorrected refractive error (URE) on Nepalese people's quality of life (QoL), and to compare QoL status between refractive error subgroups. Participants were recruited from Tilganga Institute of Ophthalmology and Dhulikhel Hospital, Nepal. Semi-structured in-depth interviews were conducted with 101 people with refractive error. Thematic analysis was used, with matrices produced to compare the occurrence of themes and categories across participants. Themes were identified using an inductive approach. Seven major themes emerged that determined refractive error-specific QoL: activity limitation, inconvenience, health concerns, psycho-social impact, economic impact, general and ocular comfort symptoms, and visual symptoms. Activity limitation, economic impact, and symptoms were the most important themes for participants with URE, whereas the inconvenience associated with wearing glasses was the most important issue among glasses wearers. Similarly, the possibility of side effects or complications was the major concern for participants wearing contact lenses. In general, refractive surgery addressed the socio-emotional impact of wearing glasses or contact lenses; however, surgery participants had concerns such as the possibility of having to wear glasses again owing to relapse of the refractive error. The impact of refractive error on people's QoL is multifaceted, and the significance of the identified themes varies by refractive error subgroup. Refractive correction may not always address the QoL impact of URE and often adds unique QoL issues. These findings also provide content for developing an item bank for quantitatively measuring refractive error-specific QoL in a developing-country setting.
Human Reliability and the Cost of Doing Business
NASA Technical Reports Server (NTRS)
DeMott, Diana
2014-01-01
Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but does it need to be? Companies with high risk, or major consequences, should consider the effect of human error. Human errors have caused costly failures and workplace injuries across a variety of industries: airline mishaps, medical malpractice, medication administration errors and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is Human Reliability Analysis (HRA). Various methodologies are available to perform Human Reliability Assessments, ranging from identifying the most likely areas for concern to detailed assessments with calculated human error failure probabilities. Which methodology to use depends on a variety of factors, including: 1) how people react and act in different industries, and differing expectations based on industry standards; 2) factors that influence how human errors could occur, such as tasks, tools, environment, workplace, support, training and procedures; 3) the type and availability of data; and 4) how the industry views risk and reliability influences (types of emergencies, contingencies and routine tasks versus cost-based concerns). A Human Reliability Assessment should be the first step to reduce, mitigate or eliminate costly mistakes or catastrophic failures. Using Human Reliability techniques to identify and classify human error risks gives a company more opportunities to mitigate or eliminate these risks and prevent costly failures.
Shulman, Rob; Singer, Mervyn; Goldstone, John; Bellingan, Geoff
2005-10-05
The study aimed to compare the impact of computerised physician order entry (CPOE) without decision support with hand-written prescribing (HWP) on the frequency, type and outcome of medication errors (MEs) in the intensive care unit. Details of MEs were collected before, and at several time points after, the change from HWP to CPOE. The study was conducted in a London teaching hospital's 22-bedded general ICU. The sampling periods were 28 weeks before and 2, 10, 25 and 37 weeks after introduction of CPOE. The unit pharmacist prospectively recorded details of MEs and the total number of drugs prescribed daily during the data collection periods, during the course of his normal chart review. The total proportion of MEs was significantly lower with CPOE (117 errors from 2429 prescriptions, 4.8%) than with HWP (69 errors from 1036 prescriptions, 6.7%) (p < 0.04). The proportion of errors reduced with time following the introduction of CPOE (p < 0.001). Two errors with CPOE led to patient harm requiring an increase in length of stay and, if administered, three prescriptions with CPOE could potentially have led to permanent harm or death. Differences in the types of error between systems were noted. There was a reduction in major/moderate patient outcomes with CPOE when non-intercepted and intercepted errors were combined (p = 0.01). The mean baseline APACHE II score did not differ significantly between the HWP and the CPOE periods (19.4 versus 20.0, respectively, p = 0.71). Introduction of CPOE was associated with a reduction in the proportion of MEs and an improvement in the overall patient outcome score (if intercepted errors were included). Moderate and major errors, however, remain a significant concern with CPOE.
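The abstract does not state which statistical test produced the p < 0.04 result; as a sketch, the headline comparison of error proportions can be reproduced with a standard chi-square test on the reported counts:

```python
from scipy.stats import chi2_contingency

# Reported counts: CPOE 117 errors / 2429 prescriptions, HWP 69 / 1036
table = [[117, 2429 - 117],
         [69, 1036 - 69]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # consistent with the reported p < 0.04
```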
Quantifying uncertainty in carbon and nutrient pools of coarse woody debris
NASA Astrophysics Data System (ADS)
See, C. R.; Campbell, J. L.; Fraver, S.; Domke, G. M.; Harmon, M. E.; Knoepp, J. D.; Woodall, C. W.
2016-12-01
Woody detritus constitutes a major pool of both carbon and nutrients in forested ecosystems. Estimating coarse wood stocks relies on many assumptions, even when full surveys are conducted. Researchers rarely report error in coarse wood pool estimates, despite the importance to ecosystem budgets and modelling efforts. To date, no study has attempted a comprehensive assessment of error rates and uncertainty inherent in the estimation of this pool. Here, we use Monte Carlo analysis to propagate the error associated with the major sources of uncertainty present in the calculation of coarse wood carbon and nutrient (i.e., N, P, K, Ca, Mg, Na) pools. We also evaluate individual sources of error to identify the importance of each source of uncertainty in our estimates. We quantify sampling error by comparing the three most common field methods used to survey coarse wood (two transect methods and a whole-plot survey). We quantify the measurement error associated with length and diameter measurement, and technician error in species identification and decay class using plots surveyed by multiple technicians. We use previously published values of model error for the four most common methods of volume estimation: Smalian's, conical frustum, conic paraboloid, and average-of-ends. We also use previously published values for error in the collapse ratio (cross-sectional height/width) of decayed logs that serves as a surrogate for the volume remaining. We consider sampling error in chemical concentration and density for all decay classes, using distributions from both published and unpublished studies. Analytical uncertainty is calculated using standard reference plant material from the National Institute of Standards and Technology. Our results suggest that technician error in decay classification can have a large effect on uncertainty, since many of the error distributions included in the calculation (e.g. density, chemical concentration, volume-model selection, collapse ratio) are decay-class specific.
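A minimal sketch of this kind of Monte Carlo propagation for a single log's carbon pool; the error distributions below are illustrative assumptions, not the study's decay-class-specific values:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # Monte Carlo draws

# Hypothetical inputs for one log (illustrative values only):
volume = rng.normal(0.25, 0.03, N)    # m^3, volume-model error
density = rng.normal(250.0, 40.0, N)  # kg/m^3, decay-class density error
c_frac = rng.normal(0.48, 0.02, N)    # kg C / kg wood, analytical error

carbon = volume * density * c_frac    # kg C per log, per draw
lo, hi = np.percentile(carbon, [2.5, 97.5])
print(f"C pool: {carbon.mean():.1f} kg (95% interval {lo:.1f}-{hi:.1f})")
```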
DOT National Transportation Integrated Search
2012-10-22
Recent accidents in commuter rail operations and analyses of rule violations have highlighted the need for better understanding of the contributory role of distraction and attentional errors. Distracted driving has thoroughly been studied in rece...
Hatcher, Irene; Sullivan, Mark; Hutchinson, James; Thurman, Susan; Gaffney, F Andrew
2004-10-01
Improving medication safety at the point of care--particularly for high-risk drugs--is a major concern of nursing administrators. The medication errors most likely to cause harm are administration errors related to infusion of high-risk medications. An intravenous medication safety system is designed to prevent high-risk infusion medication errors and to capture continuous quality improvement data for best practice improvement. Initial testing with 50 systems in 2 units at Vanderbilt University Medical Center revealed that, even in the presence of a fully mature computerized prescriber order-entry system, the new safety system averted 99 potential infusion errors in 8 months.
ERIC Educational Resources Information Center
Ludtke, Oliver; Marsh, Herbert W.; Robitzsch, Alexander; Trautwein, Ulrich
2011-01-01
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data…
Detection of Methicillin-Resistant Coagulase-Negative Staphylococci by the Vitek 2 System
Johnson, Kristen N.; Andreacchio, Kathleen
2014-01-01
The accurate performance of the Vitek 2 GP67 card for detecting methicillin-resistant coagulase-negative staphylococci (CoNS) is not known. We prospectively determined the ability of the Vitek 2 GP67 card to accurately detect methicillin-resistant CoNS, with mecA PCR results used as the gold standard for a 4-month period in 2012. Included in the study were 240 consecutively collected nonduplicate CoNS isolates. Cefoxitin susceptibility by disk diffusion testing was determined for all isolates. We found that the three tested systems, Vitek 2 oxacillin and cefoxitin testing and cefoxitin disk susceptibility testing, lacked specificity and, in some cases, sensitivity for detecting methicillin resistance. The Vitek 2 oxacillin and cefoxitin tests had very major error rates of 4% and 8%, respectively, and major error rates of 38% and 26%, respectively. Disk cefoxitin testing gave the best performance, with very major and major error rates of 2% and 24%, respectively. The test performances were species dependent, with the greatest errors found for Staphylococcus saprophyticus. While the 2014 CLSI guidelines recommend reporting isolates that test resistant by the oxacillin MIC or cefoxitin disk test as oxacillin resistant, following such guidelines produces erroneous results, depending on the test method and bacterial species tested. Vitek 2 cefoxitin testing is not an adequate substitute for cefoxitin disk testing. For critical-source isolates, mecA PCR, rather than Vitek 2 or cefoxitin disk testing, is required for optimal antimicrobial therapy. PMID:24951799
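For readers unfamiliar with the error terminology used throughout these studies, a sketch of how very major and major error rates are computed against a mecA PCR gold standard (the data layout and names are illustrative):

```python
def vme_me_rates(records):
    """records: (mecA_positive, test_reports_resistant) pairs."""
    resistant = [t for m, t in records if m]        # mecA-positive isolates
    susceptible = [t for m, t in records if not m]  # mecA-negative isolates
    vme = resistant.count(False) / len(resistant)    # false-susceptible rate
    me = susceptible.count(True) / len(susceptible)  # false-resistant rate
    return vme, me

# Toy example: 2 of 50 resistant isolates missed, 12 of 50 susceptible flagged
records = [(True, True)] * 48 + [(True, False)] * 2 \
        + [(False, False)] * 38 + [(False, True)] * 12
print(vme_me_rates(records))  # (0.04, 0.24), i.e. 4% VME and 24% ME
```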
NASA Astrophysics Data System (ADS)
Lee, Minho; Cho, Nahm-Gyoo
2013-09-01
A new probing and compensation method is proposed to improve the three-dimensional (3D) measuring accuracy of 3D shapes, including irregular surfaces. A new tactile coordinate measuring machine (CMM) probe with a five-degree-of-freedom (5-DOF) force/moment sensor using carbon fiber plates was developed. The proposed method efficiently removes the anisotropic sensitivity error and decreases the stylus deformation and actual contact point estimation errors, which are major error components of shape measurement using touch probes. The relationship between the measuring force and the estimation accuracy of the actual contact point error and stylus deformation error is examined for practical use of the proposed method. An appropriate measuring-force condition is presented for precision measurement.
NASA Astrophysics Data System (ADS)
Yamazaki, D.; Ikeshima, D.; Neal, J. C.; O'Loughlin, F.; Sampson, C. C.; Kanae, S.; Bates, P. D.
2017-12-01
Digital Elevation Models (DEMs) are fundamental data for flood modelling. While precise airborne DEMs are available in developed regions, most parts of the world rely on spaceborne DEMs, which include non-negligible height errors. Here we present the most accurate global DEM to date, at 90 m resolution, obtained by eliminating major error components from the SRTM and AW3D DEMs. Using multiple satellite data sets and multiple filtering techniques, we removed absolute bias, stripe noise, speckle noise and tree height bias from the spaceborne DEMs. After the error removal, significant improvements were found in flat regions where height errors were larger than the topography variability, and landscape features such as river networks and hill-valley structures became clearly represented. We found the topographic slope of the previous DEMs was largely distorted in most of the world's major floodplains (e.g. Ganges, Nile, Niger, Mekong) and swamp forests (e.g. Amazon, Congo, Vasyugan). The developed DEM will largely reduce the uncertainty in both global and regional flood modelling.
Social aspects of clinical errors.
Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave
2009-08-01
Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording, and policy development to enhance quality of service. Anecdotally, we are aware of narratives of minor errors which may well have been covered up and remain officially undisclosed, whilst major errors resulting in damage or death to patients alarm both professionals and the public, with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review healthcare professionals' strategies for managing such errors.
Naik, Aanand Dinkar; Rao, Raghuram; Petersen, Laura Ann
2008-01-01
Diagnostic errors are poorly understood despite being a frequent cause of medical errors. Recent efforts have aimed to advance the "basic science" of diagnostic error prevention by tracing errors to their most basic origins. Although a refined theory of diagnostic error prevention will take years to formulate, we focus on communication breakdown, a major contributor to diagnostic errors and an increasingly recognized preventable factor in medical mishaps. We describe a comprehensive framework that integrates the potential sources of communication breakdowns within the diagnostic process and identifies vulnerable steps in the diagnostic process where various types of communication breakdowns can precipitate error. We then discuss potential information technology-based interventions that may have efficacy in preventing one or more forms of these breakdowns. These possible intervention strategies include using new technologies to enhance communication between health providers and health systems, improve patient involvement, and facilitate management of information in the medical record. PMID:18373151
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt
2016-08-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, for example, that volume flow is underestimated by 15% when the scan plane is off-axis from the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipsis to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p = 0.008) reduction in error, from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.
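As a back-of-the-envelope check on the reported geometry, a sketch of how much a circular-area assumption underestimates an elliptical cross-section; assuming the circular estimate uses the minor-axis diameter is our choice for illustration, not stated in the abstract:

```python
import math

major = 10.2e-3        # m, mean major axis from the abstract
minor = major / 1.086  # 8.6% smaller, as reported

area_ellipse = math.pi * (major / 2) * (minor / 2)
area_circle = math.pi * (minor / 2) ** 2  # circle from the minor axis

underestimate = 1 - area_circle / area_ellipse  # equals 1 - minor/major
print(f"area (and hence flow) underestimated by {underestimate:.1%}")  # ~7.9%
```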
Realtime mitigation of GPS SA errors using Loran-C
NASA Technical Reports Server (NTRS)
Braasch, Soo Y.
1994-01-01
The hybrid use of Loran-C with the Global Positioning System (GPS) was shown to be capable of providing a sole means of en route air radionavigation. By allowing pilots to fly direct to their destinations, use of this system results in significant time savings and therefore fuel savings as well. However, a major error source limiting the accuracy of GPS is the intentional degradation of the GPS signal known as Selective Availability (SA). SA-induced position errors are highly correlated and far exceed all other error sources (horizontal position error: 100 meters, 95 percent). Realtime mitigation of SA errors from the position solution is highly desirable, and how that can be achieved is discussed. The stability of Loran-C signals is exploited to reduce SA errors. The theory behind this technique is discussed and results using bench and flight data are given.
Managerial process improvement: a lean approach to eliminating medication delivery errors.
Hussain, Aftab; Stewart, LaShonda M; Rivers, Patrick A; Munchus, George
2015-01-01
Statistical evidence shows that medication errors are a major cause of injuries that concern all health care organizations. Despite all the efforts to improve the quality of care, the lack of understanding and the inability of management to design a robust system that strategically targets those factors is a major cause of distress. The paper aims to discuss these issues. Achieving optimum organizational performance requires two key variables: work process factors and human performance factors. The approach is that healthcare administrators must take both variables into account in designing a strategy to reduce medication errors. However, strategies to combat such phenomena require that managers and administrators understand the key factors causing medication delivery errors. The authors recommend that healthcare organizations implement the Toyota Production System (TPS) combined with human performance improvement (HPI) methodologies to eliminate medication delivery errors in hospitals. Despite all the efforts to improve the quality of care, there continues to be a lack of understanding and an inability of management to design a robust system that strategically targets those factors associated with medication errors. This paper proposes a solution to an ambiguous workflow process using the TPS combined with the HPI system.
Kedir, Jafer; Girma, Abonesh
2014-10-01
Refractive error is one of the major causes of blindness and visual impairment in children, but community-based studies are scarce, especially in rural parts of Ethiopia. This study therefore aims to assess the prevalence of refractive error and its magnitude as a cause of visual impairment among school-age children of a rural community. This community-based cross-sectional descriptive study was conducted from March 1 to April 30, 2009 in rural villages of Goro district of Gurage Zone, southwest of Addis Ababa, the capital of Ethiopia. A multistage cluster sampling method was used with simple random selection of representative villages in the district. Chi-square and t-tests were used in the data analysis. A total of 570 school-age children (ages 7-15) were evaluated, 54% boys and 46% girls. The prevalence of refractive error was 3.5% (myopia 2.6% and hyperopia 0.9%). Refractive error was the major cause of visual impairment, accounting for 54% of all causes in the study group. No child was found wearing corrective spectacles during the study period. Refractive error was the commonest cause of visual impairment in children of the district, but no measures were being taken to reduce the burden in the community. Large-scale community-level screening for refractive error should therefore be conducted and integrated with regular school eye screening programs, and effective strategies need to be devised to provide low-cost corrective spectacles in the rural community.
Errors in causal inference: an organizational schema for systematic error and random error.
Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji
2016-11-01
To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Watanabe, Y.; Abe, S.
2014-06-01
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions provide a major impact on soft errors with decreasing critical charge. It is also found that the high energy component from 10 MeV up to several hundreds of MeV in secondary cosmic-ray neutrons has the most significant source of soft errors regardless of design rule.
Amplify Errors to Minimize Them
ERIC Educational Resources Information Center
Stewart, Maria Shine
2009-01-01
In this article, the author offers her experience of modeling mistakes and writing spontaneously in the computer classroom to get students' attention and elicit their editorial response. She describes how she taught her class about major sentence errors--comma splices, run-ons, and fragments--through her Sentence Meditation exercise, a rendition…
Errors in imaging patients in the emergency setting
Pinto, Antonio; Reginelli, Alfonso; Pinto, Fabio; Lo Re, Giuseppe; Midiri, Federico; Muzj, Carlo; Romano, Luigia; Brunese, Luca
2016-01-01
Emergency and trauma care produces a “perfect storm” for radiological errors: uncooperative patients, inadequate histories, time-critical decisions, concurrent tasks and often junior personnel working after hours in busy emergency departments. The main cause of diagnostic errors in the emergency department is the failure to correctly interpret radiographs, and the majority of diagnoses missed on radiographs are fractures. Missed diagnoses potentially have important consequences for patients, clinicians and radiologists. Radiologists play a pivotal role in the diagnostic assessment of polytrauma patients and of patients with non-traumatic craniothoracoabdominal emergencies, and key elements to reduce errors in the emergency setting are knowledge, experience and the correct application of imaging protocols. This article aims to highlight the definition and classification of errors in radiology, the causes of errors in emergency radiology and the spectrum of diagnostic errors in radiography, ultrasonography and CT in the emergency setting. PMID:26838955
NASA Technical Reports Server (NTRS)
Buechler, W.; Tucker, A. G.
1981-01-01
Several methods were employed to detect both the occurrence and the source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in successfully finding the primary cause of error in 98% of over 500 system dumps.
Educational agenda for diagnostic error reduction
Trowbridge, Robert L; Dhaliwal, Gurpreet; Cosby, Karen S
2013-01-01
Diagnostic errors are a major patient safety concern. Although the majority of diagnostic errors are partially attributable to cognitive mistakes, the most effective means of improving clinician cognition in order to achieve gains in diagnostic reliability are unclear. We propose a tripartite educational agenda for improving diagnostic performance among students, residents and practising physicians. This agenda includes strengthening the metacognitive abilities of clinicians, fostering intuitive reasoning and increasing awareness of the role of systems in the diagnostic process. The evidence supporting initiatives in each of these realms is reviewed and a course of future implementation and study is proposed. The barriers to designing and implementing this agenda are substantial and include limited evidence supporting these initiatives and the challenges of changing the practice patterns of practising physicians. Implementation will need to be accompanied by rigorous evaluation. PMID:23764435
A simulation of GPS and differential GPS sensors
NASA Technical Reports Server (NTRS)
Rankin, James M.
1993-01-01
The Global Positioning System (GPS) is a revolutionary advance in navigation. Users can determine latitude, longitude, and altitude by receiving range information from at least four satellites. The statistical accuracy of the user's position is directly proportional to the statistical accuracy of the range measurement. Range errors are caused by clock errors, ephemeris errors, atmospheric delays, multipath errors, and receiver noise. Selective Availability, which the military uses to intentionally degrade accuracy for non-authorized users, is a major error source. The proportionality constant relating position errors to range errors is the Dilution of Precision (DOP) which is a function of the satellite geometry. Receivers separated by relatively short distances have the same satellite and atmospheric errors. Differential GPS (DGPS) removes these errors by transmitting pseudorange corrections from a fixed receiver to a mobile receiver. The corrected pseudorange at the moving receiver is now corrupted only by errors from the receiver clock, multipath, and measurement noise. This paper describes a software package that models position errors for various GPS and DGPS systems. The error model is used in the Real-Time Simulator and Cockpit Technology workstation simulations at NASA-LaRC. The GPS/DGPS sensor can simulate enroute navigation, instrument approaches, or on-airport navigation.
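A minimal sketch of the DGPS principle the abstract describes: the fixed reference station, whose position is surveyed, computes a per-satellite pseudorange correction that the mobile receiver applies (the function and variable names are illustrative, not from the described software package):

```python
import math

def geometric_range(rx_pos, sat_pos):
    """True range from a surveyed receiver position to a satellite."""
    return math.dist(rx_pos, sat_pos)

def dgps_correction(ref_pos, sat_pos, ref_measured_pseudorange):
    # The difference isolates the errors common to both receivers:
    # SA, satellite clock/ephemeris errors and atmospheric delays.
    return geometric_range(ref_pos, sat_pos) - ref_measured_pseudorange

def corrected_pseudorange(mobile_measured, correction):
    # Residual errors: mobile receiver clock, multipath, receiver noise.
    return mobile_measured + correction
```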
[Errors in medicine. Causes, impact and improvement measures to improve patient safety].
Waeschle, R M; Bauer, M; Schmidt, C E
2015-09-01
The guarantee of quality of care and patient safety is of major importance in hospitals, even though increased economic pressure and work intensification are ubiquitously present. Nevertheless, adverse events still occur in 3-4% of hospital stays, and of these 25-50% are estimated to be avoidable. The identification of possible causes of error and the development of measures for the prevention of medical errors are essential for patient safety. The implementation and continuous development of a constructive culture of error tolerance are fundamental. The origins of errors can be differentiated into systemic latent and individual active causes, and components of both categories are typically involved when an error occurs. Systemic causes are, for example, outdated structural environments, lack of clinical standards and low personnel density. These causes arise far away from the patient, e.g. in management decisions, and can remain unrecognized for a long time. Individual causes include, for example, confirmation bias, fixation errors and prospective memory failure. These causes have a direct impact on patient care and can result in immediate injury to patients. Stress, unclear information, complex systems and a lack of professional experience can promote individual causes. Awareness of possible causes of error is a fundamental precondition to establishing appropriate countermeasures. Error prevention should include actions directly affecting the causes of error, including checklists and standard operating procedures (SOPs) to avoid fixation and prospective memory failure, and team resource management to improve communication and the generation of collective mental models. Critical incident reporting systems (CIRS) provide the opportunity to learn from previous incidents without injury to patients having occurred first. Information technology (IT) support systems, such as computerized physician order entry systems, assist in the prevention of medication errors by providing information on dosage, pharmacological interactions, side effects and contraindications of medications. The major challenge for quality and risk management, for the heads of departments and for the executive board is the implementation and support of the described actions and sustained guidance of the staff involved in the change management process. The global trigger tool is suitable for improving transparency and objectifying the frequency of medical errors.
Schmidt, Frank L; Le, Huy; Ilies, Remus
2003-06-01
On the basis of an empirical study of measures of constructs from the cognitive domain, the personality domain, and the domain of affective traits, the authors of this study examine the implications of transient measurement error for the measurement of frequently studied individual differences variables. The authors clarify relevant reliability concepts as they relate to transient error and present a procedure for estimating the coefficient of equivalence and stability (L. J. Cronbach, 1947), the only classical reliability coefficient that assesses all 3 major sources of measurement error (random response, transient, and specific factor errors). The authors conclude that transient error exists in all 3 trait domains and is especially large in the domain of affective traits. Their findings indicate that the nearly universal use of the coefficient of equivalence (Cronbach's alpha; L. J. Cronbach, 1951), which fails to assess transient error, leads to overestimates of reliability and undercorrections for biases due to measurement error.
A-posteriori error estimation for second order mechanical systems
NASA Astrophysics Data System (ADS)
Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter
2012-06-01
One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first-order systems is extended for error estimation of mechanical second-order systems. Due to the special second-order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that it is independent of the reduction technique used. Therefore, it can be used for moment-matching-based, Gramian-matrix-based or modal-based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.
Addressing the unit of analysis in medical care studies: a systematic review.
Calhoun, Aaron W; Guyatt, Gordon H; Cabana, Michael D; Lu, Downing; Turner, David A; Valentine, Stacey; Randolph, Adrienne G
2008-06-01
We assessed the frequency with which patients are incorrectly used as the unit of analysis in studies of physicians' patient care behavior published in high-impact journals. We surveyed 30 high-impact journals across 6 medical fields for articles susceptible to unit of analysis errors published from 1994 to 2005. Three reviewers independently abstracted articles using previously published criteria to determine the presence of analytic errors. One hundred fourteen susceptible articles were found, published in 15 journals; 4 journals published the majority (71 of 114, or 62.3%) of the studies; 40 were intervention studies and 74 were noninterventional studies. The unit of analysis error was present in 19 (48%) of the intervention studies and 31 (42%) of the noninterventional studies (overall error rate 44%). The frequency of the error decreased between 1994-1999 (N = 38; 65% error) and 2000-2005 (N = 76; 33% error) (P = 0.001). Although the frequency of the error in published studies is decreasing, further improvement remains desirable.
Astigmatism following retinal detachment surgery.
Goel, R; Crewdson, J; Chignell, A H
1983-01-01
Eighty-three patients on whom successful retinal detachment surgery had been performed were studied to note astigmatic changes following surgery. In the majority of cases the errors following such surgery are of no great clinical importance. However, in some situations a high degree of astigmatism may be produced. This study showed that these sequelae are particularly likely after radial buckling procedures, and surgeons favouring these techniques should be aware that astigmatic errors can be induced. The astigmatic errors may persist for several years after surgery. PMID:6838807
Smiley, A M
1990-10-01
In February 1986 a head-on collision occurred between a freight train and a passenger train in western Canada, killing 23 people and causing over $30 million of damage. A Commission of Inquiry appointed by the Canadian government concluded that human error was the major reason for the collision. This report discusses the factors contributing to the human error: mainly poor work-rest schedules, the monotonous nature of the train driving task, insufficient information about train movements, and inadequate backup systems in case of human error.
NASA Technical Reports Server (NTRS)
Favaregh, Amber L.; Houlden, Heather P.; Pinier, Jeremy T.
2016-01-01
A detailed description of the uncertainty quantification process for the Space Launch System Block 1 vehicle configuration liftoff/transition and ascent 6-Degree-of-Freedom (DOF) aerodynamic databases is presented. These databases were constructed from wind tunnel test data acquired in the NASA Langley Research Center 14- by 22-Foot Subsonic Wind Tunnel and the Boeing Polysonic Wind Tunnel in St. Louis, MO, respectively. The major sources of error for these databases were experimental error and database modeling errors.
The presence of English and Spanish dyslexia in the Web
NASA Astrophysics Data System (ADS)
Rello, Luz; Baeza-Yates, Ricardo
2012-09-01
In this study we present a lower bound of the prevalence of dyslexia in the Web for English and Spanish. On the basis of analysis of corpora written by dyslexic people, we propose a classification of the different kinds of dyslexic errors. A representative data set of dyslexic words is used to calculate this lower bound in web pages containing English and Spanish dyslexic errors. We also present an analysis of dyslexic errors in major Internet domains, social media sites, and throughout English- and Spanish-speaking countries. To show the independence of our estimations from the presence of other kinds of errors, we compare them with the overall lexical quality of the Web and with the error rate of noncorrected corpora. The presence of dyslexic errors in the Web motivates work in web accessibility for dyslexic users.
Analysis of frequency mixing error on heterodyne interferometric ellipsometry
NASA Astrophysics Data System (ADS)
Deng, Yuan-long; Li, Xue-jin; Wu, Yu-bin; Hu, Ju-guang; Yao, Jian-quan
2007-11-01
A heterodyne interferometric ellipsometer, with no moving parts and a transverse Zeeman laser, is demonstrated. The modified Mach-Zehnder interferometer, characterized by a separate-frequency and common-path configuration, is designed and theoretically analyzed. The experimental data show a fluctuation mainly resulting from the frequency mixing error, which is caused by the imperfection of the polarizing beam splitters (PBS) and the elliptical polarization and non-orthogonality of the light beams. The mechanism producing the frequency mixing error and its influence on measurement are analyzed with the Jones matrix method; the calculation indicates that it results in an error of up to several nanometres in the thickness measurement of thin films. The non-orthogonality makes no contribution to the phase difference error when it is relatively small; the elliptical polarization and the imperfection of the PBS have the major effect on the error.
Linguistic Error Analysis on Students' Thesis Proposals
ERIC Educational Resources Information Center
Pescante-Malimas, Mary Ann; Samson, Sonrisa C.
2017-01-01
This study identified and analyzed the common linguistic errors encountered by Linguistics, Literature, and Advertising Arts majors in their Thesis Proposal classes in the First Semester 2016-2017. The data were the drafts of the thesis proposals of the students from the three different programs. A total of 32 manuscripts were analyzed which was…
The Effects of Measurement Error on Statistical Models for Analyzing Change. Final Report.
ERIC Educational Resources Information Center
Dunivant, Noel
The results of six major projects are discussed including a comprehensive mathematical and statistical analysis of the problems caused by errors of measurement in linear models for assessing change. In a general matrix representation of the problem, several new analytic results are proved concerning the parameters which affect bias in…
1980-03-01
interpreting/smoothing data containing a significant percentage of gross errors, and thus is ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of the paper describes the application
Rewriting Evolution—“Been There, Done That”
Penny, David
2013-01-01
A recent paper by a science journalist in Nature shows major errors in understanding phylogenies, in this case of placental mammals. The underlying unrooted tree is probably correct, but the placement of the root just reflects a well-known error from the acceleration in the rate of evolution among some myomorph rodents. PMID:23558594
Long-term orbit prediction for China's Tiangong-1 spacecraft based on mean atmosphere model
NASA Astrophysics Data System (ADS)
Tang, Jingshi; Liu, Lin; Miao, Manqian
Tiangong-1 is China's test module for its future space station. It went through three successful rendezvous and dockings with Shenzhou spacecraft from 2011 to 2013. For long-term management and maintenance, the orbit sometimes needs to be predicted over a long period of time. As Tiangong-1 operates in a low-Earth orbit with an altitude of about 300-400 km, the error in the a priori atmosphere model contributes significantly to the rapid growth of the predicted orbit error. When the orbit is predicted for 10-20 days, the error in the a priori atmosphere model, if not properly corrected, can induce errors of up to a few kilometers in the semi-major axis and up to several thousand kilometers in the overall position. In this work, we use a mean atmosphere model averaged from NRLMSIS00. The a priori reference mean density can be corrected during precise orbit determination (POD). For applications in long-term orbit prediction, the observations are first accumulated. With a sufficiently long period of observations, we are able to obtain a series of diurnal mean densities. This series bears the recent variation of the atmospheric density and can be analyzed for various periodicities. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. We show that the densities predicted with this approach serve to increase the accuracy of the predicted orbit. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 600 km.
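The abstract does not specify the fitting model for the diurnal mean density series; as one plausible sketch, a linear trend plus a solar-rotation (~27-day) harmonic can be fitted by least squares and extrapolated over the prediction arc (all numbers below are stand-ins):

```python
import numpy as np

days = np.arange(30.0)                                    # past 30 days
rho = 2.5e-12 + 3e-13 * np.sin(2 * np.pi * days / 27.0)   # stand-in densities

P = 27.0  # assumed solar-rotation period, days
def design(t):
    # Columns: constant, linear trend, 27-day sine and cosine
    return np.column_stack([np.ones_like(t), t,
                            np.sin(2 * np.pi * t / P),
                            np.cos(2 * np.pi * t / P)])

coef, *_ = np.linalg.lstsq(design(days), rho, rcond=None)
future = np.arange(30.0, 50.0)      # 20-day prediction arc
rho_pred = design(future) @ coef    # predicted diurnal mean densities
```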
Quality of death notification forms in North West Bank/Palestine: a descriptive study.
Qaddumi, Jamal A S; Nazzal, Zaher; Yacoup, Allam R S; Mansour, Mahmoud
2017-04-11
The death notification form (DNF) is an important document; physicians' failure to complete it properly affects the national mortality report and, consequently, evidence-based decision making. Errors in completing DNFs are common all over the world and differ in type and cause. We aimed to evaluate the quality of DNFs in terms of completeness and the types of errors in the cause-of-death section. A descriptive study was conducted to review 2707 DNFs in the North West Bank/Palestine during the year 2012 using data abstraction sheets. SPSS 17.0 was used to show the frequency of major and minor errors committed in filling in the DNFs. Surprisingly, only 1% of the examined DNFs had their cause-of-death section filled in completely correctly. The immediate cause of death was correctly identified in 5.9% of all DNFs and the underlying cause of death was correctly reported in 55.4% of them. The sequence was incorrect in 41.5% of the DNFs. The most frequently documented minor error was the "not writing time intervals" error (97.0%). Almost all DNFs contained at least one minor or major error. This high percentage of errors may affect mortality and morbidity statistics, public health research and the process of providing evidence for health policy. Training workshops on DNF completion for newly recruited employees and at the beginning of the residency program are recommended on a regular basis. We also recommend reviewing the national DNF to simplify it and make it consistent with updated evidence-based guidelines and recommendations.
Identifying medication error chains from critical incident reports: a new analytic approach.
Huckels-Baumgart, Saskia; Manser, Tanja
2014-10-01
Research into the distribution of medication errors usually focuses on isolated stages within the medication use process. Our study aimed to provide a novel process-oriented approach to medication incident analysis focusing on medication error chains. The study was conducted at a 900-bed teaching hospital in Switzerland. All 1,591 medication errors reported from 2009 to 2012 were categorized using the Medication Error Index NCC MERP and the WHO Classification for Patient Safety Methodology. In order to identify medication error chains, each reported medication incident was allocated to the relevant stage of the hospital medication use process. Only 25.8% of the reported medication errors were detected before they propagated through the medication use process. The majority of medication errors (74.2%) formed an error chain encompassing two or more stages. The most frequent error chain comprised preparation up to and including medication administration (45.2%). "Non-consideration of documentation/prescribing" during drug preparation was the most frequent contributor to "wrong dose" errors during medication administration. Medication error chains provide important insights for detecting and stopping medication errors before they reach the patient. Existing and new safety barriers need to be extended to interrupt error chains and to improve patient safety. © 2014, The American College of Clinical Pharmacology.
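A sketch of the chain-allocation logic described above; the stage list and data layout are our assumptions for illustration, not the study's coding scheme:

```python
STAGES = ["prescribing", "transcription", "dispensing",
          "preparation", "administration"]  # assumed stage ordering

def forms_chain(stages_involved):
    """An error chain spans two or more stages of the process."""
    return len(set(stages_involved)) >= 2

incidents = [["preparation", "administration"],   # propagated: a chain
             ["prescribing"],                     # caught at a single stage
             ["dispensing", "preparation", "administration"]]
share = sum(forms_chain(s) for s in incidents) / len(incidents)
print(f"{share:.0%} of incidents formed error chains")
```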
Emergency department discharge prescription errors in an academic medical center
Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.
2017-01-01
This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061
NASA Technical Reports Server (NTRS)
1987-01-01
In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.
Increased User Satisfaction Through an Improved Message System
NASA Technical Reports Server (NTRS)
Weissert, C. L.
1997-01-01
With all of the enhancements in software methodology and testing, there is no guarantee that software can be delivered such that no user errors occur. How to handle these errors when they occur has become a major research topic within human-computer interaction (HCI). Users of the Multimission Spacecraft Analysis Subsystem (MSAS) at the Jet Propulsion Laboratory (JPL), a system of X and Motif graphical user interfaces for analyzing spacecraft data, complained about the lack of information about error causes and suggested that recovery actions be included in the system error messages... The system was evaluated through usability surveys and was shown to be successful.
Anomalous annealing of floating gate errors due to heavy ion irradiation
NASA Astrophysics Data System (ADS)
Yin, Yanan; Liu, Jie; Sun, Youmei; Hou, Mingdong; Liu, Tianqi; Ye, Bing; Ji, Qinggang; Luo, Jie; Zhao, Peixiong
2018-03-01
Using the heavy ions provided by the Heavy Ion Research Facility in Lanzhou (HIRFL), the annealing of heavy-ion-induced floating gate (FG) errors in 34 nm and 25 nm NAND Flash memories has been studied. The single event upset (SEU) cross section of the FG and the evolution of the errors after irradiation are presented as functions of the ion linear energy transfer (LET) value, the data pattern and the feature size of the device. Different annealing rates for different ion LETs and different data patterns are observed in the 34 nm and 25 nm memories. The variation with annealing time of the percentage of different error patterns in the 34 nm and 25 nm memories shows that the annealing of heavy-ion-induced FG errors takes place mainly in the cells directly hit under low-LET ion exposure, and also in other cells affected by the ions when the ion LET is higher. The influence of multiple cell upsets (MCUs) on the annealing of FG errors is analyzed. MCUs with high error multiplicity, which account for the majority of the errors, can induce a large percentage of annealed errors.
Greenberg, Tsafrir; Chase, Henry W.; Almeida, Jorge R.; Stiffler, Richelle; Zevallos, Carlos R.; Aslam, Haris A.; Deckersbach, Thilo; Weyandt, Sarah; Cooper, Crystal; Toups, Marisa; Carmody, Thomas; Kurian, Benji; Peltier, Scott; Adams, Phillip; McInnis, Melvin G.; Oquendo, Maria A.; McGrath, Patrick J.; Fava, Maurizio; Weissman, Myrna; Parsey, Ramin; Trivedi, Madhukar H.; Phillips, Mary L.
2016-01-01
Objective Anhedonia, disrupted reward processing, is a core symptom of major depressive disorder. Recent findings demonstrate altered reward-related ventral striatal reactivity in depressed individuals, but the extent to which this is specific to anhedonia remains poorly understood. The authors examined the effect of anhedonia on reward expectancy (expected outcome value) and prediction error-(discrepancy between expected and actual outcome) related ventral striatal reactivity, as well as the relationship between these measures. Method A total of 148 unmedicated individuals with major depressive disorder and 31 healthy comparison individuals recruited for the multisite EMBARC (Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care) study underwent functional MRI during a well-validated reward task. Region of interest and whole-brain data were examined in the first- (N=78) and second- (N=70) recruited cohorts, as well as the total sample, of depressed individuals, and in healthy individuals. Results Healthy, but not depressed, individuals showed a significant inverse relationship between reward expectancy and prediction error-related right ventral striatal reactivity. Across all participants, and in depressed individuals only, greater anhedonia severity was associated with a reduced reward expectancy-prediction error inverse relationship, even after controlling for other symptoms. Conclusions The normal reward expectancy and prediction error-related ventral striatal reactivity inverse relationship concords with conditioning models, predicting a shift in ventral striatal responding from reward outcomes to reward cues. This study shows, for the first time, an absence of this relationship in two cohorts of unmedicated depressed individuals and a moderation of this relationship by anhedonia, suggesting reduced reward-contingency learning with greater anhedonia. These findings help elucidate neural mechanisms of anhedonia, as a step toward identifying potential biosignatures of treatment response. PMID:26183698
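For readers outside the reinforcement-learning literature, the two quantities the task dissociates can be illustrated with a standard delta-rule learner; this is the textbook definition, not the study's fMRI model, and the learning rate is arbitrary:

```python
alpha = 0.2   # learning rate (arbitrary)
value = 0.0   # reward expectancy for the cue

for reward in [1.0, 1.0, 0.0, 1.0]:
    prediction_error = reward - value   # actual minus expected outcome
    value += alpha * prediction_error   # expectancy tracks recent outcomes
    print(f"expectancy={value:.2f}  prediction_error={prediction_error:+.2f}")
```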
NASA Astrophysics Data System (ADS)
Berthet, Lionel; Marty, Renaud; Bourgin, François; Viatgé, Julie; Piotte, Olivier; Perrin, Charles
2017-04-01
An increasing number of operational flood forecasting centres assess the predictive uncertainty associated with their forecasts and communicate it to end users. This information can match the end users' needs (i.e. prove useful for efficient crisis management) only if it is reliable: reliability is therefore a key quality of operational flood forecasts. In 2015, the French national and regional flood forecasting services (Vigicrues network; www.vigicrues.gouv.fr) implemented a framework to compute quantitative discharge and water level forecasts and to assess the predictive uncertainty. Among the possible technical options to achieve this goal, a statistical analysis of the past forecasting errors of deterministic models was selected (QUOIQUE method, Bourgin, 2014). It is a data-based and non-parametric approach resting on as few assumptions as possible about the mathematical structure of the forecasting error. In particular, a very simple assumption is made regarding the predictive uncertainty distributions for large events outside the range of the calibration data: the multiplicative error distribution is assumed to be constant, whatever the magnitude of the flood. Indeed, the predictive distributions may not be reliable in extrapolation. However, estimating the predictive uncertainty for these rare events is crucial when major floods are of concern. In order to improve forecast reliability for major floods, we attempt to combine the operational strength of the empirical statistical analysis with a simple error model. Since the heteroscedasticity of forecast errors can considerably weaken predictive reliability for large floods, this error model is based on the log-sinh transformation, which has been shown to significantly reduce the heteroscedasticity of the transformed error in a simulation context, even for flood peaks (Wang et al., 2012). Exploratory tests on some operational forecasts issued during recent floods in France (major spring floods on the Loire river tributaries in June 2016 and flash floods in fall 2016) will be shown and discussed. References: Bourgin, F. (2014). How to assess the predictive uncertainty in hydrological modelling? An exploratory work on a large sample of watersheds. AgroParisTech. Wang, Q. J., Shrestha, D. L., Robertson, D. E. and Pokhrel, P. (2012). A log-sinh transformation for data normalization and variance stabilization. Water Resources Research, W05514, doi:10.1029/2011WR010973.
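For illustration, a minimal sketch of the log-sinh transformation of Wang et al. (2012) that the abstract refers to, used to stabilize heteroscedastic forecast errors: errors are treated as Gaussian in the transformed space, and the bounds are mapped back to discharge space. The parameters a and b, the discharges, and the error standard deviation below are hypothetical, not values from the study.

```python
import numpy as np

def log_sinh(x, a, b):
    """Log-sinh transform of Wang et al. (2012): z = (1/b) * ln(sinh(a + b*x))."""
    return np.log(np.sinh(a + b * x)) / b

def inv_log_sinh(z, a, b):
    """Inverse transform: x = (arcsinh(exp(b*z)) - a) / b."""
    return (np.arcsinh(np.exp(b * z)) - a) / b

# Illustrative parameters (in practice fitted to past forecasting errors).
a, b = 0.01, 0.001
q_forecast = np.array([50.0, 200.0, 1500.0])    # hypothetical discharges, m^3/s

# An additive Gaussian error in transformed space maps back to a skewed,
# discharge-dependent predictive interval in real space.
z = log_sinh(q_forecast, a, b)
sigma_z = 5.0                                   # assumed error s.d. in transformed space
upper = inv_log_sinh(z + 1.96 * sigma_z, a, b)  # ~97.5% predictive bound
lower = inv_log_sinh(z - 1.96 * sigma_z, a, b)  # ~2.5% predictive bound
print(np.c_[lower, q_forecast, upper])          # intervals widen with flood magnitude
```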
Rhodes, Nathaniel J.; Richardson, Chad L.; Heraty, Ryan; Liu, Jiajun; Malczynski, Michael; Qi, Chao
2014-01-01
While Vitek 2 is known to lack concordance with gold standard MIC determinations, the magnitude of the discrepancy and its impact on treatment decisions for extended-spectrum-β-lactamase (ESBL)-producing Escherichia coli are not well characterized. Clinical isolates of ESBL-producing E. coli were collected from blood, tissue, and body fluid samples from January 2003 to July 2009. Resistance genotypes were identified by PCR. Primary analyses evaluated the discordance between Vitek 2 and gold standard methods using cefepime susceptibility breakpoint cutoff values of 8, 4, and 2 μg/ml. The discrepancies in MICs between the methods were classified per convention as very major, major, and minor errors. Sensitivity, specificity, and positive and negative predictive values for susceptibility classifications were calculated. A total of 304 isolates were identified; 59% (179) of the isolates carried blaCTX-M, 47% (143) carried blaTEM, and 4% (12) carried blaSHV. At a breakpoint MIC of 8 μg/ml, Vitek 2 produced a categorical agreement of 66.8% and exhibited very major, major, and minor error rates of 23% (20/87 isolates), 5.1% (8/157 isolates), and 24% (73/304), respectively. The sensitivity, specificity, and positive and negative predictive values for a susceptibility breakpoint of 8 μg/ml were 94.9%, 61.2%, 72.3%, and 91.8%, respectively. The sensitivity, specificity, and positive and negative predictive values for a susceptibility breakpoint of 2 μg/ml were 83.8%, 65.3%, 41%, and 93.3%, respectively. Vitek 2 produced unacceptably high error rates for cefepime compared with agar dilution for ESBL-producing E. coli. Clinicians should be wary of making treatment decisions on the basis of Vitek 2 susceptibility results for ESBL-producing E. coli. PMID:24752253
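As background to the very major/major/minor error convention used in this and the other susceptibility-testing studies in this collection, a minimal sketch of the classification logic; the paired category calls below are hypothetical, not data from any study.

```python
def classify_error(reference, test):
    """Conventional discrepancy classes for susceptibility testing:
    very major = reference R, test S (false susceptibility);
    major      = reference S, test R (false resistance);
    minor      = one method reports I, the other S or R."""
    if reference == "R" and test == "S":
        return "very major"
    if reference == "S" and test == "R":
        return "major"
    if reference != test:  # any remaining disagreement involves I
        return "minor"
    return "agreement"

# Hypothetical paired category calls (reference method, commercial system).
pairs = [("S", "S"), ("R", "S"), ("S", "R"), ("I", "S"), ("R", "R")]
counts = {}
for ref, test in pairs:
    label = classify_error(ref, test)
    counts[label] = counts.get(label, 0) + 1

ca = counts.get("agreement", 0) / len(pairs) * 100  # categorical agreement, %
print(counts, f"CA = {ca:.1f}%")
```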
Detecting medication errors in the New Zealand pharmacovigilance database: a retrospective analysis.
Kunac, Desireé L; Tatley, Michael V
2011-01-01
Despite the traditional focus being adverse drug reactions (ADRs), pharmacovigilance centres have recently been identified as a potentially rich and important source of medication error data. To identify medication errors in the New Zealand Pharmacovigilance database (Centre for Adverse Reactions Monitoring [CARM]), and to describe the frequency and characteristics of these events. A retrospective analysis of the CARM pharmacovigilance database operated by the New Zealand Pharmacovigilance Centre was undertaken for the period 1 January-31 December 2007. All reports, excluding those relating to vaccines, clinical trials and pharmaceutical company reports, underwent a preventability assessment using predetermined criteria. Events deemed preventable were subsequently classified to identify the degree of patient harm, the type of error, the stage of the medication use process where the error occurred and the origin of the error. A total of 1412 reports met the inclusion criteria and were reviewed, of which 4.3% (61/1412) were deemed preventable. Not all errors resulted in patient harm: 29.5% (18/61) were 'no harm' errors, but 65.5% (40/61) of errors were deemed to have been associated with some degree of patient harm (preventable adverse drug events [ADEs]). For 5.0% (3/61) of events, the degree of patient harm could not be determined because the patient outcome was unknown. The majority of preventable ADEs (62.5% [25/40]) occurred in adults aged 65 years and older. The medication classes most involved in preventable ADEs were antibacterials for systemic use and anti-inflammatory agents, with gastrointestinal and respiratory system disorders the most common adverse events reported. For both preventable ADEs and 'no harm' events, most errors were incorrect doses and drug therapy monitoring problems, consisting of failures to detect significant drug interactions or past allergies, or a lack of necessary clinical monitoring. Preventable events were mostly related to the prescribing and administration stages of the medication use process, with the majority of errors (82.0% [50/61]) deemed to have originated in the community setting. The CARM pharmacovigilance database includes medication errors, many of which were found to originate in the community setting and were reported as ADRs. Error-prone situations could be identified, providing greater opportunity to improve patient safety. However, to enhance the detection of medication errors by pharmacovigilance centres, reports should be prospectively reviewed for preventability and the reporting form revised to facilitate capture of important information that will provide meaningful insight into the nature of the underlying systems defects that caused the errors.
[Analysis of intrusion errors in free recall].
Diesfeldt, H F A
2017-06-01
Extra-list intrusion errors during five trials of the eight-word list-learning task of the Amsterdam Dementia Screening Test (ADST) were investigated in 823 consecutive psychogeriatric patients (87.1% suffering from major neurocognitive disorder). Almost half of the participants (45.9%) produced one or more intrusion errors on the verbal recall test. Correct responses were lower when subjects made intrusion errors, but learning slopes did not differ between subjects who committed intrusion errors and those who did not. Bivariate regression analyses revealed that participants who committed intrusion errors were more deficient on measures of eight-word recognition memory, delayed visual recognition and tests of executive control (the Behavioral Dyscontrol Scale and the ADST-Graphical Sequences as measures of response inhibition). Using hierarchical multiple regression, only free recall and delayed visual recognition retained an independent effect in the association with intrusion errors, such that deficient scores on tests of episodic memory were sufficient to explain the occurrence of intrusion errors. Measures of inhibitory control did not add significantly to the explanation of intrusion errors in free recall, which makes insufficient strength of memory traces, rather than a primary deficit in inhibition, the preferred account of intrusion errors in free recall.
Pupils' Error on the Concept of Reversibility in Solving Arithmetic Problems
ERIC Educational Resources Information Center
Maf'ulah, Syarifatul; Juniati, Dwi; Siswono, Tatag Yuli Eko
2016-01-01
One reason this study was conducted is that there has been little research on reversibility. The importance of reversibility was a further motivation for focusing on pupils' reversibility, which is a major concern. The objective of this research is to identify errors made by…
Early Reading Strategies in Irish and English: Evidence from Error Types
ERIC Educational Resources Information Center
Parsons, Christine E.; Lyddy, Fiona
2009-01-01
For the majority of people in Ireland, Irish is a second language acquired primarily through the schooling system. This study examined the reading strategies children used in response to English and Irish words (presented in isolation), through an analysis of their oral reading errors. Children in their 4th year of schooling attending…
Heuristics and Cognitive Error in Medical Imaging.
Itri, Jason N; Patel, Sohil H
2018-05-01
The field of cognitive science has provided important insights into mental processes underlying the interpretation of imaging examinations. Despite these insights, diagnostic error remains a major obstacle in the goal to improve quality in radiology. In this article, we describe several types of cognitive bias that lead to diagnostic errors in imaging and discuss approaches to mitigate cognitive biases and diagnostic error. Radiologists rely on heuristic principles to reduce complex tasks of assessing probabilities and predicting values into simpler judgmental operations. These mental shortcuts allow rapid problem solving based on assumptions and past experiences. Heuristics used in the interpretation of imaging studies are generally helpful but can sometimes result in cognitive biases that lead to significant errors. An understanding of the causes of cognitive biases can lead to the development of educational content and systematic improvements that mitigate errors and improve the quality of care provided by radiologists.
Transfer Alignment Error Compensator Design Based on Robust State Estimation
NASA Astrophysics Data System (ADS)
Lyou, Joon; Lim, You-Chol
This paper examines the transfer alignment problem of the StrapDown Inertial Navigation System (SDINS), which is subject to the ship's roll and pitch. The major error sources for velocity and attitude matching are the lever arm effect, measurement time delay and ship-body flexure. To reduce these alignment errors, an error compensation method based on state augmentation and robust state estimation is devised. A linearized error model for the velocity and attitude matching transfer alignment system is derived first by linearizing the nonlinear measurement equation with respect to its time delay and the dominant Y-axis flexure, and by augmenting the delay state and flexure state into the conventional linear state equations. Then an H∞ filter is introduced to account for modeling uncertainties in the time delay and the ship-body flexure. The simulation results show that this method considerably decreases azimuth alignment errors.
NASA Astrophysics Data System (ADS)
Yokoi, Naoaki; Kawahara, Yasuhiro; Hosaka, Hiroshi; Sakata, Kenji
Focusing on the Personal Handy-phone System (PHS) positioning service used in physical distribution logistics, a positioning error offset method for improving positioning accuracy was devised. A disadvantage of PHS positioning is that measurement errors caused by the fluctuation of radio waves due to buildings around the terminal are large, ranging from several tens to several hundreds of meters. The method learns, in advance, patterns of positioning results (latitude and longitude) containing errors together with the highest signal strength at major logistics points, and matches them against new data measured in actual distribution processes according to the Mahalanobis distance. The matching resolution is thereby improved to 1/40 that of the conventional error offset method.
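A minimal sketch of the matching step as described: reference patterns of noisy fixes plus signal strength are learned per logistics point, and a new measurement is assigned to the reference point with the smallest Mahalanobis distance. The coordinates, signal strengths, and point names below are invented for illustration.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance of x from a fitted (mean, inverse covariance) model."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Hypothetical training data: repeated noisy fixes at two logistics points,
# each row = (latitude, longitude, received signal strength in dBm).
points = {
    "depot_A": np.random.default_rng(0).normal([35.68, 139.76, -70], [0.002, 0.002, 3], (50, 3)),
    "depot_B": np.random.default_rng(1).normal([35.70, 139.70, -80], [0.002, 0.002, 3], (50, 3)),
}
models = {name: (s.mean(axis=0), np.linalg.inv(np.cov(s.T))) for name, s in points.items()}

# A new measurement is matched to the nearest reference point, offsetting
# the raw positioning error by snapping to a known location.
new_fix = np.array([35.681, 139.757, -72])
best = min(models, key=lambda n: mahalanobis(new_fix, *models[n]))
print("matched point:", best)
```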
Self-Interaction Error in Density Functional Theory: An Appraisal.
Bao, Junwei Lucas; Gagliardi, Laura; Truhlar, Donald G
2018-05-03
Self-interaction error (SIE) is considered to be one of the major sources of error in most approximate exchange-correlation functionals for Kohn-Sham density-functional theory (KS-DFT), and it is large with all local exchange-correlation functionals and with some hybrid functionals. In this work, we consider systems conventionally considered to be dominated by SIE. For these systems, we demonstrate that by using multiconfiguration pair-density functional theory (MC-PDFT), the error of a translated local density-functional approximation is significantly reduced (by a factor of 3) when using an MCSCF density and on-top density, as compared to using KS-DFT with the parent functional; the error in MC-PDFT with local on-top functionals is even lower than the error in some popular KS-DFT hybrid functionals. Density-functional theory, either in MC-PDFT form with local on-top functionals or in KS-DFT form with some functionals having 50% or more nonlocal exchange, has smaller errors for SIE-prone systems than does CASSCF, which has no SIE.
Influencing Factors of the Initiation Point in the Parachute-Bomb Dynamic Detonation System
NASA Astrophysics Data System (ADS)
Qizhong, Li; Ye, Wang; Zhongqi, Wang; Chunhua, Bai
2017-12-01
The parachute system has been widely applied in modern armament design, especially for fuel-air explosives. Because detonation of fuel-air explosives occurs during flight, it is necessary to investigate the influences of the initiation point to ensure successful dynamic detonation. In practice, the initiation position falls within an area of the fuel cloud, owing to errors in the influencing factors. In this paper, the major factors influencing the initiation point were explored through airdrop tests, and the relationship between the initiation point area and these factors was obtained. Based on this relationship, a volume equation of the initiation point area was established to predict the range of the initiation point in the fuel. The analysis showed that the initiation point appears within a scattered area on account of errors in the attitude angle, the secondary initiation charge velocity, and the delay time. The attitude angle was the major influencing factor on the horizontal axis; by contrast, the secondary initiation charge velocity and the delay time were the major influencing factors on the vertical axis. Overall, the geometry of the initiation point area was a sector, shaped by the coupled errors of the attitude angle, secondary initiation charge velocity, and delay time.
Correcting pervasive errors in RNA crystallography through enumerative structure prediction.
Chou, Fang-Chieh; Sripakdeevong, Parin; Dibrov, Sergey M; Hermann, Thomas; Das, Rhiju
2013-01-01
Three-dimensional RNA models fitted into crystallographic density maps exhibit pervasive conformational ambiguities, geometric errors and steric clashes. To address these problems, we present enumerative real-space refinement assisted by electron density under Rosetta (ERRASER), coupled to Python-based hierarchical environment for integrated 'xtallography' (PHENIX) diffraction-based refinement. On 24 data sets, ERRASER automatically corrects the majority of MolProbity-assessed errors, improves the average R(free) factor, resolves functionally important discrepancies in noncanonical structure and refines low-resolution models to better match higher-resolution models.
Quality assessment and control of finite element solutions
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Babuska, Ivo
1987-01-01
Status and some recent developments in the techniques for assessing the reliability of finite element solutions are summarized. Discussion focuses on a number of aspects including: the major types of errors in finite element solutions; techniques used for a posteriori error estimation and the reliability of these estimators; feedback and adaptive strategies for improving the finite element solutions; and postprocessing approaches used for improving the accuracy of stresses and other important engineering data. Also, future directions for research needed to make error estimation and adaptive improvement practical are identified.
SUS Source Level Error Analysis
1978-01-20
The report provides an analysis of major terms which contribute to signal analysis error in a proposed experiment to calibrate source levels of SUS (Signal Underwater Sound). Keywords: Fast Fourier Transform (FFT); SUS signal model.
A cognitive taxonomy of medical errors.
Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H
2004-06-01
Propose a cognitive taxonomy of medical errors at the level of individuals and their interactions with technology. Use cognitive theories of human error and human action to develop the theoretical foundations of the taxonomy, develop the structure of the taxonomy, populate the taxonomy with examples of medical error cases, identify cognitive mechanisms for each category of medical error under the taxonomy, and apply the taxonomy to practical problems. Four criteria were used to evaluate the cognitive taxonomy. The taxonomy should be able (1) to categorize major types of errors at the individual level along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to describe how and explain why a specific error occurs, and (4) to generate intervention strategies for each type of error. The proposed cognitive taxonomy largely satisfies the four criteria at a theoretical and conceptual level. Theoretically, the proposed cognitive taxonomy provides a method to systematically categorize medical errors at the individual level along cognitive dimensions, leads to a better understanding of the underlying cognitive mechanisms of medical errors, and provides a framework that can guide future studies on medical errors. Practically, it provides guidelines for the development of cognitive interventions to decrease medical errors and a foundation for the development of a medical error reporting system that not only categorizes errors but also identifies problems and helps to generate solutions. To validate this model empirically, we will next perform systematic experimental studies.
Incorporating measurement error in n = 1 psychological autoregressive modeling.
Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
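A small simulation, under assumed parameter values, illustrating the bias the authors describe: ignoring white measurement noise attenuates a naive lag-1 estimate of the autoregressive parameter. Here the noise variance is set equal to the latent process variance, mirroring the 30-50% measurement-error share reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
phi, n = 0.6, 10_000            # true autoregressive parameter, series length

# Latent AR(1) process, then white measurement noise added on top.
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
noise_sd = np.std(x)            # measurement error ~50% of the total variance
y = x + rng.normal(0, noise_sd, n)

def ar1_ols(series):
    """Naive lag-1 regression estimate of the AR coefficient."""
    return np.polyfit(series[:-1], series[1:], 1)[0]

print("phi estimated on latent x:", round(ar1_ols(x), 3))  # close to 0.6
print("phi estimated on noisy y: ", round(ar1_ols(y), 3))  # attenuated, ~0.3
```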
Effect of Bearing Dynamic Stiffness on Gear Vibration
NASA Technical Reports Server (NTRS)
Fleming, David P.
2002-01-01
Noise is a major consideration in the design of high performance geared transmissions, such as for helicopters. Transmission error, that is, the accuracy with which the driven gear follows the driver gear, is a common indicator of noise generation. It is well known that bearing properties have a strong influence on shaft dynamics. However, up to now the contribution of bearings to transmission error has received little attention. In this paper, a torsional-axial-lateral geared rotor analysis is used to determine dynamic transmission error as a function of bearing stiffness and damping. Bearings have a similar effect as found in shaft dynamics; transmission error can be reduced more than 10 decibels by appropriate selection of bearing properties.
F-16 Class A mishaps in the U.S. Air Force, 1975-93.
Knapp, C J; Johnson, R
1996-08-01
All USAF F-16 fighter Class A (major) aircraft mishaps from 1975-93 were analyzed, using records from the U.S. Air Force Safety Agency (AFSA). There were 190 Class A mishaps involving 204 F-16's and 217 aircrew during this 19-yr period. The overall Class A rate was 5.09 per 100,000 flight hours, more than double the overall USAF rate. The mishaps are categorized by year, month, time of day and model of aircraft in relation to mishap causes as determined and reported by AFSA. Formation position, phase of flight and primary cause of the mishap indicate that maneuvering, cruise and low-level phases account for the majority of the mishaps (71%), with air-to-air engagements associated with a higher proportion of pilot error (71%) than was air-to-ground (49%). Engine failure was the number one cause of mishaps (35%), and collision with the ground the next most frequent (24%). Pilot error was determined as causative in 55% of all the mishaps. Pilot error was often associated with other non-pilot related causes. Channelized attention, loss of situational awareness, and spatial disorientation accounted for approximately 30% of the total pilot error causes found. Pilot demographics, flight hour/sortie profiles, and aircrew injuries are also listed. Fatalities occurred in 27% of the mishaps, with 97% of those involving pilot errors.
Long-term care physical environments--effect on medication errors.
Mahmood, Atiya; Chaudhury, Habib; Gaumont, Alana; Rust, Tiana
2012-01-01
Few studies examine physical environmental factors and their effects on staff health, effectiveness, work errors and job satisfaction. To address this gap, this study aims to examine environmental features and their role in medication and nursing errors in long-term care facilities. A mixed methodological strategy was used. Data were collected via focus groups, observing medication preparation and administration, and a nursing staff survey in four facilities. The paper reveals that, during the medication preparation phase, physical design, such as medication room layout, is a major source of potential errors. During medication administration, social environment is more likely to contribute to errors. Interruptions, noise and staff shortages were particular problems. The survey's relatively small sample size needs to be considered when interpreting the findings. Also, actual error data could not be included as existing records were incomplete. The study offers several relatively low-cost recommendations to help staff reduce medication errors. Physical environmental factors are important when addressing measures to reduce errors. The findings of this study underscore the fact that the physical environment's influence on the possibility of medication errors is often neglected. This study contributes to the scarce empirical literature examining the relationship between physical design and patient safety.
Science support for the Earth radiation budget experiment
NASA Technical Reports Server (NTRS)
Coakley, James A., Jr.
1994-01-01
The work undertaken as part of the Earth Radiation Budget Experiment (ERBE) included the following major components: The development and application of a new cloud retrieval scheme to assess errors in the radiative fluxes arising from errors in the ERBE identification of cloud conditions. The comparison of the anisotropy of reflected sunlight and emitted thermal radiation with the anisotropy predicted by the Angular Dependence Models (ADM's) used to obtain the radiative fluxes. Additional studies included the comparison of calculated longwave cloud-free radiances with those observed by the ERBE scanner and the use of ERBE scanner data to track the calibration of the shortwave channels of the Advanced Very High Resolution Radiometer (AVHRR). Major findings included: the misidentification of cloud conditions by the ERBE scene identification algorithm could cause 15 percent errors in the shortwave flux reflected by certain scene types. For regions containing mixtures of scene types, the errors were typically less than 5 percent, and the anisotropies of the shortwave and longwave radiances exhibited a spatial scale dependence which, because of the growth of the scanner field of view from nadir to limb, gave rise to a view zenith angle dependent bias in the radiative fluxes.
Ligozzi, Marco; Bernini, Cinzia; Bonora, Maria Grazia; de Fatima, Maria; Zuliani, Jessica; Fontana, Roberta
2002-01-01
A study was conducted to evaluate the new VITEK 2 system (bioMérieux) for identification and antibiotic susceptibility testing of gram-positive cocci. Clinical isolates of Staphylococcus aureus (n = 100), coagulase-negative staphylococci (CNS) (n = 100), Enterococcus spp. (n = 89), Streptococcus agalactiae (n = 29), and Streptococcus pneumoniae (n = 66) were examined with the ID-GPC identification card and with the AST-P515 (for staphylococci), AST-P516 (for enterococci and S. agalactiae) and AST-P506 (for pneumococci) susceptibility cards. The identification comparison methods were the API Staph for staphylococci and the API 20 Strep for streptococci and enterococci; for antimicrobial susceptibility testing, the agar dilution method according to the procedure of the National Committee for Clinical Laboratory Standards (NCCLS) was used. The VITEK 2 system correctly identified to the species level (only one choice or after simple supplementary tests) 99% of S. aureus, 96.5% of S. agalactiae, 96.9% of S. pneumoniae, 92.7% of Enterococcus faecalis, 91.3% of Staphylococcus haemolyticus, and 88% of Staphylococcus epidermidis but was least able to identify Enterococcus faecium (71.4% correct). More than 90% of gram-positive cocci were identified within 3 h. According to the NCCLS breakpoints, antimicrobial susceptibility testing with the VITEK 2 system gave 96% correct category agreement, 0.82% very major errors, 0.17% major errors, and 2.7% minor errors. Antimicrobial susceptibility testing showed category agreement from 94 to 100% for S. aureus, from 90 to 100% for CNS, from 91 to 100% for enterococci, from 96 to 100% for S. agalactiae, and from 91 to 100% for S. pneumoniae. Microorganism-antibiotic combinations that gave very major errors were CNS-erythromycin, CNS-oxacillin, enterococci-teicoplanin, and enterococci-high-concentration gentamicin. Major errors were observed for CNS-oxacillin and S. agalactiae-tetracycline combinations. In conclusion the results of this study indicate that the VITEK 2 system represents an accurate and acceptable means for performing identification and antibiotic susceptibility tests with medically relevant gram-positive cocci. PMID:11980942
NASA Astrophysics Data System (ADS)
Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing
2016-09-01
The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of Risley prisms, based on ray direction deviation in light refraction, is proposed in this paper. The model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of the different errors are analyzed through the model. An accuracy study of the model shows that the prediction deviation of the pointing error is less than 4.1×10⁻⁵° for each error source when the error amplitude is 0.1°. Detailed analyses indicate that the different error sources affect the pointing accuracy to varying degrees, the major error source being incident beam deviation. Prism tilt has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilt in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotational angles equals 0 or π, and relatively small when the difference equals π/2. These results can help to uncover the error distribution and aid in measurement calibration of Risley-prism systems.
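The paper's transmission-matrix formulation is not reproduced here, but its underlying ingredient, ray direction deviation in refraction, can be sketched with the vector form of Snell's law. The single-wedge geometry, wedge angle, and refractive index below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def refract(d, n_hat, n1, n2):
    """Vector form of Snell's law: refracted unit direction for incident
    unit vector d crossing a surface with unit normal n_hat (pointing
    against the ray) from index n1 into index n2."""
    r = n1 / n2
    cos_i = -np.dot(n_hat, d)
    sin2_t = r**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        raise ValueError("total internal reflection")
    cos_t = np.sqrt(1.0 - sin2_t)
    return r * d + (r * cos_i - cos_t) * n_hat

# Hypothetical single wedge: flat entrance face, 10-degree exit face, n = 1.5.
wedge = np.radians(10.0)
d0 = np.array([0.0, 0.0, 1.0])                           # incident ray along +z
n_front = np.array([0.0, 0.0, -1.0])                     # entrance face normal
n_back = -np.array([0.0, np.sin(wedge), np.cos(wedge)])  # tilted exit face normal

d1 = refract(d0, n_front, 1.0, 1.5)                      # air into glass
d2 = refract(d1, n_back, 1.5, 1.0)                       # glass back into air
deviation = np.degrees(np.arccos(np.clip(np.dot(d0, d2), -1.0, 1.0)))
print(f"beam deviation: {deviation:.3f} deg")            # ~ (n-1)*A ≈ 5 deg
```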
Measuring the Lense-Thirring precession using a second Lageos satellite
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Ciufolini, I.
1989-01-01
A complete numerical simulation and error analysis was performed for the proposed experiment with the objective of establishing an accurate assessment of the feasibility and the potential accuracy of the measurement of the Lense-Thirring precession. Consideration was given to identifying the error sources which limit the accuracy of the experiment and proposing procedures for eliminating or reducing the effect of these errors. Analytic investigations were conducted to study the effects of major error sources with the objective of providing error bounds on the experiment. The analysis of realistic simulated data is used to demonstrate that satellite laser ranging of two Lageos satellites, orbiting with supplemental inclinations, collected for a period of 3 years or more, can be used to verify the Lense-Thirring precession. A comprehensive covariance analysis for the solution was also developed.
Measurement and analysis of operating system fault tolerance
NASA Technical Reports Server (NTRS)
Lee, I.; Tang, D.; Iyer, R. K.
1992-01-01
This paper demonstrates a methodology to model and evaluate the fault tolerance characteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major software problems and error distributions, we develop two levels of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a distributed environment. Based on the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated.
The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates
Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin
2011-01-01
A neural network data mining model is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). Using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of the SST estimates is also improved by the model. The root mean square error (RMSE) of the daily SST estimate is reduced from 0.58 K to 0.38 K, with a mean absolute percentage error (MAPE) of 1.03%. For the hourly mean SST estimate, the RMSE is likewise reduced from 0.66 K to 0.44 K, with a MAPE of 1.3%. PMID:22164030
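A rough sketch of the correction idea on synthetic data: a small feedforward network (scikit-learn's MLPRegressor as a stand-in for the paper's back-propagation network) learns the SST error from air temperature, relative humidity, and wind speed, and the predicted error is then removed. All data and coefficients below are fabricated for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 2000
# Hypothetical predictors: air temperature (K), relative humidity (%), wind speed (m/s).
X = np.column_stack([rng.normal(300, 2, n),
                     rng.uniform(60, 95, n),
                     rng.uniform(0, 12, n)])
# Synthetic SST retrieval error driven by the three predictors plus noise.
sst_error = (0.1 * (X[:, 0] - 300) + 0.02 * (X[:, 1] - 75)
             - 0.03 * X[:, 2] + rng.normal(0, 0.2, n))

# A one-hidden-layer network learns the error field from the predictors...
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                 random_state=0))
net.fit(X[:1500], sst_error[:1500])

# ...and the predicted error is subtracted from the satellite SST estimate.
residual = sst_error[1500:] - net.predict(X[1500:])
print("RMSE before:", round(float(np.sqrt(np.mean(sst_error[1500:] ** 2))), 3))
print("RMSE after: ", round(float(np.sqrt(np.mean(residual ** 2))), 3))
```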
Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten
2013-01-01
Background Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user’s movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected and (ii) adaptive BMI decoding algorithms can be updated to make fewer errors in the future. Methodology/Principal Findings Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300–400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of the detection information for outcome errors and 74% of the detection information for execution errors available from all ECoG electrodes could be retained. Conclusions/Significance The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation. PMID:23383315
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
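A minimal illustration of two of the scheme classes compared in the study: a fixed-step explicit Euler integrator and an adaptive explicit Heun integrator with a simple local error controller, applied to a toy linear-reservoir ODE. All parameter values are assumed; this is a sketch of the schemes' behavior, not the paper's experimental setup.

```python
import numpy as np

def reservoir(t, s, k=0.8, p=2.0):
    """Toy linear-reservoir ODE: dS/dt = P - k*S."""
    return p - k * s

def euler_fixed(f, s0, t_end, dt):
    """Fixed-step explicit Euler, the scheme class shown to be error-prone."""
    s, t = s0, 0.0
    while t < t_end:
        s += dt * f(t, s)
        t += dt
    return s

def heun_adaptive(f, s0, t_end, tol=1e-4, dt=1.0):
    """Explicit Heun with error control: the gap between the Euler
    predictor and the Heun corrector estimates the local error."""
    s, t = s0, 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        k1 = f(t, s)
        k2 = f(t + dt, s + dt * k1)
        err = 0.5 * dt * abs(k2 - k1)          # local error estimate
        if err <= tol:
            s += 0.5 * dt * (k1 + k2)          # accept the Heun step
            t += dt
        # Grow or shrink the step, with clamped adjustment factors.
        dt *= 0.9 * min(2.0, max(0.2, (tol / max(err, 1e-12)) ** 0.5))
    return s

exact = 2.0 / 0.8 + (10.0 - 2.5) * np.exp(-0.8 * 5.0)   # analytic S(t=5)
print("exact:           ", exact)
print("fixed Euler dt=1:", euler_fixed(reservoir, 10.0, 5.0, 1.0))
print("adaptive Heun:   ", heun_adaptive(reservoir, 10.0, 5.0))
```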
Nurses' attitudes and perceived barriers to the reporting of medication administration errors.
Yung, Hai-Peng; Yu, Shu; Chu, Chi; Hou, I-Ching; Tang, Fu-In
2016-07-01
(1) To explore the attitudes and perceived barriers to reporting medication administration errors and (2) to understand the characteristics of error reports and nurses' feelings about them. Under-reporting of medication administration errors is a global concern related to the safety of patient care. Understanding nurses' attitudes and perceived barriers to error reporting is the initial step toward increasing the reporting rate. A cross-sectional, descriptive survey with a self-administered questionnaire was completed by the nurses of a medical centre hospital in Taiwan. A total of 306 nurses participated in the study. Nurses' attitudes towards medication administration error reporting were inclined towards the positive. The major perceived barrier was fear of the consequences after reporting. The results demonstrated that 88.9% of medication administration errors were reported orally, whereas 19.0% were reported through the hospital internet system. Self-recrimination was a common feeling among nurses after committing a medication administration error. Even though hospital management encourages errors to be reported without recrimination, nurses' attitudes toward medication administration error reporting are not very positive, and fear is the most prominent barrier contributing to under-reporting. Nursing managers should establish anonymous reporting systems and counselling classes to create a secure atmosphere that reduces nurses' fear, and should provide incentives to encourage reporting. © 2016 John Wiley & Sons Ltd.
Measurement error is often neglected in medical literature: a systematic review.
Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten
2018-06-01
In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.
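For readers unfamiliar with the correction methods the review counts, a toy sketch of regression calibration under classical measurement error with an assumed known error variance: the naive exposure-outcome estimate is rescaled by the reliability ratio. All values are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 5000
x = rng.normal(0, 1, n)                 # true exposure
w = x + rng.normal(0, 0.7, n)           # exposure measured with classical error
y = 0.5 * x + rng.normal(0, 1, n)       # outcome depends on the true exposure

beta_naive = np.polyfit(w, y, 1)[0]     # attenuated estimate from the noisy measure
# Regression calibration: rescale by the reliability ratio
# lambda = var(x) / var(w), estimated here from a known error variance.
reliability = (np.var(w) - 0.7**2) / np.var(w)
beta_corrected = beta_naive / reliability
print(round(beta_naive, 3), round(beta_corrected, 3))  # ~0.34 vs ~0.5
```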
Knowledge of healthcare professionals about medication errors in hospitals
Abdel-Latif, Mohamed M. M.
2016-01-01
Context: Medication errors are the most common type of medical error in hospitals and a leading cause of morbidity and mortality among patients. Aims: The aim of the present study was to assess the knowledge of healthcare professionals about medication errors in hospitals. Settings and Design: A self-administered questionnaire was distributed to randomly selected healthcare professionals in eight hospitals in Madinah, Saudi Arabia. Subjects and Methods: An 18-item survey was designed and comprised questions on demographic data, knowledge of medication errors, availability of reporting systems in hospitals, attitudes toward error reporting, and causes of medication errors. Statistical Analysis Used: Data were analyzed with Statistical Package for the Social Sciences software Version 17. Results: A total of 323 healthcare professionals completed the questionnaire (a 64.6% response rate): 138 (42.72%) physicians, 34 (10.53%) pharmacists, and 151 (46.75%) nurses. A majority of the participants had good knowledge of the medication error concept and its dangers to patients. Only 68.7% of them were aware of reporting systems in hospitals. Healthcare professionals revealed that there was no clear mechanism available for reporting errors in most hospitals. Prescribing (46.5%) and administration (29%) errors were the main causes of errors. The medications most frequently involved in errors were antihypertensives, antidiabetics, antibiotics, digoxin, and insulin. Conclusions: This study revealed differences in awareness among healthcare professionals toward medication errors in hospitals. The poor knowledge about medication errors emphasizes the urgent need to adopt appropriate measures to raise awareness of medication errors in Saudi hospitals. PMID:27330261
Information systems and human error in the lab.
Bissell, Michael G
2004-01-01
Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that the introduction of these systems results in operators having less practice in dealing with unexpected events or becoming deskilled in problem solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; an understanding of the origin of errors, since this knowledge can be used to reduce their occurrence; the recognition that optimal systems should be forgiving to the operator by absorbing errors, at least for a time; the observation that, although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and the requirement that a feedback mechanism be designed into the system to enable the operator to recognize in real time that an error has occurred.
Medical errors in primary care clinics – a cross sectional study
2012-01-01
Background Patient safety is vital in patient care. There is a lack of studies on medical errors in primary care settings. The aim of the study is to determine the extent of diagnostic inaccuracies and management errors in publicly funded primary care clinics. Methods This was a cross-sectional study conducted in twelve publicly funded primary care clinics in Malaysia. A total of 1753 medical records were randomly selected in 12 primary care clinics in 2007 and were reviewed by trained family physicians for diagnostic, management and documentation errors, potential errors causing serious harm and the likelihood of preventability of such errors. Results The majority of patient encounters (81%) were with medical assistants. Diagnostic errors were present in 3.6% (95% CI: 2.2, 5.0) of medical records and management errors in 53.2% (95% CI: 46.3, 60.2). Among management errors, medication errors were present in 41.1% (95% CI: 35.8, 46.4) of records, investigation errors in 21.7% (95% CI: 16.5, 26.8) and decision-making errors in 14.5% (95% CI: 10.8, 18.2). A total of 39.9% (95% CI: 33.1, 46.7) of these errors had the potential to cause serious harm. Problems of documentation, including illegible handwriting, were found in 98.0% (95% CI: 97.0, 99.1) of records. Nearly all errors (93.5%) detected were considered preventable. Conclusions The occurrence of medical errors was high in primary care clinics, particularly documentation and medication errors. Nearly all were preventable. Remedial interventions addressing the completeness of documentation and prescriptions are likely to yield a reduction in errors. PMID:23267547
Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel
2014-01-01
Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03) but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never detected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.
How accurate are quotations and references in medical journals?
de Lacey, G; Record, C; Wade, J
1985-01-01
The accuracy of quotations and references in six medical journals published during January 1984 was assessed. The original author was misquoted in 15% of all references, and most of the errors would have misled readers. Errors in citation of references occurred in 24%, of which 8% were major errors--that is, they prevented immediate identification of the source of the reference. Inaccurate quotations and citations are displeasing for the original author, misleading for the reader, and mean that untruths become "accepted fact." Some suggestions for reducing these high levels of inaccuracy are that papers scheduled for publication with errors of citation should be returned to the author and checked completely and a permanent column specifically for misquotations could be inserted into the journal. PMID:3931753
Miraldi Utz, Virginia
2017-01-01
Myopia is the most common eye disorder and a major cause of visual impairment worldwide. As the incidence of myopia continues to rise, the need to further understand the complex roles of the molecular and environmental factors controlling variation in refractive error is of increasing importance. Tkatchenko and colleagues applied a systematic approach combining gene set enrichment analysis, genome-wide association studies, and functional analysis of a murine model to identify a myopia susceptibility gene, APLP2. Refractive error was associated with time spent reading among carriers of low-frequency variants in this gene. This provides support for the longstanding hypothesis of gene-environment interactions in refractive error development.
NASA Astrophysics Data System (ADS)
Rock, N. M. S.; Duffy, T. R.
REGRES allows a range of regression equations to be calculated for paired sets of data values in which both variables are subject to error (i.e. neither is the "independent" variable). Nonparametric regressions, based on medians of all possible pairwise slopes and intercepts, are treated in detail. Estimated slopes and intercepts are output, along with confidence limits and Spearman and Kendall rank correlation coefficients. Outliers can be rejected with user-determined stringency. Parametric regressions can be calculated for any value of λ (the ratio of the variances of the random errors for y and x), including: (1) major axis (λ = 1); (2) reduced major axis (λ = variance of y/variance of x); (3) Y on X (λ = ∞); or (4) X on Y (λ = 0). Pearson linear correlation coefficients are also output. REGRES provides an alternative to conventional isochron assessment techniques where bivariate normal errors cannot be assumed or weighting methods are inappropriate.
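A minimal sketch of the nonparametric estimator REGRES implements, the median of all pairwise slopes with a median-based intercept (a Theil-Sen-type fit); the data below are simulated for illustration, not drawn from the program's documentation.

```python
import numpy as np
from itertools import combinations

def median_pairwise_regression(x, y):
    """Theil-Sen-type estimator: the slope is the median of all pairwise
    slopes and the intercept the median of the residual offsets. Robust
    when both variables carry error and outliers are present."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)
              if x[j] != x[i]]
    slope = float(np.median(slopes))
    intercept = float(np.median(y - slope * x))
    return slope, intercept

# Hypothetical isochron-style data: noise in both coordinates plus one outlier.
rng = np.random.default_rng(3)
x = np.linspace(0, 10, 20) + rng.normal(0, 0.1, 20)
y = 0.7 * x + 1.0 + rng.normal(0, 0.1, 20)
y[5] += 5.0                                   # gross outlier
print(median_pairwise_regression(x, y))       # close to (0.7, 1.0) despite the outlier
```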
Robot-Arm Dynamic Control by Computer
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.; Tarn, Tzyh J.; Chen, Yilong J.
1987-01-01
Feedforward and feedback schemes linearize responses to control inputs. This method for control of a robot arm is based on computed nonlinear feedback and state transformations to linearize the system and decouple the robot end-effector motions along each of the Cartesian axes, augmented with an optimal scheme for correction of errors in the workspace. A major new feature of the control method is that the optimal error-correction loop operates directly at the task level, not at the joint-servocontrol level.
The GEnes in Myopia (GEM) study in understanding the aetiology of refractive errors.
Baird, Paul N; Schäche, Maria; Dirani, Mohamed
2010-11-01
Refractive errors represent the leading cause of correctable vision impairment and blindness in the world with an estimated 2 billion people affected. Refractive error refers to a group of refractive conditions including hypermetropia, myopia, astigmatism and presbyopia but relatively little is known about their aetiology. In order to explore the potential role of genetic determinants in refractive error the "GEnes in Myopia (GEM) study" was established in 2004. The findings that have resulted from this study have not only provided greater insight into the role of genes and other factors involved in myopia but have also gone some way to uncovering the aetiology of other refractive errors. This review will describe some of the major findings of the GEM study and their relative contribution to the literature, illuminate where the deficiencies are in our understanding of the development of refractive errors and how we will advance this field in the future. Copyright © 2010 Elsevier Ltd. All rights reserved.
Long-term academic stress increases the late component of error processing: an ERP study.
Wu, Jianhui; Yuan, Yiran; Duan, Hongxia; Qin, Shaozheng; Buchanan, Tony W; Zhang, Kan; Zhang, Liang
2014-05-01
Exposure to long-term stress has a variety of consequences on the brain and cognition. Few studies have examined the influence of long-term stress on event related potential (ERP) indices of error processing. The current study investigated how long-term academic stress modulates the error related negativity (Ne or ERN) and the error positivity (Pe) components of error processing. Forty-one male participants undergoing preparation for a major academic examination and 20 non-exam participants completed a Go-NoGo task while ERP measures were collected. The exam group reported higher perceived stress levels and showed increased Pe amplitude compared with the non-exam group. Participants' rating of the importance of the exam was positively associated with the amplitude of Pe, but these effects were not found for the Ne/ERN. These results suggest that long-term academic stress leads to greater motivational assessment of and higher emotional response to errors. Copyright © 2014 Elsevier B.V. All rights reserved.
An empirical assessment of taxic paleobiology.
Adrain, J M; Westrop, S R
2000-07-07
The analysis of major changes in faunal diversity through time is a central theme of analytical paleobiology. The most important sources of data are literature-based compilations of stratigraphic ranges of fossil taxa. The levels of error in these compilations and the possible effects of such error have often been discussed but never directly assessed. We compared our comprehensive database of trilobites to the equivalent portion of J. J. Sepkoski Jr.'s widely used global genus database. More than 70% of entries in the global database are inaccurate; however, as predicted, the error is randomly distributed and does not introduce bias.
Nurses' role in medication safety.
Choo, Janet; Hutchinson, Alison; Bucknall, Tracey
2010-10-01
To explore the nurse's role in the process of medication management and identify the challenges associated with safe medication management in contemporary clinical practice. Medication errors have been a long-standing factor affecting consumer safety. The nursing profession has been identified as essential to the promotion of patient safety. A review of the literature on medication errors and on the use of electronic prescribing in relation to medication errors. Medication management requires a multidisciplinary approach, and interdisciplinary communication is essential to reduce medication errors. Information technologies can help to reduce some medication errors through eradication of transcription and dosing errors. Nurses must play a major role in the design of computerized medication systems to ensure a smooth transition to such a system. The nurses' roles in medication management cannot be over-emphasized. This is particularly true when designing a computerized medication system. The adoption of safety measures during decision making that parallel aviation industry safety procedures can provide some strategies to prevent medication error. Innovations in information technology offer potential mechanisms to avert adverse events in medication management for nurses. © 2010 The Authors. Journal compilation © 2010 Blackwell Publishing Ltd.
Dynamically corrected gates for singlet-triplet spin qubits with control-dependent errors
NASA Astrophysics Data System (ADS)
Jacobson, N. Tobias; Witzel, Wayne M.; Nielsen, Erik; Carroll, Malcolm S.
2013-03-01
Magnetic field inhomogeneity due to random polarization of quasi-static local magnetic impurities is a major source of environmentally induced error for singlet-triplet double quantum dot (DQD) spin qubits. Moreover, for singlet-triplet qubits this error may depend on the applied controls. This effect is significant when a static magnetic field gradient is applied to enable full qubit control. Through a configuration interaction analysis, we observe that the dependence of the field inhomogeneity-induced error on the DQD bias voltage can vary systematically as a function of the controls for certain experimentally relevant operating regimes. To account for this effect, we have developed a straightforward prescription for adapting dynamically corrected gate sequences that assume control-independent errors into sequences that compensate for systematic control-dependent errors. We show that accounting for such errors may lead to a substantial increase in gate fidelities. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Sakata, Shojiro; Fujisawa, Masaya
It is a well-known fact [7], [9] that the BMS algorithm with majority voting can decode up to half the Feng-Rao designed distance d_FR. Since d_FR is not smaller than the Goppa designed distance d_G, that algorithm can correct up to ⌊(d_G-1)/2⌋ errors. On the other hand, it has been considered to be evident that the original BMS algorithm (without voting) [1], [2] can correct up to ⌊(d_G-g-1)/2⌋ errors, similarly to the basic algorithm by Skorobogatov-Vladut. But is it true? In this short paper, we show that it is true, although we need a few remarks and some additional procedures for determining the Groebner basis of the error locator ideal exactly. In fact, as the basic algorithm gives a set of polynomials whose zero set contains the error locators as a subset, it cannot always give the exact error locators, unless the syndrome equation is solved to find the error values in addition.
Managing human fallibility in critical aerospace situations
NASA Astrophysics Data System (ADS)
Tew, Larry
2014-11-01
Human fallibility is pervasive in the aerospace industry, with over 50% of errors attributed to human error. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high-value, high-profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis and invariably schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human fallibility errors were achieved after receiving adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries such as medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to accept human fallibility and take a proactive approach to incorporate the steps needed to manage and minimize error.
Endodontic Procedural Errors: Frequency, Type of Error, and the Most Frequently Treated Tooth.
Yousuf, Waqas; Khan, Moiz; Mehdi, Hasan
2015-01-01
Introduction. The aim of this study is to determine the most common endodontically treated tooth and the most common error produced during treatment and to note the association of particular errors with particular teeth. Material and Methods. Periapical radiographs were taken of all the included teeth and were stored and assessed using DIGORA Optime. Teeth in each group were evaluated for presence or absence of procedural errors (i.e., overfill, underfill, ledge formation, perforations, apical transportation, and/or instrument separation) and the most frequent tooth to undergo endodontic treatment was also noted. Results. A total of 1748 root canal treated teeth were assessed, out of which 574 (32.8%) contained a procedural error. Out of these 397 (22.7%) were overfilled, 155 (8.9%) were underfilled, 16 (0.9%) had instrument separation, and 7 (0.4%) had apical transportation. The most frequently treated tooth was right permanent mandibular first molar (11.3%). The least commonly treated teeth were the permanent mandibular third molars (0.1%). Conclusion. Practitioners should show greater care to maintain accuracy of the working length throughout the procedure, as errors in length accounted for the vast majority of errors and special care should be taken when working on molars.
Multi-bits error detection and fast recovery in RISC cores
NASA Astrophysics Data System (ADS)
Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu
2015-11-01
Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are ever more frequent due to the rapidly shrinking feature sizes of ICs. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot balance critical-path delay, area, and power penalties well. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery at low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by adding self-checking logic to pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation exhibits that the SRDP can detect particle-induced soft errors with up to 100% coverage and recover from nearly 95% of them; the remaining 5% enter a specific trap.
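For readers unfamiliar with the DMR/TMR mechanisms mentioned above, here is a toy software model of redundancy voting; it illustrates the generic detect-versus-mask trade-off, not the SRDP hardware itself.

```python
# Illustrative software model of modular-redundancy voting (not SRDP).

def dmr_detect(a: int, b: int) -> bool:
    """Dual modular redundancy: two copies can detect a mismatch but
    cannot tell which copy is wrong, so recovery needs extra state."""
    return a != b

def tmr_vote(a: int, b: int, c: int) -> int:
    """Triple modular redundancy: bitwise majority masks a fault in any
    single copy (including multi-bit upsets confined to one copy)."""
    return (a & b) | (a & c) | (b & c)

correct = 0b1011_0010
upset   = correct ^ 0b0001_1000                      # two bits flipped in one replica
assert dmr_detect(correct, upset)                    # DMR flags the mismatch
assert tmr_vote(correct, correct, upset) == correct  # TMR masks it
```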
Incorporating measurement error in n = 1 psychological autoregressive modeling
Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
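A minimal simulation of the bias the abstract describes, assuming an AR(1) latent process observed with white measurement noise; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.6, 20_000                   # true AR(1) coefficient, series length
x = np.zeros(n)
for t in range(1, n):                  # latent AR(1) process
    x[t] = phi * x[t - 1] + rng.normal()
y = x + rng.normal(scale=1.0, size=n)  # observed = latent + measurement error

def lag1_slope(series):
    # OLS regression of the series on its own first lag
    return np.polyfit(series[:-1], series[1:], 1)[0]

print(f"phi-hat from latent x:   {lag1_slope(x):.3f}")  # close to 0.6
print(f"phi-hat from observed y: {lag1_slope(y):.3f}")  # attenuated toward 0
```

The attenuation factor is the reliability Var(x)/(Var(x) + Var(noise)), which is exactly the underestimation of the autoregressive parameter that the AR+WN and ARMA models are designed to avoid.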
A preliminary taxonomy of medical errors in family practice
Dovey, S; Meyers, D; Phillips, R; Green, L; Fryer, G; Galliher, J; Kappus, J; Grob, P
2002-01-01
Objective: To develop a preliminary taxonomy of primary care medical errors. Design: Qualitative analysis to identify categories of error reported during a randomized controlled trial of computer and paper reporting methods. Setting: The National Network for Family Practice and Primary Care Research. Participants: Family physicians. Main outcome measures: Medical error category, context, and consequence. Results: Forty two physicians made 344 reports: 284 (82.6%) arose from healthcare systems dysfunction; 46 (13.4%) were errors due to gaps in knowledge or skills; and 14 (4.1%) were reports of adverse events, not errors. The main subcategories were: administrative failures (102; 30.9% of errors), investigation failures (82; 24.8%), treatment delivery lapses (76; 23.0%), miscommunication (19; 5.8%), payment systems problems (4; 1.2%), error in the execution of a clinical task (19; 5.8%), wrong treatment decision (14; 4.2%), and wrong diagnosis (13; 3.9%). Most reports were of errors that were recognized and occurred in reporters' practices. Affected patients ranged in age from 8 months to 100 years, were of both sexes, and represented all major US ethnic groups. Almost half the reports were of events which had adverse consequences. Ten errors resulted in patients being admitted to hospital and one patient died. Conclusions: This medical error taxonomy, developed from self-reports of errors observed by family physicians during their routine clinical practice, emphasizes problems in healthcare processes and acknowledges medical errors arising from shortfalls in clinical knowledge and skills. Patient safety strategies with most effect in primary care settings need to be broader than the current focus on medication errors. PMID:12486987
Popa, Laurentiu S.; Hewitt, Angela L.; Ebner, Timothy J.
2012-01-01
The cerebellum has been implicated in processing motor errors required for online control of movement and motor learning. The dominant view is that Purkinje cell complex spike discharge signals motor errors. This study investigated whether errors are encoded in the simple spike discharge of Purkinje cells in monkeys trained to manually track a pseudo-randomly moving target. Four task error signals were evaluated based on cursor movement relative to target movement. Linear regression analyses based on firing residuals ensured that the modulation with a specific error parameter was independent of the other error parameters and kinematics. The results demonstrate that simple spike firing in lobules IV–VI is significantly correlated with position, distance and directional errors. Independent of the error signals, the same Purkinje cells encode kinematics. The strongest error modulation occurs at feedback timing. However, in 72% of cells at least one of the R^2 temporal profiles resulting from regressing firing with individual errors exhibits two peak R^2 values. For these bimodal profiles, the first peak is at a negative τ (lead) and the second at a positive τ (lag), implying that Purkinje cells encode both prediction and feedback about an error. For the majority of the bimodal profiles, the signs of the regression coefficients or preferred directions reverse at the times of the peaks. The sign reversal results in opposing simple spike modulation for the predictive and feedback components. Dual error representations may provide the signals needed to generate sensory prediction errors used to update a forward internal model. PMID:23115173
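A sketch of the lead/lag regression idea behind the R^2 temporal profiles described above, using synthetic stand-in signals rather than the recorded data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
error = np.convolve(rng.normal(size=n), np.ones(50) / 50, mode="same")
lead, lag = 20, 35                       # firing carries both components
rate = (0.8 * np.concatenate([error[lead:], np.zeros(lead)])    # predictive
        - 0.6 * np.concatenate([np.zeros(lag), error[:-lag]])   # feedback
        + 0.05 * rng.normal(size=n))

def r2_at(tau):
    # correlate rate[t] with error[t - tau]; tau < 0 means firing leads
    if tau >= 0:
        a, b = rate[tau:], error[:n - tau]
    else:
        a, b = rate[:n + tau], error[-tau:]
    return np.corrcoef(a, b)[0, 1] ** 2

taus = np.arange(-60, 61)
profile = np.array([r2_at(int(t)) for t in taus])
print("strongest peak near tau =", taus[profile.argmax()])  # ~ -20 or +35
```

With two embedded components, the R^2(τ) profile is bimodal, one peak at negative τ (prediction) and one at positive τ (feedback), mirroring the structure the study reports.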
A description of medication errors reported by pharmacists in a neonatal intensive care unit.
Pawluk, Shane; Jaam, Myriam; Hazi, Fatima; Al Hail, Moza Sulaiman; El Kassem, Wessam; Khalifa, Hanan; Thomas, Binny; Abdul Rouf, Pallivalappila
2017-02-01
Background Patients in the Neonatal Intensive Care Unit (NICU) are at an increased risk for medication errors. Objective The objective of this study is to describe the nature and setting of medication errors occurring in patients admitted to an NICU in Qatar, based on a standard electronic system reported by pharmacists. Setting Neonatal intensive care unit, Doha, Qatar. Method This was a retrospective cross-sectional study of medication errors reported electronically by pharmacists in the NICU between January 1, 2014 and April 30, 2015. Main outcome measure Data collected included patient information and incident details, including error category, medications involved, and follow-up completed. Results A total of 201 pharmacist-reported NICU medication errors were submitted during the study period. None of the reported errors reached the patient or caused harm. Of the errors reported, 98.5% occurred in the prescribing phase of the medication process, with 58.7% being due to calculation errors. Overall, 53 different medications were documented in error reports, with anti-infective agents being the most frequently cited. The majority of incidents indicated that the primary prescriber was contacted and the error was resolved before reaching the next phase of the medication process. Conclusion Medication errors reported by pharmacists occur most frequently in the prescribing phase of the medication process. Our data suggest that error reporting systems need to be specific to the population involved. Special attention should be paid to frequently used medications in the NICU, as these were responsible for the greatest numbers of medication errors.
Online Deviation Detection for Medical Processes
Christov, Stefan C.; Avrunin, George S.; Clarke, Lori A.
2014-01-01
Human errors are a major concern in many medical processes. To help address this problem, we are investigating an approach for automatically detecting when performers of a medical process deviate from the acceptable ways of performing that process as specified by a detailed process model. Such deviations could represent errors and, thus, detecting and reporting deviations as they occur could help catch errors before harm is done. In this paper, we identify important issues related to the feasibility of the proposed approach and empirically evaluate the approach for two medical procedures, chemotherapy and blood transfusion. For the evaluation, we use the process models to generate sample process executions that we then seed with synthetic errors. The process models describe the coordination of activities of different process performers in normal, as well as in exceptional situations. The evaluation results suggest that the proposed approach could be applied in clinical settings to help catch errors before harm is done. PMID:25954343
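A toy sketch of the deviation-detection idea: the process model is reduced to a set of allowed step transitions and an observed event stream is checked online. The step names are invented; the authors' process models are far more detailed.

```python
# process model reduced to allowed transitions between named steps
ALLOWED = {
    "verify_order":     {"check_patient_id"},
    "check_patient_id": {"prepare_drug"},
    "prepare_drug":     {"administer", "verify_order"},  # re-verification allowed
    "administer":       {"document"},
}

def detect_deviations(events):
    # report each observed transition that the model does not permit
    deviations = []
    for current, nxt in zip(events, events[1:]):
        if nxt not in ALLOWED.get(current, set()):
            deviations.append((current, nxt))
    return deviations

trace = ["verify_order", "check_patient_id", "administer", "document"]
print(detect_deviations(trace))   # prepare_drug was skipped -> one deviation
```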
Simulating a transmon implementation of the surface code, Part II
NASA Astrophysics Data System (ADS)
O'Brien, Thomas; Tarasinski, Brian; Rol, Adriaan; Bultink, Niels; Fu, Xiang; Criger, Ben; Dicarlo, Leonardo
The majority of quantum error correcting circuit simulations use Pauli error channels, as they can be efficiently calculated. This raises two questions: what is the effect of more complicated physical errors on the logical qubit error rate, and how much more efficient can decoders become when accounting for realistic noise? To answer these questions, we design a minimum-weight perfect matching decoder parametrized by a physically motivated noise model and test it on the full density matrix simulation of Surface-17, a distance-3 surface code. We compare performance against other decoders for a range of physical parameters. Particular attention is paid to realistic sources of error for transmon qubits in a circuit QED architecture, and to the requirements for real-time decoding via an FPGA. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
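To make the decoding problem concrete, here is a brute-force minimum-weight sketch for a distance-3 bit-flip repetition code; at this size exhaustive search reproduces what a matching decoder computes, and nothing here represents the authors' Surface-17 simulation.

```python
import itertools

def syndrome(bits):
    # parity checks between neighbouring data bits of a 3-bit repetition code
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def min_weight_correction(syn):
    # enumerate all 8 error patterns, keep the lightest matching the syndrome
    matches = [e for e in itertools.product((0, 1), repeat=3)
               if syndrome(e) == tuple(syn)]
    return min(matches, key=sum)

error = (0, 1, 0)                       # bit flip on the middle data qubit
fix = min_weight_correction(syndrome(error))
decoded = tuple(e ^ f for e, f in zip(error, fix))
assert decoded == (0, 0, 0)             # error removed, no logical flip
print(f"syndrome {syndrome(error)} -> correction {fix}")
```

A matching decoder scales this minimum-weight principle to large codes by pairing syndrome defects in a graph; the paper's contribution is weighting that graph with a physically motivated noise model instead of a uniform Pauli channel.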
Identification and correction of systematic error in high-throughput sequence data
2011-01-01
Background A feature common to all DNA sequencing technologies is the presence of base-call errors in the sequenced reads. The implications of such errors are application specific, ranging from minor informatics nuisances to major problems affecting biological inferences. Recently developed "next-gen" sequencing technologies have greatly reduced the cost of sequencing, but have been shown to be more error prone than previous technologies. Both position specific (depending on the location in the read) and sequence specific (depending on the sequence in the read) errors have been identified in Illumina and Life Technology sequencing platforms. We describe a new type of systematic error that manifests as statistically unlikely accumulations of errors at specific genome (or transcriptome) locations. Results We characterize and describe systematic errors using overlapping paired reads from high-coverage data. We show that such errors occur in approximately 1 in 1000 base pairs, and that they are highly replicable across experiments. We identify motifs that are frequent at systematic error sites, and describe a classifier that distinguishes heterozygous sites from systematic error. Our classifier is designed to accommodate data from experiments in which the allele frequencies at heterozygous sites are not necessarily 0.5 (such as in the case of RNA-Seq), and can be used with single-end datasets. Conclusions Systematic errors can easily be mistaken for heterozygous sites in individuals, or for SNPs in population analyses. Systematic errors are particularly problematic in low coverage experiments, or in estimates of allele-specific expression from RNA-Seq data. Our characterization of systematic error has allowed us to develop a program, called SysCall, for identifying and correcting such errors. We conclude that correction of systematic errors is important to consider in the design and interpretation of high-throughput sequencing experiments. PMID:22099972
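One simple way to flag the "statistically unlikely accumulations of errors" the abstract describes is a binomial tail test per site; the error rate, depth, and threshold below are assumptions for illustration, not SysCall's actual procedure.

```python
from scipy.stats import binomtest

per_base_error = 0.005          # assumed per-base sequencing error rate
depth, mismatches = 200, 12     # coverage and mismatches at one site

test = binomtest(mismatches, depth, per_base_error, alternative="greater")
flagged = test.pvalue < 1e-6    # assumed significance threshold
print(f"p = {test.pvalue:.2e}; systematic-error candidate: {flagged}")
```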
Reducing diagnostic errors in medicine: what's the goal?
Graber, Mark; Gordon, Ruthanna; Franklin, Nancy
2002-10-01
This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine: "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee system errors will persist, when resources are just shifted. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness. Diagnostic error can be substantially reduced, but never eradicated.
Human error in hospitals and industrial accidents: current concepts.
Spencer, F C
2000-10-01
Most data concerning errors and accidents are from industrial accidents and airline injuries. General Electric, Alcoa, and Motorola, among others, all have reported complex programs that resulted in a marked reduction in frequency of worker injuries. In the field of medicine, however, with the outstanding exception of anesthesiology, there is a paucity of information, most reports referring to the 1984 Harvard-New York State Study, more than 16 years ago. This scarcity of information indicates the complexity of the problem. It seems very unlikely that simple exhortation or additional regulations will help because the problem lies principally in the multiple human-machine interfaces that constitute modern medical care. The absence of success stories also indicates that the best methods have to be learned by experience. A liaison with industry should be helpful, although the varieties of human illness are far different from a standardized manufacturing process. Concurrent with the studies of industrial and nuclear accidents, cognitive psychologists have intensively studied how the brain stores and retrieves information. Several concepts have emerged. First, errors are not character defects to be treated by the classic approach of discipline and education, but are byproducts of normal thinking that occur frequently. Second, major accidents are rarely caused by a single error; instead, they are often a combination of chronic system errors, termed latent errors. Identifying and correcting these latent errors should be the principal focus for corrective planning rather than searching for an individual culprit. This nonpunitive concept of errors is a key basis for an effective reporting system, brilliantly demonstrated in aviation with the ASRS system developed more than 25 years ago. The ASRS currently receives more than 30,000 reports annually and is credited with the remarkable increase in safety of airplane travel. Adverse drug events constitute about 25% of hospital errors. In the future, the combination of new drugs and a vast amount of new information will additionally increase the possibilities for error. Two major advances in recent years have been computerization and active participation of the pharmacist with dispensing medications. Further investigation of hospital errors should concentrate primarily on latent system errors. Significant system changes will require broad staff participation throughout the hospital. This, in turn, should foster development of an institutional safety culture, rather than the popular attitude that patient safety responsibility is concentrated in the Quality Assurance-Risk Management division. Quality of service and patient safety are closely intertwined.
The Importance of Semi-Major Axis Knowledge in the Determination of Near-Circular Orbits
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Schiesser, Emil R.
1998-01-01
Modern orbit determination has mostly been accomplished using Cartesian coordinates. This usage has carried over in recent years to the use of GPS for satellite orbit determination. The unprecedented positioning accuracy of GPS has tended to focus attention more on the system's capability to locate the spacecraft at a particular epoch than on its accuracy in determination of the orbit per se. As is well known, the latter depends on a coordinated knowledge of position, velocity, and the correlation between their errors. Failure to determine a properly coordinated position/velocity state vector at a given epoch can lead to an epoch state that does not propagate well, and/or may not be usable for the execution of orbit adjustment maneuvers. For the quite common case of near-circular orbits, the degree to which position and velocity estimates are properly coordinated is largely captured by the error in semi-major axis (SMA) they jointly produce. Figure 1 depicts the relationships among radius error, speed error, and their correlation which exist for a typical low altitude Earth orbit. Two familiar consequences of the relationships Figure 1 shows are the following: (1) downrange position error grows at the per-orbit rate of 3π times the SMA error; (2) a velocity change imparted to the orbit will have an error of π divided by the orbit period, times the SMA error. A less familiar consequence occurs in the problem of initializing the covariance matrix for a sequential orbit determination filter. An initial covariance consistent with orbital dynamics should be used if the covariance is to propagate well. Properly accounting for the SMA error of the initial state in the construction of the initial covariance accomplishes half of this objective, by specifying the partition of the covariance corresponding to down-track position and radial velocity errors. The remainder of the in-plane covariance partition may be specified in terms of the flight path angle error of the initial state. Figure 2 illustrates the effect of properly and improperly initializing a covariance. This figure was produced by propagating the covariance shown on the plot, without process noise, in a circular low Earth orbit whose period is 5828.5 seconds. The upper subplot, in which the proper relationships among position, velocity, and their correlation have been used, shows overall error growth, in terms of the standard deviations of the inertial position coordinates, of about half that of the lower subplot, whose initial covariance was based on other considerations.
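The two rules of thumb quoted in the abstract are easy to evaluate numerically; the SMA error below is an assumed value, and the period is the abstract's example orbit.

```python
import math

sma_error = 10.0        # assumed semi-major axis error [m]
period = 5828.5         # orbit period from the abstract's example [s]

downrange_growth = 3 * math.pi * sma_error    # per-orbit downrange drift [m]
dv_error = math.pi / period * sma_error       # maneuver velocity error [m/s]

print(f"downrange error growth: {downrange_growth:.1f} m per orbit")
print(f"velocity-change error:  {dv_error * 100:.3f} cm/s")
```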
Errors in veterinary practice: preliminary lessons for building better veterinary teams.
Kinnison, T; Guile, D; May, S A
2015-11-14
Case studies in two typical UK veterinary practices were undertaken to explore teamwork, including interprofessional working. Each study involved one week of whole team observation based on practice locations (reception, operating theatre), one week of shadowing six focus individuals (veterinary surgeons, veterinary nurses and administrators) and a final week consisting of semistructured interviews regarding teamwork. Errors emerged as a finding of the study. The definition of errors was inclusive, pertaining to inputs or omitted actions with potential adverse outcomes for patients, clients or the practice. The 40 identified instances could be grouped into clinical errors (dosing/drugs, surgical preparation, lack of follow-up), lost item errors, and most frequently, communication errors (records, procedures, missing face-to-face communication, mistakes within face-to-face communication). The qualitative nature of the study allowed the underlying cause of the errors to be explored. In addition to some individual mistakes, system faults were identified as a major cause of errors. Observed examples and interviews demonstrated several challenges to interprofessional teamworking which may cause errors, including: lack of time, part-time staff leading to frequent handovers, branch differences and individual veterinary surgeon work preferences. Lessons are drawn for building better veterinary teams and implications for Disciplinary Proceedings considered. British Veterinary Association.
Nature of Medical Malpractice Claims Against Radiation Oncologists.
Marshall, Deborah; Tringale, Kathryn; Connor, Michael; Punglia, Rinaa; Recht, Abram; Hattangadi-Gluth, Jona
2017-05-01
To examine characteristics of medical malpractice claims involving radiation oncologists closed during a 10-year period. Malpractice claims filed against radiation oncologists from 2003 to 2012 collected by a nationwide liability insurance trade association were analyzed. Outcomes included the nature of claims and indemnity payments, including associated presenting diagnoses, procedures, alleged medical errors, and injury severity. We compared the likelihood of a claim resulting in payment in relation to injury severity categories (death as referent) using binomial logistic regression. There were 362 closed claims involving radiation oncology, 102 (28%) of which were paid, resulting in $38 million in indemnity payments. The most common alleged errors included "improper performance" (38% of closed claims, 18% were paid; 29% [$11 million] of total indemnity), "errors in diagnosis" (25% of closed claims, 46% were paid; 44% [$17 million] of total indemnity), and "no medical misadventure" (14% of closed claims, 8% were paid; less than 1% [$148,000] of total indemnity). Another physician was named in 32% of claims, and consent issues/breach of contract were cited in 18%. Claims for injury resulting in death represented 39% of closed claims and 25% of total indemnity. "Improper performance" was the primary alleged error associated with injury resulting in death. Compared with claims involving death, major temporary injury (odds ratio [OR] 2.8, 95% confidence interval [CI] 1.29-5.85, P=.009), significant permanent injury (OR 3.1, 95% CI 1.48-6.46, P=.003), and major permanent injury (OR 5.5, 95% CI 1.89-16.15, P=.002) had a higher likelihood of a claim resulting in indemnity payment. Improper performance was the most common alleged malpractice error. Claims involving significant or major injury were more likely to be paid than those involving death. Insights into the nature of liability claims against radiation oncologists may help direct efforts to improve quality of care and minimize the risk of being sued. Copyright © 2017 Elsevier Inc. All rights reserved.
Chaves, Sandra; Gadanho, Mário; Tenreiro, Rogério; Cabrita, José
1999-01-01
Metronidazole susceptibility of 100 Helicobacter pylori strains was assessed by determining the inhibition zone diameters by disk diffusion test and the MICs by agar dilution and PDM Epsilometer test (E test). Linear regression analysis was performed, allowing the definition of significant linear relations, and revealed correlations of disk diffusion results with both E-test and agar dilution results (r2 = 0.88 and 0.81, respectively). No significant differences (P = 0.84) were found between MICs defined by E test and those defined by agar dilution, taken as a standard. Reproducibility comparison between E-test and disk diffusion tests showed that they are equivalent and with good precision. Two interpretative susceptibility schemes (with or without an intermediate class) were compared by an interpretative error rate analysis method. The susceptibility classification scheme that included the intermediate category was retained, and breakpoints were assessed for diffusion assay with 5-μg metronidazole disks. Strains with inhibition zone diameters less than 16 mm were defined as resistant (MIC > 8 μg/ml), those with zone diameters equal to or greater than 16 mm but less than 21 mm were considered intermediate (4 μg/ml < MIC ≤ 8 μg/ml), and those with zone diameters of 21 mm or greater were regarded as susceptible (MIC ≤ 4 μg/ml). Error rate analysis applied to this classification scheme showed occurrence frequencies of 1% for major errors and 7% for minor errors, when the results were compared to those obtained by agar dilution. No very major errors were detected, suggesting that disk diffusion might be a good alternative for determining the metronidazole sensitivity of H. pylori strains. PMID:10203543
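A sketch of the interpretative error-rate analysis described above, using the abstract's breakpoints; the strain list is invented for illustration.

```python
# Classify each strain from the disk zone diameter and from the reference
# MIC, then tally discrepancies as very major / major / minor errors.
def class_from_zone(mm):
    return "R" if mm < 16 else "I" if mm < 21 else "S"

def class_from_mic(mic):
    return "S" if mic <= 4 else "I" if mic <= 8 else "R"

def error_type(ref, test):
    if ref == test:
        return None
    if ref == "R" and test == "S":
        return "very major"
    if ref == "S" and test == "R":
        return "major"
    return "minor"                 # any disagreement involving "I"

strains = [(25, 1.0), (14, 16.0), (18, 6.0), (22, 8.0)]  # (zone mm, MIC ug/ml)
for zone, mic in strains:
    print(zone, mic, error_type(class_from_mic(mic), class_from_zone(zone)))
```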
Local Media Influence on Opting-Out from an Exception from Informed Consent Trial
Nelson, Maria J; DeIorio, Nicole M; Schmidt, Terri; Griffiths, Denise; Daya, Mohamud; Haywood, Liana; Zive, Dana; Newgard, Craig D
2010-01-01
Objectives News media are used for community education and notification in exception from informed consent clinical trials, yet their effectiveness as an added safeguard in such research remains unknown. We assessed the number of callers requesting opt-out bracelets following each local media report and described the errors and content within each media report. Methods We undertook a descriptive analysis of local media trial coverage (newspaper, television, radio, and weblog) and opt-out requests over a 41-month period at a single site participating in an exception from informed consent out-of-hospital trial. Two non-trial investigators independently assessed forty-one content-based media variables (including background, trial information, graphics, errors, publication information, assessment) using a standardized, semi-qualitative data collection tool. Major errors were considered serious misrepresentation of the trial purpose or protocol, whereas minor errors included misinformation unlikely to mislead the lay reader about the trial. We plotted the temporal relationship between opt-out bracelet requests and media reports. Descriptive information about the news sources and the trial coverage are presented. Results We collected 39 trial-related media reports (33 newspaper, 1 television, 1 radio, and 4 blogs). There were thirteen errors in 9 (23%) publications, 7 of which were major and 6 minor. Of 384 requests for 710 bracelets, 310 requests (80%) occurred within 4 days after trial media coverage. Graphical timeline representation of the data suggested a close association between media reports about the trial and requests for opt-out bracelets. Conclusions Based on results from a single site, local media coverage for an exception from informed consent clinical trial had a substantial portion of errors and appeared closely associated with opt-out requests. PMID:19682770
Comparison of Predictive Modeling Methods of Aircraft Landing Speed
NASA Technical Reports Server (NTRS)
Diallo, Ousmane H.
2012-01-01
Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model with acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction are used to build a multi-regression response surface equation (RSE) model. Data obtained from operations of a major airline for a passenger transport aircraft type at the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model's errors represents an over 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.
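A schematic of the two modeling approaches being compared, fit to synthetic stand-in data (the airline operations data are not public); the feature names and model sizes are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(2)
X = rng.uniform(size=(2000, 4))                    # e.g. weight, wind, flap, temp
y = (130 + 25 * X[:, 0] + 10 * X[:, 1] * X[:, 2]   # synthetic landing speed [kt]
     + np.sin(3 * X[:, 3]) + rng.normal(scale=2.0, size=2000))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rse = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
nn = make_pipeline(StandardScaler(),
                   MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                random_state=0))
for name, model in [("RSE", rse), ("NN", nn)]:
    model.fit(X_tr, y_tr)
    sd = np.std(y_te - model.predict(X_te))
    print(f"{name}: prediction-error standard deviation = {sd:.2f} kt")
```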
Juhlin, Kristina; Norén, G. Niklas
2017-01-01
Purpose To develop a method for data‐driven exploration in pharmacovigilance and illustrate its use by identifying the key features of individual case safety reports related to medication errors. Methods We propose vigiPoint, a method that contrasts the relative frequency of covariate values in a data subset of interest to those within one or more comparators, utilizing odds ratios with adaptive statistical shrinkage. Nested analyses identify higher order patterns, and permutation analysis is employed to protect against chance findings. For illustration, a total of 164 000 adverse event reports related to medication errors were characterized and contrasted to the other 7 833 000 reports in VigiBase, the WHO global database of individual case safety reports, as of May 2013. The initial scope included 2000 features, such as patient age groups, reporter qualifications, and countries of origin. Results vigiPoint highlighted 109 key features of medication error reports. The most prominent were that the vast majority of medication error reports were from the United States (89% compared with 49% for other reports in VigiBase); that the majority of reports were sent by consumers (53% vs 17% for other reports); that pharmacists (12% vs 5.3%) and lawyers (2.9% vs 1.5%) were overrepresented; and that there were more medication error reports than expected for patients aged 2‐11 years (10% vs 5.7%), particularly in Germany (16%). Conclusions vigiPoint effectively identified key features of medication error reports in VigiBase. More generally, it reduces lead times for analysis and ensures reproducibility and transparency. An important next step is to evaluate its use in other data. PMID:28815800
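A sketch of a shrinkage-adjusted odds ratio of the kind vigiPoint relies on; the additive +0.5 shrinkage toward OR = 1 and the counts below are assumptions for illustration, not the published method's exact constants.

```python
import math

def shrunk_log_odds_ratio(k_subset, n_subset, k_comp, n_comp, shrink=0.5):
    # additive shrinkage pulls extreme ratios from sparse counts toward 0
    odds_subset = (k_subset + shrink) / (n_subset - k_subset + shrink)
    odds_comp = (k_comp + shrink) / (n_comp - k_comp + shrink)
    return math.log2(odds_subset / odds_comp)

# e.g. consumer-sent reports: ~53% of 164 000 vs ~17% of 7 833 000
print(shrunk_log_odds_ratio(86_920, 164_000, 1_331_610, 7_833_000))
```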
Hickey, Edward J; Nosikova, Yaroslavna; Pham-Hung, Eric; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Redington, Andrew; Van Arsdell, Glen S
2015-02-01
We hypothesized that the National Aeronautics and Space Administration "threat and error" model (which is derived from analyzing >30,000 commercial flights, and explains >90% of crashes) is directly applicable to pediatric cardiac surgery. We implemented a unit-wide performance initiative, whereby every surgical admission constitutes a "flight" and is tracked in real time, with the aim of identifying errors. The first 500 consecutive patients (524 flights) were analyzed, with an emphasis on the relationship between error cycles and permanent harmful outcomes. Among 524 patient flights (risk adjustment for congenital heart surgery category: 1-6; median: 2) 68 (13%) involved residual hemodynamic lesions, 13 (2.5%) permanent end-organ injuries, and 7 deaths (1.3%). Preoperatively, 763 threats were identified in 379 (72%) flights. Only 51% of patient flights (267) were error free. In the remaining 257 flights, 430 errors occurred, most commonly related to proficiency (280; 65%) or judgment (69, 16%). In most flights with errors (173 of 257; 67%), an unintended clinical state resulted, ie, the error was consequential. In 60% of consequential errors (n = 110; 21% of total), subsequent cycles of additional error/unintended states occurred. Cycles, particularly those containing multiple errors, were very significantly associated with permanent harmful end-states, including residual hemodynamic lesions (P < .0001), end-organ injury (P < .0001), and death (P < .0001). Deaths were almost always preceded by cycles (6 of 7; P < .0001). Human error, if not mitigated, often leads to cycles of error and unintended patient states, which are dangerous and precede the majority of harmful outcomes. Efforts to manage threats and error cycles (through crew resource management techniques) are likely to yield large increases in patient safety. Copyright © 2015. Published by Elsevier Inc.
Significant and Sustained Reduction in Chemotherapy Errors Through Improvement Science.
Weiss, Brian D; Scott, Melissa; Demmel, Kathleen; Kotagal, Uma R; Perentesis, John P; Walsh, Kathleen E
2017-04-01
A majority of children with cancer are now cured with highly complex chemotherapy regimens incorporating multiple drugs and demanding monitoring schedules. The risk for error is high, and errors can occur at any stage in the process, from order generation to pharmacy formulation to bedside drug administration. Our objective was to describe a program to eliminate errors in chemotherapy use among children. To increase reporting of chemotherapy errors, we supplemented the hospital reporting system with a new chemotherapy near-miss reporting system. Following the Model for Improvement, we then implemented several interventions, including a daily chemotherapy huddle, improvements to the preparation and delivery of intravenous therapy, headphones for clinicians ordering chemotherapy, and standards for chemotherapy administration throughout the hospital. Twenty-two months into the project, we saw a centerline shift in our U chart of chemotherapy errors that reached the patient, from a baseline rate of 3.8 to 1.9 per 1,000 doses. This shift has been sustained for > 4 years. In Poisson regression analyses, we found an initial increase in error rates, followed by a significant decline in errors after 16 months of improvement work (P < .001). Our improvement efforts, guided by the Model for Improvement, were associated with significant reductions in chemotherapy errors that reached the patient. Key drivers for our success included error vigilance through a huddle, standardization, and minimization of interruptions during ordering.
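A sketch of a Poisson regression with dose exposure, the kind of analysis reported above; the monthly counts are simulated stand-ins, not the hospital's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
months = np.arange(48)
doses = rng.integers(800, 1200, size=48)        # chemo doses per month
rate = np.where(months < 22, 3.8e-3, 1.9e-3)    # errors per dose, pre/post shift
errors = rng.poisson(rate * doses)

after = (months >= 22).astype(float)            # post-improvement indicator
X = sm.add_constant(after)
fit = sm.GLM(errors, X, family=sm.families.Poisson(), exposure=doses).fit()
print(fit.params)   # second coefficient should be near log(1.9/3.8) ~ -0.69
```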
Failure analysis and modeling of a multicomputer system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Subramani, Sujatha Srinivasan
1990-01-01
This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
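The k-out-of-n comparison can be illustrated with a simple availability calculation for independent machines; the per-machine availability below is an assumed value, and this sketch is not the thesis's reward model.

```python
from math import comb

def k_out_of_n(k, n, p):
    # probability that at least k of n independent machines are up
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.95                                  # assumed per-machine availability
for k in range(3, 8):
    print(f"{k}-out-of-7: {k_out_of_n(k, 7, p):.6f}")
```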
Enhanced Oceanic Operations Human-In-The-Loop In-Trail Procedure Validation Simulation Study
NASA Technical Reports Server (NTRS)
Murdoch, Jennifer L.; Bussink, Frank J. L.; Chamberlain, James P.; Chartrand, Ryan C.; Palmer, Michael T.; Palmer, Susan O.
2008-01-01
The Enhanced Oceanic Operations Human-In-The-Loop In-Trail Procedure (ITP) Validation Simulation Study investigated the viability of an ITP designed to enable oceanic flight level changes that would not otherwise be possible. Twelve commercial airline pilots with current oceanic experience flew a series of simulated scenarios involving either standard or ITP flight level change maneuvers and provided subjective workload ratings, assessments of ITP validity and acceptability, and objective performance measures associated with the appropriate selection, request, and execution of ITP flight level change maneuvers. In the majority of scenarios, subject pilots correctly assessed the traffic situation, selected an appropriate response (i.e., either a standard flight level change request, an ITP request, or no request), and executed their selected flight level change procedure, if any, without error. Workload ratings for ITP maneuvers were acceptable and not substantially higher than for standard flight level change maneuvers, and, for the majority of scenarios and subject pilots, subjective acceptability ratings and comments for ITP were generally high and positive. Qualitatively, the ITP was found to be valid and acceptable. However, the error rates for ITP maneuvers were higher than for standard flight level changes, and these errors may have design implications for both the ITP and the study's prototype traffic display. These errors and their implications are discussed.
Fisseni, Gregor; Pentzek, Michael; Abholz, Heinz-Harald
2008-02-01
GPs' recollections of their 'most serious errors in treatment' and of the consequences for themselves. Does it make a difference who (else) contributed to the error, or to its discovery or disclosure? Anonymous questionnaire study concerning the 'three most serious errors in your career as a GP'. The participating doctors were given an operational definition of 'serious error'. They applied a special recall technique, using patient-induced associations to bring to mind former 'serious errors'. The recall method and the semi-structured 25-item questionnaire used were developed and piloted by the authors. The items were analysed quantitatively and by qualitative content analysis. General practices in the North Rhine region of Germany: 32 GPs anonymously reported on 75 'most serious errors'. In more than half of the cases analysed, other people contributed considerably to the GPs' serious errors. Most of the errors were discovered and disclosed to the patient by doctors: either by the GPs themselves, or by colleagues. Many GPs suffered loss of reputation and loss of patients. However, the number of patients staying with their GP clearly exceeded the number leaving, depending on who else contributed to the error, who discovered it and who disclosed it to the patient. The majority of patients still trusted their GP after a serious error, especially if the GP was not the only one who contributed to the error and if the GP played an active role in the discovery and disclosure of the error.
Using snowball sampling method with nurses to understand medication administration errors.
Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In
2009-02-01
We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported. Effective ways are lacking to encourage nurses to actively report errors. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical surgical wards of teaching hospitals, during day shifts, committed by nurses working fewer than two years. Leading errors were wrong drugs and doses, each accounting for about one-third of total errors. Among 259 actual errors, 83.8% resulted in no adverse effects; among remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and nurses responsible for errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69) and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; using intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors. Using empirical data, we identified high-alert situations. Strategies for reducing drug administration errors by nurses are suggested. Survey results suggest that nurses should double check medication administration in known high-alert situations. Nursing management can use snowball sampling to gather error details from nurses in a non-reprimanding atmosphere, helping to establish standard operational procedures for known high-alert situations.
A root cause analysis project in a medication safety course.
Schafer, Jason J
2012-08-10
To develop, implement, and evaluate team-based root cause analysis projects as part of a required medication safety course for second-year pharmacy students. Lectures, in-class activities, and out-of-class reading assignments were used to develop students' medication safety skills and introduce them to the culture of medication safety. Students applied these skills within teams by evaluating cases of medication errors using root cause analyses. Teams also developed error prevention strategies and formally presented their findings. Student performance was assessed using a medication errors evaluation rubric. Of the 211 students who completed the course, the majority performed well on root cause analysis assignments and rated them favorably on course evaluations. Medication error evaluation and prevention was successfully introduced in a medication safety course using team-based root cause analysis projects.
Reducing Wrong Patient Selection Errors: Exploring the Design Space of User Interface Techniques
Sopan, Awalin; Plaisant, Catherine; Powsner, Seth; Shneiderman, Ben
2014-01-01
Wrong patient selection errors are a major issue for patient safety; from ordering medication to performing surgery, the stakes are high. Widespread adoption of Electronic Health Record (EHR) and Computerized Provider Order Entry (CPOE) systems makes patient selection using a computer screen a frequent task for clinicians. Careful design of the user interface can help mitigate the problem by helping providers recall their patients’ identities, accurately select their names, and spot errors before orders are submitted. We propose a catalog of twenty seven distinct user interface techniques, organized according to a task analysis. An associated video demonstrates eighteen of those techniques. EHR designers who consider a wider range of human-computer interaction techniques could reduce selection errors, but verification of efficacy is still needed. PMID:25954415
Main sources of errors in diagnosis of chronic radiation sickness (in Russian)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soldatova, V.A.
1973-11-01
With the aim of finding out the main sources of errors in the diagnosis of chronic radiation sickness, the author analyzed a total of 500 cases of this sickness in roentgenologists and radiologists sent to the clinic to be examined according to occupational indications. It was shown that the main source of errors when interpreting the observed deviations as occupational was underestimation of the etiological significance of functional and organic diseases of the nervous system, endocrine-vascular dystonia, and also such diseases as hypochromic anemia and chronic infection. The majority of diagnostic errors is explained by insufficient knowledge of the main regularity of forming the picture of chronic radiation sickness and by the absence of the necessary differential diagnosis with general somatic diseases. (auth)
Current pulse: can a production system reduce medical errors in health care?
Printezis, Antonios; Gopalakrishnan, Mohan
2007-01-01
One of the reasons for rising health care costs is medical errors, a majority of which result from faulty systems and processes. Health care in the past has used process-based initiatives such as Total Quality Management, Continuous Quality Improvement, and Six Sigma to reduce errors. These initiatives to redesign health care, reduce errors, and improve overall efficiency and customer satisfaction have had moderate success. The current trend is to apply the successful Toyota Production System (TPS) to health care, since its organizing principles have led to tremendous improvement in productivity and quality for Toyota and other businesses that have adapted them. This article presents insights on the effectiveness of TPS principles in health care and the challenges that lie ahead in successfully integrating this approach with other quality initiatives.
Center of mass perception and inertial frames of reference.
Bingham, G P; Muchisky, M M
1993-11-01
Center of mass perception was investigated by varying the shape, size, and orientation of planar objects. Shape was manipulated to investigate symmetries as information. The number of reflective symmetry axes, the amount of rotational symmetry, and the presence of radial symmetry were varied. Orientation affected systematic errors. Judgments tended to undershoot the center of mass. Random errors increased with size and decreased with symmetry. Size had no effect on random errors for maximally symmetric objects, although orientation did. The spatial distributions of judgments were elliptical. Distribution axes were found to align with the principal moments of inertia. Major axes tended to align with gravity in maximally symmetric objects. A functional and physical account was given in terms of the repercussions of error. Overall, judgments were very accurate.
Fischer, Melissa A; Mazor, Kathleen M; Baril, Joann; Alper, Eric; DeMarco, Deborah; Pugnaire, Michele
2006-01-01
CONTEXT Trainees are exposed to medical errors throughout medical school and residency. Little is known about what facilitates and limits learning from these experiences. OBJECTIVE To identify major factors and areas of tension in trainees' learning from medical errors. DESIGN, SETTING, AND PARTICIPANTS Structured telephone interviews with 59 trainees (medical students and residents) from 1 academic medical center. Five authors reviewed transcripts of audiotaped interviews using content analysis. RESULTS Trainees were aware that medical errors occur from early in medical school. Many had an intense emotional response to the idea of committing errors in patient care. Students and residents noted variation and conflict in institutional recommendations and individual actions. Many expressed role confusion regarding whether and how to initiate discussion after errors occurred. Some noted the conflict inherent in reporting errors to seniors who were also responsible for their evaluation. Learners requested more open discussion of actual errors and faculty disclosure. No students or residents felt that they learned better from near misses than from actual errors, and many believed that they learned the most when harm was caused. CONCLUSIONS Trainees are aware of medical errors, but remaining tensions may limit learning. Institutions can immediately address variability in faculty response and local culture by disseminating clear, accessible algorithms to guide behavior when errors occur. Educators should develop longitudinal curricula that integrate actual cases and faculty disclosure. Future multi-institutional work should focus on identified themes such as teaching and learning in emotionally charged situations, learning from errors and near misses, and balance between individual and systems responsibility. PMID:16704381
Analyzing temozolomide medication errors: potentially fatal.
Letarte, Nathalie; Gabay, Michael P; Bressler, Linda R; Long, Katie E; Stachnik, Joan M; Villano, J Lee
2014-10-01
The EORTC-NCIC regimen for glioblastoma requires different dosing of temozolomide (TMZ) during radiation and maintenance therapy. This complexity is exacerbated by the availability of multiple TMZ capsule strengths. TMZ is an alkylating agent and the major toxicity of this class is dose-related myelosuppression. Inadvertent overdose can be fatal. The websites of the Institute for Safe Medication Practices (ISMP), and the Food and Drug Administration (FDA) MedWatch database were reviewed. We searched the MedWatch database for adverse events associated with TMZ and obtained all reports including hematologic toxicity submitted from 1st November 1997 to 30th May 2012. The ISMP describes errors with TMZ resulting from the positioning of information on the label of the commercial product. The strength and quantity of capsules on the label were in close proximity to each other, and this has been changed by the manufacturer. MedWatch identified 45 medication errors. Patient errors were the most common, accounting for 21 or 47% of errors, followed by dispensing errors, which accounted for 13 or 29%. Seven reports or 16% were errors in the prescribing of TMZ. Reported outcomes ranged from reversible hematological adverse events (13%), to hospitalization for other adverse events (13%) or death (18%). Four error reports lacked detail and could not be categorized. Although the FDA issued a warning in 2003 regarding fatal medication errors and the product label warns of overdosing, errors in TMZ dosing occur for various reasons and involve both healthcare professionals and patients. Overdosing errors can be fatal.
Organizational safety culture and medical error reporting by Israeli nurses.
Kagan, Ilya; Barnoy, Sivia
2013-09-01
To investigate the association between patient safety culture (PSC) and the incidence and reporting rate of medical errors by Israeli nurses. Self-administered structured questionnaires were distributed to a convenience sample of 247 registered nurses enrolled in training programs at Tel Aviv University (response rate = 91%). The questionnaire's three sections examined the incidence of medication mistakes in clinical practice, the reporting rate for these errors, and the participants' views and perceptions of the safety culture in their workplace at three levels (organizational, departmental, and individual performance). Pearson correlation coefficients, t tests, and multiple regression analysis were used to analyze the data. Most nurses encountered medical errors on a daily to weekly basis. Six percent of the sample never reported their own errors, while half reported their own errors "rarely or sometimes." The level of PSC was positively and significantly correlated with the error reporting rate. PSC, place of birth, error incidence, and not having an academic nursing degree were significant predictors of error reporting, together explaining 28% of variance. This study confirms the influence of an organizational safety climate on readiness to report errors. Senior healthcare executives and managers can make a major impact on safety culture development by creating and promoting a vision and strategy for quality and safety and fostering their employees' motivation to implement improvement programs at the departmental and individual level. A positive, carefully designed organizational safety culture can encourage error reporting by staff and so improve patient safety. © 2013 Sigma Theta Tau International.
Hobgood, Cherri; Xie, Jipan; Weiner, Bryan; Hooker, James
2004-02-01
To gather preliminary data on how the three major types of emergency medicine (EM) providers, physicians, nurses (RNs), and out-of-hospital personnel (EMTs), differ in error identification, disclosure, and reporting. A convenience sample of emergency department (ED) providers completed a brief survey designed to evaluate error frequency, disclosure, and reporting practices as well as error-based discussion and educational activities. One hundred sixteen subjects participated: 41 EMTs (35%), 33 RNs (28%), and 42 physicians (36%). Forty-five percent of EMTs, 56% of RNs, and 21% of physicians identified no clinical errors during the preceding year. When errors were identified, physicians learned of them via dialogue with RNs (58%), patients (13%), pharmacy (35%), and attending physicians (35%). For known errors, all providers were equally unlikely to inform the team caring for the patient. Disclosure to patients was limited and varied by provider type (19% EMTs, 23% RNs, and 74% physicians). Disclosure education was rare, with
Some tests of wet tropospheric calibration for the CASA Uno Global Positioning System experiment
NASA Technical Reports Server (NTRS)
Dixon, T. H.; Wolf, S. Kornreich
1990-01-01
Wet tropospheric path delay can be a major error source for Global Positioning System (GPS) geodetic experiments. Strategies for minimizing this error are investigated using data from CASA Uno, the first major GPS experiment in Central and South America, where wet path delays may be both high and variable. Wet path delay calibration using water vapor radiometers (WVRs) and residual delay estimation is compared with strategies where the entire wet path delay is estimated stochastically without prior calibration, using data from a 270-km test baseline in Costa Rica. Both approaches yield centimeter-level baseline repeatability and similar tropospheric estimates, suggesting that WVR calibration is not critical for obtaining high precision results with GPS in the CASA region.
Error detection and reduction in blood banking.
Motschman, T L; Moore, S B
1996-12-01
Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as active quality monitoring. To assure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keep employees practiced and confident, and diminish fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition for reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility a functional error classification method and a quality system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle of quality assurance. Ultimately, the goal of better patient care will be the reward.
Terrestrial Water Mass Load Changes from Gravity Recovery and Climate Experiment (GRACE)
NASA Technical Reports Server (NTRS)
Seo, K.-W.; Wilson, C. R.; Famiglietti, J. S.; Chen, J. L.; Rodell, M.
2006-01-01
Recent studies show that data from the Gravity Recovery and Climate Experiment (GRACE) is promising for basin- to global-scale water cycle research. This study provides varied assessments of errors associated with GRACE water storage estimates. Thirteen monthly GRACE gravity solutions from August 2002 to December 2004 are examined, along with synthesized GRACE gravity fields for the same period that incorporate simulated errors. The synthetic GRACE fields are calculated using numerical climate models and GRACE internal error estimates. We consider the influence of measurement noise, spatial leakage error, and atmospheric and ocean dealiasing (AOD) model error as the major contributors to the error budget. Leakage error arises from the limited range of GRACE spherical harmonics not corrupted by noise. AOD model error is due to imperfect correction for atmosphere and ocean mass redistribution applied during GRACE processing. Four methods of forming water storage estimates from GRACE spherical harmonics (four different basin filters) are applied to both GRACE and synthetic data. Two basin filters use Gaussian smoothing, and the other two are dynamic basin filters which use knowledge of geographical locations where water storage variations are expected. Global maps of measurement noise, leakage error, and AOD model errors are estimated for each basin filter. Dynamic basin filters yield the smallest errors and highest signal-to-noise ratio. Within 12 selected basins, GRACE and synthetic data show similar amplitudes of water storage change. Using 53 river basins, covering most of Earth's land surface excluding Antarctica and Greenland, we document how error changes with basin size, latitude, and shape. Leakage error is most affected by basin size and latitude, and AOD model error is most dependent on basin latitude.
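For readers unfamiliar with the Gaussian smoothing mentioned above, the sketch below computes the standard degree-dependent Gaussian averaging weights for spherical harmonic coefficients (the Jekeli recursion popularized by Wahr et al., 1998). It is a generic illustration, not the study's dynamic basin filters; the function name and the 500 km radius are choices made for the example.

```python
import numpy as np

def gaussian_weights(lmax, radius_km, a_km=6378.137):
    """Jekeli-style Gaussian averaging weights W_l for spherical
    harmonic degrees 0..lmax, for a smoothing radius in km
    (standard recursion after Wahr et al., 1998); a_km is Earth's
    equatorial radius."""
    b = np.log(2.0) / (1.0 - np.cos(radius_km / a_km))
    W = np.zeros(lmax + 1)
    W[0] = 1.0
    W[1] = (1.0 + np.exp(-2.0 * b)) / (1.0 - np.exp(-2.0 * b)) - 1.0 / b
    for l in range(1, lmax):
        W[l + 1] = -(2 * l + 1) / b * W[l] + W[l - 1]
        if W[l + 1] < 0:   # recursion becomes numerically unstable near zero
            W[l + 1:] = 0.0
            break
    return W

# Damp a degree spectrum with a 500 km smoothing radius:
W = gaussian_weights(60, 500.0)
print(W[:5])
```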
Claims, errors, and compensation payments in medical malpractice litigation.
Studdert, David M; Mello, Michelle M; Gawande, Atul A; Gandhi, Tejal K; Kachalia, Allen; Yoon, Catherine; Puopolo, Ann Louise; Brennan, Troyen A
2006-05-11
In the current debate over tort reform, critics of the medical malpractice system charge that frivolous litigation--claims that lack evidence of injury, substandard care, or both--is common and costly. Trained physicians reviewed a random sample of 1452 closed malpractice claims from five liability insurers to determine whether a medical injury had occurred and, if so, whether it was due to medical error. We analyzed the prevalence, characteristics, litigation outcomes, and costs of claims that lacked evidence of error. For 3 percent of the claims, there were no verifiable medical injuries, and 37 percent did not involve errors. Most of the claims that were not associated with errors (370 of 515 [72 percent]) or injuries (31 of 37 [84 percent]) did not result in compensation; most that involved injuries due to error did (653 of 889 [73 percent]). Payment of claims not involving errors occurred less frequently than did the converse form of inaccuracy--nonpayment of claims associated with errors. When claims not involving errors were compensated, payments were significantly lower on average than were payments for claims involving errors (313,205 dollars vs. 521,560 dollars, P=0.004). Overall, claims not involving errors accounted for 13 to 16 percent of the system's total monetary costs. For every dollar spent on compensation, 54 cents went to administrative expenses (including those involving lawyers, experts, and courts). Claims involving errors accounted for 78 percent of total administrative costs. Claims that lack evidence of error are not uncommon, but most are denied compensation. The vast majority of expenditures go toward litigation over errors and payment of them. The overhead costs of malpractice litigation are exorbitant. Copyright 2006 Massachusetts Medical Society.
A national physician survey of diagnostic error in paediatrics.
Perrem, Lucy M; Fanshawe, Thomas R; Sharif, Farhana; Plüddemann, Annette; O'Neill, Michael B
2016-10-01
This cross-sectional survey explored paediatric physician perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54 % (n = 127). Respondents had a median of 9 years' clinical experience (interquartile range (IQR) 4-20 years). A diagnostic error was reported at least monthly by 19 (15.0 %) respondents. Consultants reported significantly fewer diagnostic errors compared to trainees (p value = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked, clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification. The results identify perceptions on the major cognitive, system-related and situational factors contributing to diagnostic error and also key preventative strategies. What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10-15 %. • They are multifactorial in origin and include cognitive, system-related and situational factors. What is New: • We identified a low rate of self-perceived diagnostic error in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.
NASA Technical Reports Server (NTRS)
Casper, Paul W.; Bent, Rodney B.
1991-01-01
The algorithm used in earlier time-of-arrival lightning mapping systems was based on the assumption that the earth is a perfect sphere. These systems yield highly accurate lightning locations, which is their major strength. However, extensive analysis of tower strike data has revealed occasionally significant (one to two kilometer) systematic offset errors which are not explained by the usual error sources. It was determined that these systematic errors reduce dramatically (in some cases) when the oblate shape of the earth is taken into account. The oblate spheroid correction algorithm and a case example are presented.
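To get a feel for the size of the sphere-versus-ellipsoid effect the abstract describes, the hedged sketch below compares a spherical great-circle distance with the WGS-84 ellipsoidal geodesic over a few hundred kilometers. It assumes the pyproj package is available; the coordinates are hypothetical, and this is not the authors' correction algorithm.

```python
from math import radians, sin, cos, acos
from pyproj import Geod  # assumed available; any geodesic library would do

def sphere_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Great-circle distance (km) on a spherical Earth of radius R."""
    p1, p2 = radians(lat1), radians(lat2)
    dlon = radians(lon2 - lon1)
    return R * acos(sin(p1) * sin(p2) + cos(p1) * cos(p2) * cos(dlon))

geod = Geod(ellps="WGS84")
lat1, lon1, lat2, lon2 = 28.0, -81.0, 30.0, -79.0   # hypothetical sensor pair
_, _, ell_m = geod.inv(lon1, lat1, lon2, lat2)      # ellipsoidal geodesic (m)
# Kilometer-level discrepancy of the spherical assumption:
print(sphere_km(lat1, lon1, lat2, lon2) - ell_m / 1000.0)
```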
Chiang, Hui-Ying; Hsiao, Ya-Chu; Lee, Huan-Fang
Nurses' safety practices of medication administration, prevention of falls and unplanned extubations, and handover are essential to patient safety. This study explored how work environment factors, workload, job satisfaction, and error-reporting culture predicted these safety practices among 1429 Taiwanese nurses. Nurses' job satisfaction, error-reporting culture, and one environmental factor of nursing quality were found to be major predictors of safety practices. The other environmental factors, related to professional development and participation in hospital affairs, and nurses' workload had limited predictive effects on the safety practices. Increasing nurses' attention to patient safety by improving these predictors is recommended.
Impact of an antiretroviral stewardship strategy on medication error rates.
Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E
2018-05-02
The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented and included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate (p < 0.001). In the postintervention group, significantly lower medication error rates were found in both patient admissions containing at least 1 medication error (p < 0.001) and those with 2 or more errors (p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention (p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients including prospective audit by staff pharmacists through use of an antiretroviral medication therapy checklist at the time of order verification decreased error rates. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Error Correcting Optical Mapping Data.
Mukherjee, Kingshuk; Washimkar, Darshan; Muggli, Martin D; Salmela, Leena; Boucher, Christina
2018-05-26
Optical mapping is a unique system that is capable of producing high-resolution, high-throughput genomic map data that gives information about the structure of a genome [21]. Recently it has been used for scaffolding contigs and assembly validation for large-scale sequencing projects, including the maize [32], goat [6], and amborella [4] genomes. However, a major impediment in the use of this data is the variety and quantity of errors in the raw optical mapping data, which are called Rmaps. The challenges associated with using Rmap data are analogous to dealing with insertions and deletions in the alignment of long reads. Moreover, they are arguably harder to tackle since the data is numerical and susceptible to inaccuracy. We develop cOMET to error correct Rmap data, which to the best of our knowledge is the only optical mapping error correction method. Our experimental results demonstrate that cOMET has high precision and corrects 82.49% of insertion errors and 77.38% of deletion errors in Rmap data generated from the E. coli K-12 reference genome. Out of the deletion errors corrected, 98.26% are true errors. Similarly, out of the insertion errors corrected, 82.19% are true errors. It also successfully scales to large genomes, improving the quality of 78% and 99% of the Rmaps in the plum and goat genomes, respectively. Lastly, we show the utility of error correction by demonstrating how it improves the assembly of Rmap data. Error corrected Rmap data results in an assembly that is more contiguous, and covers a larger fraction of the genome.
NASA Astrophysics Data System (ADS)
Lin, Tsungpo
Performance engineers face a major challenge in modeling and simulation for the after-market power system due to system degradation and measurement errors. Currently, most of the power generation industry utilizes the deterministic data matching method to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and increases the risk of providing performance guarantees. In this research work, a maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and populated to performance simulation by using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which the serial bias compensation or robust M-estimator is engaged. To achieve a better efficiency in the combined scheme of the least squares based data reconciliation and the GED technique based on hypotheses testing, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using the response surface equation (RSE) and system/process decomposition are incorporated with the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees arising from uncertainties in performance simulation.
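As a minimal illustration of the data reconciliation idea at the heart of SDRMC, the sketch below performs maximum-likelihood (weighted least-squares) reconciliation of three redundant flow measurements subject to a linear mass balance. The measurement values and the constraint are invented for the example; the full scheme described above (GED stages, LM optimizer, RSE meta-models) is far richer.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical redundant measurements of three flows with f1 + f2 = f3.
m = np.array([10.2, 5.1, 14.6])      # measured values
sigma = np.array([0.2, 0.1, 0.3])    # measurement standard deviations

def objective(x):
    # Maximum-likelihood reconciliation term (weighted least squares)
    return np.sum(((x - m) / sigma) ** 2)

cons = {"type": "eq", "fun": lambda x: x[0] + x[1] - x[2]}  # mass balance
res = minimize(objective, m, constraints=[cons])
print(res.x)  # reconciled flows that satisfy the balance exactly
```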
Causes of low vision and blindness in rural Indonesia
Saw, S-M; Husain, R; Gazzard, G M; Koh, D; Widjaja, D; Tan, D T H
2003-01-01
Aim: To determine the prevalence rates and major contributing causes of low vision and blindness in adults in a rural setting in Indonesia. Methods: A population based prevalence survey of adults 21 years or older (n=989) was conducted in five rural villages and one provincial town in Sumatra, Indonesia. A one-stage household cluster sampling procedure was employed in which 100 households were randomly selected from each village or town. Bilateral low vision was defined as habitual VA (measured using tumbling “E” logMAR charts) in the better eye worse than 6/18 and 3/60 or better, based on the WHO criteria. Bilateral blindness was defined as habitual VA worse than 3/60 in the better eye. The anterior segment and lens of subjects with low vision or blindness (both unilateral and bilateral) (n=66) were examined using a portable slit lamp and fundus examination was performed using indirect ophthalmoscopy. Results: The overall age adjusted (adjusted to the 1990 Indonesia census population) prevalence rate of bilateral low vision was 5.8% (95% confidence interval (CI) 4.2 to 7.4) and bilateral blindness was 2.2% (95% CI 1.1 to 3.2). The rates of low vision and blindness increased with age. The major contributing causes for bilateral low vision were cataract (61.3%), uncorrected refractive error (12.9%), and amblyopia (12.9%), and the major cause of bilateral blindness was cataract (62.5%). The major causes of unilateral low vision were cataract (48.0%) and uncorrected refractive error (12.0%), and the major causes of unilateral blindness were amblyopia (50.0%) and trauma (50.0%). Conclusions: The rates of habitual low vision and blindness in provincial Sumatra, Indonesia, are similar to those in rural areas of other developing Asian countries. Blindness is largely preventable, as the major contributing causes (cataract and uncorrected refractive error) are amenable to treatment. PMID:12928268
Relationship auditing of the FMA ontology
Gu, Huanying (Helen); Wei, Duo; Mejino, Jose L.V.; Elhanan, Gai
2010-01-01
The Foundational Model of Anatomy (FMA) ontology is a domain reference ontology based on a disciplined modeling approach. Due to its large size, semantic complexity and manual data entry process, errors and inconsistencies are unavoidable and might remain within the FMA structure without detection. In this paper, we present computable methods to highlight candidate concepts for various relationship assignment errors. The process starts with locating structures formed by transitive structural relationships (part_of, tributary_of, branch_of) and examine their assignments in the context of the IS-A hierarchy. The algorithms were designed to detect five major categories of possible incorrect relationship assignments: circular, mutually exclusive, redundant, inconsistent, and missed entries. A domain expert reviewed samples of these presumptive errors to confirm the findings. Seven thousand and fifty-two presumptive errors were detected, the largest proportion related to part_of relationship assignments. The results highlight the fact that errors are unavoidable in complex ontologies and that well designed algorithms can help domain experts to focus on concepts with high likelihood of errors and maximize their effort to ensure consistency and reliability. In the future similar methods might be integrated with data entry processes to offer real-time error detection. PMID:19475727
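Of the five error categories, the circular one is the easiest to make concrete. The sketch below flags back edges in a toy directed part_of graph via depth-first search; the node names and graph are hypothetical, and the FMA audit algorithms described above are considerably more elaborate.

```python
def find_back_edges(edges):
    """Return the endpoints of back edges found during DFS over a
    directed relationship graph (e.g. part_of assignments) -- direct
    evidence of circular relationship assignments."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
    WHITE, GRAY, BLACK = 0, 1, 2
    color, suspects = {}, set()

    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            c = color.get(v, WHITE)
            if c == GRAY:            # back edge: a cycle passes through u -> v
                suspects.update((u, v))
            elif c == WHITE:
                dfs(v)
        color[u] = BLACK

    for node in list(graph):
        if color.get(node, WHITE) == WHITE:
            dfs(node)
    return suspects

# Toy example with a deliberate circular part_of chain A -> B -> C -> A:
print(find_back_edges([("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]))
```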
An improved error assessment for the GEM-T1 gravitational model
NASA Technical Reports Server (NTRS)
Lerch, F. J.; Marsh, J. G.; Klosko, S. M.; Pavlis, E. C.; Patel, G. B.; Chinn, D. S.; Wagner, C. A.
1988-01-01
Several tests were designed to determine the correct error variances for the Goddard Earth Model (GEM)-T1 gravitational solution which was derived exclusively from satellite tracking data. The basic method employs both wholly independent and dependent subset data solutions and produces a full field coefficient estimate of the model uncertainties. The GEM-T1 errors were further analyzed using a method based upon eigenvalue-eigenvector analysis which calibrates the entire covariance matrix. Dependent satellite and independent altimetric and surface gravity data sets, as well as independent satellite deep resonance information, confirm essentially the same error assessment. These calibrations (utilizing each of the major data subsets within the solution) yield very stable calibration factors which vary by approximately 10 percent over the range of tests employed. Measurements of gravity anomalies obtained from altimetry were also used directly as observations to show that GEM-T1 is calibrated. The mathematical representation of the covariance error in the presence of unmodeled systematic error effects in the data is analyzed and an optimum weighting technique is developed for these conditions. This technique yields an internal self-calibration of the error model, a process which GEM-T1 is shown to approximate.
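As a toy version of subset-solution calibration, the sketch below compares the coefficient differences between a full and a subset solution against the variances predicted by the covariance matrix; a factor near 1 indicates realistic formal errors. All numbers are synthetic, and this is only a caricature of the eigenvalue-based calibration described above.

```python
import numpy as np

def calibration_factor(x_full, x_subset, cov_full):
    """Toy calibration of formal errors: RMS of coefficient
    differences normalized by the predicted standard deviations.
    A value near 1 means the published uncertainties are realistic."""
    d = x_subset - x_full
    predicted = np.sqrt(np.diag(cov_full))
    return np.sqrt(np.mean((d / predicted) ** 2))

rng = np.random.default_rng(0)
cov = np.diag(rng.uniform(0.5, 2.0, 50))          # hypothetical covariance
x = rng.normal(0, 1, 50)                          # "full" solution
x_sub = x + rng.normal(0, np.sqrt(np.diag(cov)))  # consistent subset solution
print(calibration_factor(x, x_sub, cov))          # expect a value near 1.0
```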
Changes to Hospital Inpatient Volume After Newspaper Reporting of Medical Errors.
Fukuda, Haruhisa
2017-06-30
The aim of this study was to investigate the influence of medical error case reporting by national newspapers on inpatient volume at acute care hospitals. A case-control study was conducted using the article databases of 3 major Japanese newspapers with nationwide circulation between fiscal years 2012 and 2013. Data on inpatient volume at acute care hospitals were obtained from a Japanese government survey between fiscal years 2011 and 2014. Panel data were constructed and analyzed using a difference-in-differences design. Setting: acute care hospitals in Japan. Hospitals named in articles that included the terms "medical error" and "hospital" were designated case hospitals, which were matched with control hospitals using corresponding locations, nurse-to-patient ratios, and bed numbers. Exposure: medical error case reporting in newspapers. Outcome: changes to hospital inpatient volume after error reports. The sample comprised 40 case hospitals and 40 control hospitals. Difference-in-differences analyses indicated that newspaper reporting of medical errors was not significantly associated (P = 0.122) with overall inpatient volume. Medical error case reporting by newspapers showed no influence on inpatient volume. Hospitals therefore have little incentive to respond adequately and proactively to medical errors. There may be a need for government intervention to improve the posterror response and encourage better health care safety.
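For readers unfamiliar with the difference-in-differences design used here, the sketch below estimates the case-by-post interaction on a synthetic hospital panel with statsmodels; all numbers are invented and do not reproduce the study's data or result.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy hospital-level panel: inpatient volume before/after error reports.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "case": np.repeat([1, 0], 80),              # 1 = named in an error report
    "post": np.tile(np.repeat([0, 1], 40), 2),  # 1 = period after the report
})
df["volume"] = (5000 + 300 * df["case"] - 50 * df["post"]
                - 120 * df["case"] * df["post"]  # the true DiD effect
                + rng.normal(0, 100, len(df)))

# The coefficient on case:post is the difference-in-differences estimate.
model = smf.ols("volume ~ case * post", data=df).fit()
print(model.params["case:post"])
```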
ADEPT, a dynamic next generation sequencing data error-detection program with trimming
Feng, Shihai; Lo, Chien-Chi; Li, Po-E; ...
2016-02-29
Illumina is the most widely used next generation sequencing technology and produces millions of short reads that contain errors. These sequencing errors constitute a major problem in applications such as de novo genome assembly, metagenomics analysis and single nucleotide polymorphism discovery. In this study, we present ADEPT, a dynamic error detection method based on the quality scores of each nucleotide and its neighboring nucleotides, together with their positions within the read, which compares these to the position-specific quality score distribution of all bases within the sequencing run. This method greatly improves upon other available methods in terms of the true positive rate of error discovery without affecting the false positive rate, particularly within the middle of reads. We conclude that ADEPT is the only tool to date that dynamically assesses errors within reads by comparing position-specific and neighboring base quality scores with the distribution of quality scores for the dataset being analyzed. The result is a method that is less prone to position-dependent under-prediction, which is one of the most prominent issues in error prediction. The outcome is that ADEPT improves upon prior efforts in identifying true errors, primarily within the middle of reads, while reducing the false positive rate.
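The core idea, judging each base's quality score against the position-specific distribution for the whole run, can be caricatured in a few lines. The sketch below is a simplification for illustration only, not ADEPT's actual algorithm; the threshold and data are invented.

```python
import numpy as np

def flag_suspect_bases(quals, z_cut=-2.0):
    """quals: (reads x positions) array of Phred quality scores.
    Flag bases whose score sits far below the position-specific
    quality distribution across the run -- the core idea behind
    quality-aware error detection (a simplification of ADEPT)."""
    mean = quals.mean(axis=0)           # per-position mean quality
    std = quals.std(axis=0) + 1e-9      # per-position spread
    z = (quals - mean) / std
    return z < z_cut                    # True = candidate sequencing error

rng = np.random.default_rng(7)
q = rng.normal(35, 3, size=(1000, 100))  # synthetic run: 1000 reads, 100 bp
q[0, 50] = 8                             # one conspicuously bad base
print(np.argwhere(flag_suspect_bases(q))[:3])
```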
Silva, Felipe O.; Hemerly, Elder M.; Leite Filho, Waldemar C.
2017-01-01
This paper presents the second part of a study aiming at the error state selection in Kalman filters applied to the stationary self-alignment and calibration (SSAC) problem of strapdown inertial navigation systems (SINS). The observability properties of the system are systematically investigated, and the number of unobservable modes is established. Through the analytical manipulation of the full SINS error model, the unobservable modes of the system are determined, and the SSAC error states (except the velocity errors) are proven to be individually unobservable. The estimability of the system is determined through the examination of the major diagonal terms of the covariance matrix and their eigenvalues/eigenvectors. Filter order reduction based on observability analysis is shown to be inadequate, and several misconceptions regarding SSAC observability and estimability deficiencies are removed. As the main contributions of this paper, we demonstrate that, except for the position errors, all error states can be minimally estimated in the SSAC problem and, hence, should not be removed from the filter. Corroborating the conclusions of the first part of this study, a 12-state Kalman filter is found to be the optimal error state selection for SSAC purposes. Results from simulated and experimental tests support the outlined conclusions. PMID:28241494
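The kind of observability analysis described can be illustrated generically: build the observability matrix of a linear system and count the unobservable modes as n minus its rank. The toy 3-state system below is hypothetical and far smaller than the full SINS error model.

```python
import numpy as np

def observability_rank(A, H):
    """Rank of the observability matrix [H; HA; ...; HA^(n-1)] for
    x' = Ax, y = Hx. n - rank gives the number of unobservable modes."""
    n = A.shape[0]
    blocks, M = [H], H
    for _ in range(n - 1):
        M = M @ A
        blocks.append(M)
    return np.linalg.matrix_rank(np.vstack(blocks))

# Toy 3-state system in which only the first two states are observable:
A = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 0.]])
H = np.array([[1., 0., 0.]])
r = observability_rank(A, H)
print(r, "observable;", A.shape[0] - r, "unobservable mode(s)")
```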
NASA Astrophysics Data System (ADS)
Wang, C.; Platnick, S. E.; Meyer, K.; Zhang, Z.
2014-12-01
We developed an optimal estimation (OE)-based method using infrared (IR) observations to retrieve ice cloud optical thickness (COT), cloud effective radius (CER), and cloud top height (CTH) simultaneously. The OE-based retrieval is coupled with a fast IR radiative transfer model (RTM) that simulates observations of different sensors, and corresponding Jacobians in cloudy atmospheres. Ice cloud optical properties are calculated using the MODIS Collection 6 (C6) ice crystal habit (severely roughened hexagonal column aggregates). The OE-based method can be applied to various IR space-borne and airborne sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and the enhanced MODIS Airborne Simulator (eMAS), by optimally selecting IR bands with high information content. Four major error sources (i.e., the measurement error, fast RTM error, model input error, and pre-assumed ice crystal habit error) are taken into account in our OE retrieval method. We show that measurement error and fast RTM error have little impact on cloud retrievals, whereas errors from the model input and pre-assumed ice crystal habit significantly increase retrieval uncertainties when the cloud is optically thin. Comparisons between the OE-retrieved ice cloud properties and other operational cloud products (e.g., the MODIS C6 and CALIOP cloud products) are shown.
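The retrieval machinery is presumably the standard Rodgers-style optimal estimation update; the sketch below iterates that Gauss-Newton step on a toy linear forward model standing in for the IR RTM. The matrices, prior, and noise levels are invented for illustration.

```python
import numpy as np

def oe_step(x, xa, y, F, K, Sa_inv, Se_inv):
    """One Gauss-Newton step of Rodgers-style optimal estimation:
    x_new = x + (K^T Se^-1 K + Sa^-1)^-1
                (K^T Se^-1 (y - F(x)) - Sa^-1 (x - xa))"""
    lhs = K.T @ Se_inv @ K + Sa_inv
    rhs = K.T @ Se_inv @ (y - F(x)) - Sa_inv @ (x - xa)
    return x + np.linalg.solve(lhs, rhs)

# Toy linear forward model standing in for the fast IR RTM: y = Kx.
K = np.array([[1.0, 0.5, 0.0],
              [0.2, 1.0, 0.3],
              [0.0, 0.4, 1.0]])
F = lambda x: K @ x
xa = np.zeros(3)                      # prior state (e.g., COT/CER/CTH anomalies)
x_true = np.array([0.8, -0.3, 0.5])
y = F(x_true) + 0.01 * np.random.default_rng(2).normal(size=3)
Sa_inv = np.linalg.inv(np.eye(3) * 1.0)    # prior covariance inverse
Se_inv = np.linalg.inv(np.eye(3) * 1e-4)   # measurement covariance inverse
x = xa.copy()
for _ in range(5):
    x = oe_step(x, xa, y, F, K, Sa_inv, Se_inv)
print(x)   # should approach x_true
```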
Annual Tropical Cyclone Report 2011
2012-05-24
... nuclear plant, still reeling after the tsunami disaster just a few months earlier. Operations at Kadena Air Base were put on hold with major ... conditions. Several of these early- to mid-season forming TCs exhibited "S"-shaped, looping, or generally erratic tracks, with numerous passages near or over ... track errors after the fact to extend the database ... mean forecast errors for all warned systems in the Northwest Pacific (120-hour along-track and cross-track).
Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.
2015-01-01
Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702
An advanced SEU tolerant latch based on error detection
NASA Astrophysics Data System (ADS)
Xu, Hui; Zhu, Jianwei; Lu, Xiaoping; Li, Jingzhao
2018-05-01
This paper proposes a latch that can mitigate SEUs via an error detection circuit. The error detection circuit is hardened by a C-element and a stacked PMOS. In the hold state, a particle strike on the latch or on the error detection circuit may cause a faulty logic state. The error detection circuit can detect the upset node in the latch, and the faulty output will be corrected. The upset node in the error detection circuit can be corrected by the C-element. The power dissipation and propagation delay of the proposed latch are analyzed by HSPICE simulations. The proposed latch consumes about 77.5% less energy and has 33.1% less propagation delay than the triple modular redundancy (TMR) latch. Simulation results demonstrate that the proposed latch can mitigate SEU effectively. Project supported by the National Natural Science Foundation of China (Nos. 61404001, 61306046), the Anhui Province University Natural Science Research Major Project (No. KJ2014ZD12), the Huainan Science and Technology Program (No. 2013A4011), and the National Natural Science Foundation of China (No. 61371025).
NASA Technical Reports Server (NTRS)
Peterson, B. J.; Mellillo, J. M.
1984-01-01
If all reported biotic sinks of atmospheric CO2 were added, a value of about 0.4 Gt C/yr would be found. For each category, a very high (non-conservative) estimate was used. This still does not provide a sufficient basis for achieving a balance between the sources and sinks of atmospheric CO2. The bulk of the discrepancy lies in a combination of errors in the major terms, the greatest being in the net biotic release and ocean uptake segments, but smaller errors or biases may exist in calculations of the rate of atmospheric CO2 increase and total fossil fuel use as well. The reason why biotic sinks are not capable of balancing the CO2 increase via nutrient-matching in the short-term is apparent from a comparison of the stoichiometry of the sources and sinks. The burning of fossil fuels and forest biomass releases much more CO2-carbon than is sequestered as organic carbon.
Dossett, Lesly A; Kauffmann, Rondi M; Lee, Jay S; Singh, Harkamal; Lee, M Catherine; Morris, Arden M; Jagsi, Reshma; Quinn, Gwendolyn P; Dimick, Justin B
2018-06-01
Our objective was to determine specialist physicians' attitudes and practices regarding disclosure of pre-referral errors. Physicians are encouraged to disclose their own errors to patients. However, no clear professional norms exist regarding disclosure when physicians discover errors in diagnosis or treatment that occurred at other institutions before referral. We conducted semistructured interviews of cancer specialists from 2 National Cancer Institute-designated Cancer Centers. We purposively sampled specialists by discipline, sex, and experience-level who self-described a >50% reliance on external referrals (n = 30). Thematic analysis of verbatim interview transcripts was performed to determine physician attitudes regarding disclosure of pre-referral medical errors; whether and how physicians disclose these errors; and barriers to providing full disclosure. Participants described their experiences identifying different types of pre-referral errors including errors of diagnosis, staging and treatment resulting in adverse events ranging from decreased quality of life to premature death. The majority of specialists expressed the belief that disclosure provided no benefit to patients, and might unnecessarily add to their anxiety about their diagnoses or prognoses. Specialists had varying practices of disclosure including none, non-verbal, partial, event-dependent, and full disclosure. They identified a number of barriers to disclosure, including medicolegal implications and damage to referral relationships, the profession's reputation, and to patient-physician relationships. Specialist physicians identify pre-referral errors but struggle with whether and how to provide disclosure, even when clinical circumstances force disclosure. Education- or communication-based interventions that overcome barriers to disclosing pre-referral errors warrant development.
Byrne, Eamonn; Bury, Gerard
2018-02-08
Incident reporting is vital to identifying pre-hospital medication safety issues because literature suggests that the majority of errors pre-hospital are self-identified. In 2016, the National Ambulance Service (NAS) reported 11 medication errors to the national body with responsibility for risk management and insurance cover. The Health Information and Quality Authority in 2014 stated that reporting of clinical incidents, of which medication errors are a subset, was not felt to be representative of the actual events occurring. Even though reporting systems are in place, the levels appear to be well below what might be expected. Little data is available to explain this apparent discrepancy. To identify, investigate and document the barriers to medication error reporting within the NAS. An independent moderator led four focus groups in March of 2016. A convenience sample of 18 frontline Paramedics and Advanced Paramedics from Cork City and County discussed medication errors and the medication error reporting process. The sessions were recorded and anonymised, and the data was analysed using a process of thematic analysis. Practitioners understood the value of reporting errors. Barriers to reporting included fear of consequences and ridicule, procedural ambiguity, lack of feedback and a perceived lack of both consistency and confidentiality. The perceived consequences for making an error included professional, financial, litigious and psychological. Staff appeared willing to admit errors in a psychologically safe environment. Barriers to reporting are in line with international evidence. Time constraints prevented achievement of thematic saturation. Further study is warranted.
Amori, Renee E; Pittas, Anastassios G; Siegel, Richard D; Kumar, Sanjaya; Chen, Jack S; Karnam, Suneel; Golden, Sherita H; Salem, Deeb N
2008-01-01
To describe characteristics of inpatient medical errors involving hypoglycemic medications and their impact on patient care. We conducted a cross-sectional analysis of medical errors and associated adverse events voluntarily reported by hospital employees and staff in 21 nonprofit, nonfederal health-care organizations in the United States that implemented a Web-based electronic error-reporting system (e-ERS) between August 1, 2000, and December 31, 2005. Persons reporting the errors determined the level of impact on patient care. The median duration of e-ERS use was 3.1 years, and 2,598 inpatient error reports involved insulin or orally administered hypoglycemic agents. Nursing staff provided 59% of the reports; physicians reported <2%. Approximately two-thirds of the errors (1,693 of 2,598) reached the patient. Errors that caused temporary harm necessitating major treatment or that caused permanent harm accounted for 1.5% of reports (40 of 2,598). Insulin was involved in 82% of reports, and orally administered hypoglycemic agents were involved in 18% of all reports (473 of 2,598). Sulfonylureas were implicated in 51.8% of reports involving oral hypoglycemic agents (9.4% of all reports). An e-ERS provides an accessible venue for reporting and tracking inpatient medical errors involving glucose-lowering medications. Results are limited by potential underreporting of events, particularly by physicians, and variations in the reporter perception of patient harm.
Mortaro, Alberto; Pascu, Diana; Zerman, Tamara; Vallaperta, Enrico; Schönsberg, Alberto; Tardivo, Stefano; Pancheri, Serena; Romano, Gabriele; Moretti, Francesca
2015-07-01
The role of the emergency medical dispatch centre (EMDC) is essential to ensure coordinated and safe prehospital care. The aim of this study was to implement an incident report (IR) system in prehospital emergency care management with a view to detecting errors occurring in this setting and guiding the implementation of safety improvement initiatives. An ad hoc IR form for the prehospital setting was developed and implemented within the EMDC of Verona. The form included six phases (from the emergency call to hospital admission) with the relevant list of potential error modes (30 items). This descriptive observational study considered the results from 268 consecutive days between February and November 2010. During the study period, 161 error modes were detected. The majority of these errors occurred in the resource allocation and timing phase (34.2%) and in the dispatch phase (31.0%). Most of the errors were due to human factors (77.6%), and almost half of them were classified as either moderate (27.9%) or severe (19.9%). These results guided the implementation of specific corrective actions, such as the adoption of a more efficient Medical Priority Dispatch System and the development of educational initiatives targeted at both EMDC staff and the population. Despite the intrinsic limits of IR methodology, results suggest how the implementation of an IR system dedicated to the emergency prehospital setting can act as a major driver for the development of a "learning organization" and improve both efficacy and safety of first aid care.
Li, Beiwen; Liu, Ziping; Zhang, Song
2016-10-03
We propose a hybrid computational framework to reduce motion-induced measurement error by combining the Fourier transform profilometry (FTP) and phase-shifting profilometry (PSP). The proposed method is composed of three major steps: Step 1 is to extract continuous relative phase maps for each isolated object with single-shot FTP method and spatial phase unwrapping; Step 2 is to obtain an absolute phase map of the entire scene using PSP method, albeit motion-induced errors exist on the extracted absolute phase map; and Step 3 is to shift the continuous relative phase maps from Step 1 to generate final absolute phase maps for each isolated object by referring to the absolute phase map with error from Step 2. Experiments demonstrate the success of the proposed computational framework for measuring multiple isolated rapidly moving objects.
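Step 1's single-shot FTP idea can be sketched directly: Fourier-transform a fringe image, keep the +1 carrier lobe, inverse-transform, and take the angle to obtain a wrapped phase map. The snippet below is a minimal illustration on synthetic fringes (the carrier frequency and window width are assumptions), not the authors' implementation.

```python
import numpy as np

def ftp_wrapped_phase(I, carrier_frac=0.1, half_width=0.05):
    """Single-shot Fourier transform profilometry on one fringe image I
    (2-D array): isolate the +1 carrier lobe along x and return the
    wrapped phase (carrier still included; a flat reference phase
    would normally be subtracted). Fractions are of the x sampling rate."""
    spec = np.fft.fft(I - I.mean(axis=1, keepdims=True), axis=1)
    fx = np.fft.fftfreq(I.shape[1])
    mask = (np.abs(fx - carrier_frac) < half_width).astype(float)
    analytic = np.fft.ifft(spec * mask, axis=1)   # +1 order only
    return np.angle(analytic)                     # wrapped phase in (-pi, pi]

# Synthetic fringes: 0.1 cycles/pixel carrier plus a smooth phase bump.
x = np.arange(256)
X, Y = np.meshgrid(x, x)
phase = 2 * np.exp(-((X - 128) ** 2 + (Y - 128) ** 2) / 3000.0)
I = 128 + 100 * np.cos(2 * np.pi * 0.1 * X + phase)
wrapped = ftp_wrapped_phase(I)
print(wrapped.shape, wrapped[128, 128])
```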
Fault tolerance in an inner-outer solver: A GVR-enabled case study
Zhang, Ziming; Chien, Andrew A.; Teranishi, Keita
2015-04-18
Resilience is a major challenge for large-scale systems. It is particularly important for iterative linear solvers, since they take much of the time of many scientific applications. We show that single bit flip errors in the Flexible GMRES iterative linear solver can lead to high computational overhead or even failure to converge to the right answer. Informed by these results, we design and evaluate several strategies for fault tolerance in both inner and outer solvers appropriate across a range of error rates. We implement them, extending Trilinos' solver library with the Global View Resilience (GVR) programming model, which provides multi-stream snapshots and multi-version data structures with portable and rich error checking/recovery. Lastly, experimental results validate correct execution with low performance overhead under varied error conditions.
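To see why a single bit flip matters, the sketch below flips individual bits of an IEEE-754 double using only the standard library; flips in the exponent field change the value by orders of magnitude (or produce infinities), which is the fault model such solver studies inject.

```python
import struct

def flip_bit(x, bit):
    """Flip one bit (0..63) of a float64 and return the new value --
    the single-bit-flip fault model studied for iterative solvers."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return y

x = 1.0
for bit in (0, 32, 52, 62):
    print(f"bit {bit:2d}: {x} -> {flip_bit(x, bit)}")
# Low mantissa bits barely change the value; exponent bits (52+) turn
# 1.0 into 0.5 or +inf, which is why one upset can derail GMRES.
```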
Jakobsen, Janus Christian
2014-10-01
Major depressive disorder afflicts an estimated 17% of individuals during their lifetimes, causing tremendous suffering and costs. Cognitive therapy and psychodynamic therapy may be effective treatment options for major depressive disorder, but the effects have only had limited assessment in systematic reviews. The two modern forms of psychotherapy, "third wave" cognitive therapy and mentalization-based treatment, have both gained some ground as treatments of psychiatric disorders. No randomised trial has compared the effects of these two interventions for major depressive disorder. We performed two systematic reviews with meta-analyses and trial sequential analyses using The Cochrane Collaboration methodology examining the effects of cognitive therapy and psychodynamic therapy for major depressive disorder. We developed a thorough treatment protocol for a randomised trial with low risks of bias (systematic error) and low risks of random errors ("play of chance") examining the effects of "third wave" cognitive therapy versus mentalization-based treatment for major depressive disorder. We conducted a randomised trial according to good clinical practice examining the effects of "third wave" cognitive therapy versus mentalization-based treatment for major depressive disorder. The first systematic review included five randomised trials examining the effects of psychodynamic therapy versus "no intervention" for major depressive disorder. Altogether the five trials randomised 365 participants who in each trial received similar antidepressants as co-interventions. All trials had high risk of bias. Four trials assessed "interpersonal psychotherapy" and one trial "short psychodynamic supportive psychotherapy". Both of these interventions are different forms of psychodynamic therapy. Meta-analysis showed that psychodynamic therapy significantly reduced depressive symptoms on the Hamilton Depression Rating Scale (HDRS) compared with "no intervention" (mean difference -3.01 (95% confidence interval -3.98 to -2.03; p = 0.00001), no significant heterogeneity between trials). Trial sequential analysis confirmed this result. The second systematic review included 12 randomised trials examining the effects of cognitive therapy versus "no intervention" for major depressive disorder. Altogether a total of 669 participants were randomised. All trials had high risk of bias. Meta-analysis showed that cognitive therapy significantly reduced depressive symptoms on the HDRS compared with "no intervention" (four trials; mean difference -3.05 (95% confidence interval, -5.23 to -0.87; p = 0.006)). Trial sequential analysis could not confirm this result. The trial protocol showed that it seemed feasible to conduct a randomised trial with low risks of bias and low risks of random errors examining the effects of "third wave" cognitive therapy versus mentalization-based therapy in a setting in the Danish healthcare system. It turned out to be much more difficult to recruit participants in the randomised trial than expected. We only included about half of the planned participants. The results from the randomised trial showed that participants randomised to "third wave" therapy compared with participants randomised to mentalization-based treatment had borderline significantly lower HDRS scores at 18 weeks in an unadjusted analysis (mean difference -4.14 points; 95% CI -8.30 to 0.03; p = 0.051). In the adjusted analysis, the difference was significant (p = 0.039).
Five (22.7%) of the participants randomised to "third wave" cognitive therapy had remission at 18 weeks versus none of the participants randomised to mentalization-based treatment (p = 0.049). Trial sequential analysis showed that these findings could be due to random errors. No significant differences between the two groups were found on Beck's Depression Inventory (BDI-II), the Symptom Checklist 90 Revised (SCL-90-R), and the World Health Organization-Five Well-being Index 1999 (WHO-5). We concluded that cognitive therapy and psychodynamic therapy might be effective interventions for depression measured on the HDRS and BDI, but the review results might be erroneous due to risks of bias and random errors. Furthermore, the effects seem relatively small. The trial protocol showed that it was possible to develop a protocol for a randomised trial examining the effects of "third wave" cognitive therapy versus mentalization-based treatment with low risks of bias and low risks of random errors. Our trial results showed that "third wave" cognitive therapy might be a more effective intervention for depressive symptoms measured on the HDRS compared with mentalization-based treatment. The two interventions did not seem to differ significantly regarding the BDI-II, SCL-90-R, and WHO-5. More randomised trials with low risks of bias and low risks of random errors are needed to assess the effects of cognitive therapy, psychodynamic therapy, "third wave" cognitive therapy, and mentalization-based treatment.
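For readers unfamiliar with how pooled estimates such as the mean difference of -3.01 (95% confidence interval -3.98 to -2.03) arise, a fixed-effect inverse-variance meta-analysis can be sketched in a few lines; the per-trial values below are hypothetical, not those of the reviews.

```python
import numpy as np

# Hypothetical per-trial mean differences on the HDRS and their standard errors
md = np.array([-2.5, -3.4, -2.9, -3.6, -2.2])
se = np.array([0.9, 1.1, 0.8, 1.3, 1.0])

w = 1.0 / se**2                        # inverse-variance weights
pooled = np.sum(w * md) / np.sum(w)    # fixed-effect pooled mean difference
pooled_se = np.sqrt(1.0 / np.sum(w))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled MD = {pooled:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```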
A survey of community members' perceptions of medical errors in Oman
Al-Mandhari, Ahmed S; Al-Shafaee, Mohammed A; Al-Azri, Mohammed H; Al-Zakwani, Ibrahim S; Khan, Mushtaq; Al-Waily, Ahmed M; Rizvi, Syed
2008-01-01
Background: Errors have been the concern of providers and consumers of health care services. However, consumers' perceptions of medical errors in developing countries are rarely explored. The aim of this study is to assess community members' perceptions of medical errors and to analyse the factors affecting these perceptions in one Middle Eastern country, Oman. Methods: Face-to-face interviews were conducted with the heads of 212 households in two villages in the North Al-Batinah region of Oman, selected because of their close proximity to Sultan Qaboos University (SQU), Muscat, Oman. Participants' perceived knowledge of medical errors was assessed. Responses were coded and categorised. Analyses were performed using Pearson's χ2 test, Fisher's exact test, and a multivariate logistic regression model wherever appropriate. Results: Seventy-eight percent (n = 165) of participants believed they knew what was meant by medical errors. Of these, 34% and 26.5% related medical errors to wrong medications or diagnoses, respectively. Understanding of medical errors was correlated inversely with age and positively with family income. Multivariate logistic regression revealed that a one-year increase in age was associated with a 4% reduction in perceived knowledge of medical errors (CI: 1% to 7%; p = 0.045). The study found that 49% of those who believed they knew the meaning of medical errors had experienced such errors. The most common consequence of the errors was severe pain (45%). Of the 165 informed participants, 49% felt that an uncaring health care professional was the main cause of medical errors. Younger participants were able to list more possible causes of medical errors than were older subjects (incidence rate ratio 0.98; p < 0.001). Conclusion: The majority of participants believed they knew the meaning of medical errors. Younger participants were more likely to be aware of such errors and could list one or more causes. PMID:18664245
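The reported 4% reduction in perceived knowledge per one-year increase in age corresponds to an odds ratio of roughly 0.96 from the logistic model. The conversion between a logistic regression coefficient and a percent change in odds is sketched below; the coefficient is back-derived from the reported figure rather than taken from the study data.

```python
import math

beta_age = math.log(0.96)           # coefficient implied by an OR of 0.96
odds_ratio = math.exp(beta_age)     # odds ratio per one-year increase in age
pct_change = (odds_ratio - 1.0) * 100
print(f"OR = {odds_ratio:.2f} -> {pct_change:.0f}% change in odds per year")
```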
Left neglect dyslexia and the effect of stimulus duration.
Arduino, Lisa S; Vallar, Giuseppe; Burani, Cristina
2006-01-01
The present study investigated the effects of the duration of the stimulus on the reading performance of right-brain-damaged patients with left neglect dyslexia. Three Italian patients read aloud words and nonwords, under conditions of unlimited time of stimulus exposure and of timed presentation. In the untimed condition, the majority of the patients' errors involved the left side of the letter string (i.e., neglect dyslexia errors). Conversely, in the timed condition, although the overall level of performance decreased, errors were more evenly distributed across the whole letter string (i.e., visual - nonlateralized - errors). This reduction of neglect errors with a reduced time of presentation of the stimulus may reflect the read out of elements of the letter string from a preserved visual storage component, such as iconic memory. Conversely, a time-unlimited presentation of the stimulus may bring about the rightward bias that characterizes the performance of neglect patients, possibly by a capture of the patients' attention by the final (rightward) letters of the string.
Effect of phase errors in stepped-frequency radar systems
NASA Astrophysics Data System (ADS)
Vanbrundt, H. E.
1988-04-01
Stepped-frequency waveforms are being considered for inverse synthetic aperture radar (ISAR) imaging from ship and airborne platforms and for detailed radar cross section (RCS) measurements of ships and aircraft. These waveforms make it possible to achieve resolutions of 1.0 foot by using existing radar designs and processing technology. One problem not yet fully resolved in using stepped-frequency waveforms for ISAR imaging is the deterioration in signal level caused by random frequency error. Random frequency error of the stepped-frequency source results in reduced peak responses and increased null responses. The resulting reduced signal-to-noise ratio is range dependent. Two of the major concerns addressed in this report are radar range limitations for ISAR and the error in calibration for RCS measurements caused by differences in range between a passive reflector used as an RCS reference and the target to be measured. In addressing these concerns, NOSC developed an analysis to assess the tolerable frequency error in terms of the resulting loss in signal power and signal-to-phase-noise ratio.
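The signal-level deterioration can be reproduced with a small simulation: synthesize the range profile of a point target from N frequency steps, perturb each step with random phase error, and compare the compressed peak against the error-free case. All parameters below are illustrative, not those of the NOSC analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128                       # number of frequency steps
df = 0.5e6                    # frequency step size (Hz), chosen so the
tau = 2 * 150.0 / 3e8         # two-way delay of a target at 150 m falls
n = np.arange(N)              # exactly in one range bin

clean = np.exp(-1j * 2 * np.pi * n * df * tau)          # ideal step returns
sigma = 0.5                                             # phase-error std (rad)
noisy = clean * np.exp(1j * rng.normal(0.0, sigma, N))  # random phase errors

peak_clean = np.abs(np.fft.ifft(clean)).max()
peak_noisy = np.abs(np.fft.ifft(noisy)).max()
print(f"peak loss: {20 * np.log10(peak_noisy / peak_clean):.2f} dB")
# for small sigma the expected loss is roughly 20*log10(exp(-sigma**2 / 2))
```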
Multi-muscle FES force control of the human arm for arbitrary goals.
Schearer, Eric M; Liao, Yu-Wei; Perreault, Eric J; Tresch, Matthew C; Memberg, William D; Kirsch, Robert F; Lynch, Kevin M
2014-05-01
We present a method for controlling a neuroprosthesis for a paralyzed human arm using functional electrical stimulation (FES) and characterize the errors of the controller. The subject has surgically implanted electrodes for stimulating muscles in her shoulder and arm. Using input/output data, a model mapping muscle stimulations to isometric endpoint forces measured at the subject's hand was identified. We inverted the model of this redundant and coupled multiple-input multiple-output system by minimizing muscle activations and used this inverse for feedforward control. The magnitude of the total root mean square error over a grid in the volume of achievable isometric endpoint force targets was 11% of the total range of achievable forces. Major sources of error were random error due to trial-to-trial variability and model bias due to nonstationary system properties. Because the muscles working collectively are the actuators of the skeletal system, the quantification of errors in force control guides designs of motion controllers for multi-joint, multi-muscle FES systems that can achieve arbitrary goals.
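The step of inverting the redundant stimulation-to-force model by minimizing muscle activations can be sketched as a small constrained optimization: given an identified linear map A from activations to endpoint force, find the smallest nonnegative activation vector that reproduces a target force. The matrix and target below are hypothetical, not the subject's identified model.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical identified model: 3-D endpoint force from 6 muscle activations
A = np.array([[1.0,  0.5, -0.3,  0.8,  0.0, -0.5],
              [0.0,  0.9,  0.4, -0.2,  0.7,  0.1],
              [0.3, -0.1,  0.8,  0.1, -0.4,  0.6]])
f_target = np.array([0.8, 0.6, 0.5])   # desired isometric endpoint force

# Feedforward inverse: minimize ||x||^2 subject to A x = f_target, 0 <= x <= 1
res = minimize(
    fun=lambda x: x @ x,
    x0=np.full(6, 0.5),
    jac=lambda x: 2 * x,
    bounds=[(0.0, 1.0)] * 6,
    constraints={"type": "eq", "fun": lambda x: A @ x - f_target},
    method="SLSQP",
)
print("activations :", np.round(res.x, 3))
print("force error :", np.linalg.norm(A @ res.x - f_target))
```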
Improving Patient Safety With Error Identification in Chemotherapy Orders by Verification Nurses.
Baldwin, Abigail; Rodriguez, Elizabeth S
2016-02-01
The prevalence of medication errors associated with chemotherapy administration is not precisely known. Little evidence exists concerning the extent or nature of errors; however, some evidence demonstrates that errors are related to prescribing. This article demonstrates how the review of chemotherapy orders by a designated nurse known as a verification nurse (VN) at a National Cancer Institute-designated comprehensive cancer center helps to identify prescribing errors that may prevent chemotherapy administration mistakes and improve patient safety in outpatient infusion units. This article will describe the role of the VN and details of the verification process. To identify benefits of the VN role, a retrospective review and analysis of chemotherapy near-miss events from 2009-2014 was performed. A total of 4,282 events related to chemotherapy were entered into the Reporting to Improve Safety and Quality system. A majority of the events were categorized as near-miss events, or those that, because of chance, did not result in patient injury, and were identified at the point of prescribing.
Patient-centered computing: can it curb malpractice risk?
Bartlett, E. E.
1993-01-01
The threat of a medical malpractice suit represents a major cause of career dissatisfaction for American physicians. Patient-centered computing may improve physician-patient communications, thereby reducing liability risk. This review describes programs that have sought to enhance patient education and involvement pertaining to 5 major categories of malpractice lawsuits: Diagnosis, medications, obstetrics, surgery, and treatment errors. PMID:8130563
ERIC Educational Resources Information Center
Asassfeh, Sahail M.
2013-01-01
Corrective feedback (CF), the implicit or explicit information learners receive indicating a gap between their current, compared to the desired, performance, has been an area of interest for EFL researchers during the last few decades. This study, conducted on 139 English-major prospective EFL teachers, assessed the impact of two CF types…
Combustion Device Failures During Space Shuttle Main Engine Development
NASA Technical Reports Server (NTRS)
Goetz, Otto K.; Monk, Jan C.
2005-01-01
Major causes: limited initial materials properties; limited structural models, especially fatigue; limited thermal models; limited aerodynamic models; human errors; limited component testing; high pressure; complicated controls.
Reflection of medical error highlighted on media in Turkey: A retrospective study
Isik, Oguz; Bayin, Gamze; Ugurluoglu, Ozgur
2016-01-01
Objective: This study was performed with the aim of identifying how news about medical errors is transmitted and how the types, causes, and outcomes of medical errors are reflected by the media in Turkey. Methods: A content analysis method was used in the study, and in this context, the data were acquired by scanning the five newspapers with the largest national circulation between the years 2012 and 2015 for news about medical errors. Specific selection criteria were applied to the resulting news, and 116 news items remained after all eliminations. Results: According to the results of the study, the largest share of medical errors reported in the news (40.5%) resulted from negligence of the medical staff. Medical errors were attributed to physicians in 74.1% of cases and most commonly occurred in state hospitals (31.9%). Another important result of the research was that medical errors most often resulted in patient death (51.7%) or permanent damage and disability (25.0%). Conclusion: The news concerning medical errors provided information about the types, causes, and results of these medical errors. It also reflected the media's point of view on the issue. Examining the content of medical errors reported by the media is important and calls for appropriate interventions to avoid and minimize the occurrence of medical errors by improving the healthcare delivery system. PMID:27882026
Alexander, John H; Levy, Elliott; Lawrence, Jack; Hanna, Michael; Waclawski, Anthony P; Wang, Junyuan; Califf, Robert M; Wallentin, Lars; Granger, Christopher B
2013-09-01
In ARISTOTLE, apixaban resulted in a 21% reduction in stroke, a 31% reduction in major bleeding, and an 11% reduction in death. However, approval of apixaban was delayed to investigate a statement in the clinical study report that "7.3% of subjects in the apixaban group and 1.2% of subjects in the warfarin group received, at some point during the study, a container of the wrong type." Rates of study medication dispensing error were characterized through reviews of study medication container tear-off labels in 6,520 participants from randomly selected study sites. The potential effect of dispensing errors on study outcomes was statistically simulated in sensitivity analyses in the overall population. The rate of medication dispensing error resulting in treatment error was 0.04%. Rates of participants receiving at least 1 incorrect container were 1.04% (34/3,273) in the apixaban group and 0.77% (25/3,247) in the warfarin group. Most of the originally reported errors were data entry errors in which the correct medication container was dispensed but the wrong container number was entered into the case report form. Sensitivity simulations in the overall trial population showed no meaningful effect of medication dispensing error on the main efficacy and safety outcomes. Rates of medication dispensing error were low and balanced between treatment groups. The initially reported dispensing error rate was the result of data recording and data management errors and not true medication dispensing errors. These analyses confirm the previously reported results of ARISTOTLE. © 2013.
Hester, Robert; Murphy, Kevin; Brown, Felicity L; Skilleter, Ashley J
2010-11-17
Punishing an error to shape subsequent performance is a major tenet of individual and societal level behavioral interventions. Recent work examining error-related neural activity has identified that the magnitude of activity in the posterior medial frontal cortex (pMFC) is predictive of learning from an error, whereby greater activity in this region predicts adaptive changes in future cognitive performance. It remains unclear how punishment influences error-related neural mechanisms to effect behavior change, particularly in key regions such as pMFC, which previous work has demonstrated to be insensitive to punishment. Using an associative learning task that provided monetary reward and punishment for recall performance, we observed that when recall errors were categorized by subsequent performance--whether the failure to accurately recall a number-location association was corrected at the next presentation of the same trial--the magnitude of error-related pMFC activity predicted future correction. However, the pMFC region was insensitive to the magnitude of punishment an error received and it was the left insula cortex that predicted learning from the most aversive outcomes. These findings add further evidence to the hypothesis that error-related pMFC activity may reflect more than a prediction error in representing the value of an outcome. The novel role identified here for the insular cortex in learning from punishment appears particularly compelling for our understanding of psychiatric and neurologic conditions that feature both insular cortex dysfunction and a diminished capacity for learning from negative feedback or punishment.
NASA Astrophysics Data System (ADS)
Ackermann, M.; Ajello, M.; Allafort, A.; Atwood, W. B.; Baldini, L.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bhat, P. N.; Blandford, R. D.; Bonamente, E.; Borgland, A. W.; Bregeon, J.; Briggs, M. S.; Brigida, M.; Bruel, P.; Buehler, R.; Burgess, J. M.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Casandjian, J. M.; Cecchi, C.; Charles, E.; Chekhtman, A.; Chiang, J.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Connaughton, V.; Conrad, J.; Cutini, S.; Dennis, B. R.; de Palma, F.; Dermer, C. D.; Digel, S. W.; Silva, E. do Couto e.; Drell, P. S.; Drlica-Wagner, A.; Dubois, R.; Favuzzi, C.; Fegan, S. J.; Ferrara, E. C.; Fortin, P.; Fukazawa, Y.; Fusco, P.; Gargano, F.; Germani, S.; Giglietto, N.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Grillo, L.; Grove, J. E.; Gruber, D.; Guiriec, S.; Hadasch, D.; Hayashida, M.; Hays, E.; Horan, D.; Iafrate, G.; Jóhannesson, G.; Johnson, A. S.; Johnson, W. N.; Kamae, T.; Kippen, R. M.; Knödlseder, J.; Kuss, M.; Lande, J.; Latronico, L.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Mazziotta, M. N.; McEnery, J. E.; Meegan, C.; Mehault, J.; Michelson, P. F.; Mitthumsiri, W.; Monte, C.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Murphy, R.; Naumann-Godo, M.; Nuss, E.; Nymark, T.; Ohno, M.; Ohsugi, T.; Okumura, A.; Omodei, N.; Orlando, E.; Paciesas, W. S.; Panetta, J. H.; Parent, D.; Pesce-Rollins, M.; Petrosian, V.; Pierbattista, M.; Piron, F.; Pivato, G.; Poon, H.; Porter, T. A.; Preece, R.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Ritz, S.; Sbarra, C.; Schwartz, R. A.; Sgrò, C.; Share, G. H.; Siskind, E. J.; Spinelli, P.; Takahashi, H.; Tanaka, T.; Tanaka, Y.; Thayer, J. B.; Tibaldo, L.; Tinivella, M.; Tolbert, A. K.; Tosti, G.; Troja, E.; Uchiyama, Y.; Usher, T. L.; Vandenbroucke, J.; Vasileiou, V.; Vianello, G.; Vitale, V.; von Kienlin, A.; Waite, A. P.; Wilson-Hodge, C.; Wood, D. L.; Wood, K. S.; Yang, Z.
2012-04-01
Due to an error at the publisher, the times given for the major tick marks in the X-axis in Figure 1 of the published article are incorrect. The correctly labeled times should be "00:52:00," "00:54:00," ... , and "01:04:00." The correct version of Figure 1 and its caption is shown below. IOP Publishing sincerely regrets this error.
Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher
2018-01-01
Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377
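Read as a standard reliability ratio, "scaling slope variance against effective error" suggests a form like the following; this is a paraphrase of the index, not the authors' exact notation:

```latex
\mathrm{ECR} \;=\; \frac{\sigma^{2}_{\mathrm{slope}}}{\sigma^{2}_{\mathrm{slope}} + \sigma^{2}_{\mathrm{eff.err}}}
```

Values near 1 would then indicate a design that resolves individual differences in change with little distortion from measurement error and the spacing of occasions.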
Shawahna, Ramzi; Masri, Dina; Al-Gharabeh, Rawan; Deek, Rawan; Al-Thayba, Lama; Halaweh, Masa
2016-02-01
To develop and achieve formal consensus on a definition of medication administration errors and on scenarios that should or should not be considered medication administration errors in hospitalised patient settings. Medication administration errors occur frequently in hospitalised patient settings. Currently, there is no formal consensus on a definition of medication administration errors or on which scenarios should or should not be considered medication administration errors. This was a descriptive study using the Delphi technique. A panel of experts (n = 50) recruited from major hospitals, nursing schools and universities in Palestine took part in the study. Three Delphi rounds were conducted to achieve consensus on a proposed definition of medication administration errors and on a series of 61 scenarios representing potential medication administration error situations, formulated into a questionnaire. In the first Delphi round, key contact nurses' views on medication administration errors were explored. In the second Delphi round, consensus was achieved to accept the proposed definition of medication administration errors and to include 36 (59%) scenarios and exclude 1 (1·6%) as medication administration errors. In the third Delphi round, consensus was achieved to include a further 14 (23%) and exclude 2 (3·3%) as medication administration errors, while the remaining eight (13·1%) were considered equivocal. Of the 61 scenarios included in the Delphi process, the experts decided to include 50 scenarios as medication administration errors, exclude three scenarios, and include or exclude eight scenarios depending on the individual clinical situation. Consensus on a definition and on scenarios representing medication administration errors can be achieved using formal consensus techniques. Researchers should be aware that using different definitions of medication administration errors, and including or excluding particular medication administration error situations, could significantly affect the rate of medication administration errors reported in their studies. Consensual definitions and medication administration error situations can be used in future epidemiology studies investigating medication administration errors in hospitalised patient settings, which may permit and promote direct comparisons of different studies. © 2015 John Wiley & Sons Ltd.
Medical errors: causes, consequences, emotional response and resulting behavioral change
Bari, Attia; Khan, Rehan Ahmed; Rathore, Ahsan Waheed
2016-01-01
Objective: To determine the causes of medical errors, the emotional and behavioral responses of pediatric medicine residents to their medical errors, and the resulting behavior changes affecting their future training. Methods: One hundred thirty postgraduate residents were included in the study. Residents were asked to complete a questionnaire about their errors and responses to their errors in three domains: emotional response, learning behavior and disclosure of the error. The names of the participants were kept confidential. Data were analyzed using SPSS version 20. Results: A total of 130 residents were included. The majority, 128 (98.5%), described some form of error: 24 (19%) serious errors, 63 (48%) minor errors, and 24 (19%) near misses; 2 (2%) had never encountered an error, and 17 (12%) did not mention the type of error but mentioned causes and consequences. Only 73 (57%) residents disclosed medical errors to their senior physician, and disclosure to the patient's family was negligible at 15 (11%). Fatigue due to long duty hours (85, 65%), inadequate experience (66, 52%), inadequate supervision (58, 48%) and complex cases (58, 45%) were common causes of medical errors. Negative emotions were common and were significantly associated with lack of knowledge (p = 0.001), missing warning signs (p < 0.001), not seeking advice (p = 0.003) and procedural complications (p = 0.001). Medical errors had a significant impact on residents' behavior: 119 (93%) residents became more careful, 109 (86%) increasingly sought advice from seniors, and 109 (86%) started paying more attention to details. Intrinsic causes of errors were significantly associated with increased information-seeking behavior and vigilance (p = 0.003 and p = 0.01, respectively). Conclusion: Medical errors committed by residents are inadequately disclosed to senior physicians and result in negative emotions, but there was a positive change in the residents' behavior, which resulted in improvement in their future training and patient care. PMID:27375682
Generated spiral bevel gears: Optimal machine-tool settings and tooth contact analysis
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Tsung, W. J.; Coy, J. J.; Heine, C.
1985-01-01
The geometry and kinematic errors of Gleason-generated spiral bevel gears were studied. A new method was devised for choosing optimal machine settings. These settings provide zero kinematic errors and an improved bearing contact. Kinematic errors are a major source of noise and vibration in spiral bevel gears, and the improved bearing contact gives better conditions for lubrication. A computer program for tooth contact analysis was developed, thereby confirming the new generation process. The new process is governed by the requirement that, during generation, there is directional constancy of the common normal of the contacting surfaces for the generating and generated surfaces of pinion and gear.
Ebola, team communication, and shame: but shame on whom?
Shannon, Sarah E
2015-01-01
Examined as an isolated situation, and through the lens of a rare and feared disease, Mr. Duncan's case seems ripe for second-guessing the physicians and nurses who cared for him. But viewed from the perspective of what we know about errors and team communication, his case is all too common. Nearly 440,000 patient deaths in the U.S. each year may be attributable to medical errors. Breakdowns in communication among health care teams contribute to the majority of these errors. The culture of health care does not seem to foster functional, effective communication between and among professionals. Why? And more importantly, why do we not do something about it?
Continued investigation of potential application of Omega navigation to civil aviation
NASA Technical Reports Server (NTRS)
Baxa, E. G., Jr.
1978-01-01
Major attention is given to an analysis of receiver repeatability in measuring OMEGA phase data. Repeatability is defined as the ability of two like receivers which are co-located to achieve the same LOP phase readings. Specific data analysis is presented. A propagation model is described which has been used in the analysis of propagation anomalies. Composite OMEGA analysis is presented in terms of carrier phase correlation analysis and the determination of carrier phase weighting coefficients for minimizing composite phase variation. Differential OMEGA error analysis is presented for receiver separations. Three-frequency analysis includes LOP error and position error based on three and four OMEGA transmissions.
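For uncorrelated carriers, the weighting coefficients that minimize composite phase variance are the classic inverse-variance weights; the report's carrier phase correlation analysis refines this picture. A minimal sketch with hypothetical variances:

```python
import numpy as np

# Hypothetical phase-noise variances of three OMEGA carrier frequencies (rad^2)
var = np.array([0.04, 0.09, 0.25])

w = (1.0 / var) / np.sum(1.0 / var)   # minimum-variance weights, summing to 1
composite_var = 1.0 / np.sum(1.0 / var)
print("weights:", np.round(w, 3))
print("composite phase variance:", round(composite_var, 4))
```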
CME Velocity and Acceleration Error Estimates Using the Bootstrap Method
NASA Technical Reports Server (NTRS)
Michalek, Grzegorz; Gopalswamy, Nat; Yashiro, Seiji
2017-01-01
The bootstrap method is used to determine errors of basic attributes of coronal mass ejections (CMEs) visually identified in images obtained by the Solar and Heliospheric Observatory (SOHO) mission's Large Angle and Spectrometric Coronagraph (LASCO) instruments. The basic parameters of CMEs are stored, among others, in a database known as the SOHO/LASCO CME catalog and are widely employed for many research studies. The basic attributes of CMEs (e.g. velocity and acceleration) are obtained from manually generated height-time plots. The subjective nature of manual measurements introduces random errors that are difficult to quantify. In many studies the impact of such measurement errors is overlooked. In this study we present a new possibility for estimating measurement errors in the basic attributes of CMEs. This approach is computer-intensive because it requires repeating the original data analysis procedure several times using replicate datasets; it is commonly called the bootstrap method in the literature. We show that the bootstrap approach can be used to estimate the errors of the basic attributes of CMEs having moderately large numbers of height-time measurements. The velocity errors are small in the vast majority of cases and depend mostly on the number of height-time points measured for a particular event. In the case of acceleration, the errors are significant, and for more than half of all CMEs they are larger than the acceleration itself.
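The bootstrap estimate of a velocity error can be sketched directly: resample the height-time points with replacement, refit a straight line each time, and take the spread of the fitted slopes. The data below are synthetic, not catalog measurements.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 3600, 12)                        # 12 height-time points (s)
h = 2.0e6 + 450.0 * t + rng.normal(0, 2e5, t.size)  # synthetic heights (km)

slopes = []
for _ in range(2000):
    idx = rng.integers(0, t.size, t.size)   # resample points with replacement
    slopes.append(np.polyfit(t[idx], h[idx], 1)[0])
slopes = np.array(slopes)
print(f"velocity = {slopes.mean():.0f} ± {slopes.std():.0f} km/s (bootstrap)")
```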
Balancing aggregation and smoothing errors in inverse models
Turner, A. J.; Jacob, D. J.
2015-06-30
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
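The aggregation-smoothing tradeoff can be illustrated with a toy one-dimensional example, assuming direct noisy observations of the state rather than a full inverse model: coarsening the state by merging adjacent elements suppresses noise in each estimate (the smoothing-type error) but imposes uniformity within blocks (the aggregation error), so total error is smallest at an intermediate dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x_true = np.sin(np.linspace(0, 6 * np.pi, n)) + 0.2 * rng.normal(size=n)
y = x_true + 0.8 * rng.normal(size=n)        # noisy direct observations

for block in (1, 4, 16, 64):
    est = y.reshape(n // block, block).mean(axis=1)  # one value per block
    x_hat = np.repeat(est, block)                    # uniform inside blocks
    rmse = np.sqrt(np.mean((x_hat - x_true) ** 2))
    print(f"state dimension {n // block:3d}: total RMSE = {rmse:.3f}")
```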
Quotation accuracy in medical journal articles-a systematic review and meta-analysis.
Jergas, Hannah; Baethge, Christopher
2015-01-01
Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations that do not serve their purpose (quotation errors) may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9%, 95% CI [8.4, 16.6], 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4], respectively. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including a risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress.
Prevention of medication errors: detection and audit.
Montesi, Germana; Lechi, Alessandro
2009-06-01
1. Medication errors have important implications for patient safety, and their identification is a main target in improving clinical practice, in order to prevent adverse events. 2. Error detection is the first crucial step. Approaches to this are likely to be different in research and routine care, and the most suitable must be chosen according to the setting. 3. The major methods for detecting medication errors and associated adverse drug-related events are chart review, computerized monitoring, administrative databases, and claims data, using direct observation, incident reporting, and patient monitoring. All of these methods have both advantages and limitations. 4. Reporting discloses medication errors, can trigger warnings, and encourages the diffusion of a culture of safe practice. Combining and comparing data from various sources increases the reliability of the system. 5. Error prevention can be planned by means of retroactive and proactive tools, such as audit and Failure Mode, Effect, and Criticality Analysis (FMECA). Audit is also an educational activity, which promotes high-quality care; it should be carried out regularly. In an audit cycle we can compare what is actually done against reference standards and put in place corrective actions to improve the performances of individuals and systems. 6. Patient safety must be the first aim in every setting, in order to build safer systems, learning from errors and reducing the human and fiscal costs.
Zhang, Wenjian; Huynh, Carolyn P; Abramovitch, Kenneth; Leon, Inga-Lill K; Arvizu, Liliana
2012-06-01
The objective of this study was to compare the technical errors of intraoral radiographs exposed on film versus photostimulable phosphor (PSP) plates. The intraoral radiographic images exposed on phantoms in preclinical practical exams of dental and dental hygiene students were used. Each exam consisted of 10 designated periapical and bitewing views. A total of 107 film sets and 122 PSP sets were evaluated for technique errors, including placement, elongation, foreshortening, overlapping, cone cut, receptor bending, density, mounting, dot in apical area, and others. Some errors were further subcategorized as minor, major, or remake, depending on severity. The percentages of radiographs with various errors were compared between film and PSP by Fisher's exact test. Compared with film, there were significantly fewer PSP foreshortening, elongation, and bending errors, but significantly more placement and overlapping errors. Using a wrong-sized receptor, due to the similarity of the colors of the package sleeves, is an error unique to PSP. Optimum image quality is attainable with PSP plates as well as with film. When switching from film to a PSP digital environment, more emphasis is necessary on placing the PSP plates, especially those with an excessive packet edge, and then correcting the corresponding angulation for beam alignment. Better design for improving intraoral visibility and easy identification of different-sized PSP plates will improve the clinician's technical performance with this receptor.
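Comparisons such as "significantly more placement and overlapping errors" reduce to 2x2 contingency tables. A sketch of the test with hypothetical counts, not the study's data:

```python
from scipy.stats import fisher_exact

# Hypothetical counts of images with/without overlapping errors, film vs. PSP
table = [[30, 1040],   # film: error / no error
         [65, 1155]]   # PSP:  error / no error
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```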
[Responsibility due to medication errors in France: a study based on SHAM insurance data].
Theissen, A; Orban, J-C; Fuz, F; Guerin, J-P; Flavin, P; Albertini, S; Maricic, S; Saquet, D; Niccolai, P
2015-03-01
Safe medication practice in the hospital is a major public health challenge. The drug supply chain is a complex process and a potential source of errors and harm to the patient. SHAM is the largest French provider of medical liability insurance and a relevant source of data on health care complications. The main objective of the study was to analyze the type and cause of medication errors declared to SHAM that led to a conviction by a court. We performed a retrospective study of insurance claims provided by SHAM involving a medication error and leading to a conviction over a 6-year period (2005 to 2010). Thirty-one cases were analyzed, 21 for scheduled activity and 10 for emergency activity. The consequences of the claims were mostly serious (12 deaths, 14 serious complications, 5 simple complications). The types of medication errors were drug monitoring errors (11 cases), administration errors (5 cases), overdoses (6 cases), allergies (4 cases), contraindications (3 cases) and omissions (2 cases). The intravenous route of administration was involved in 19 of 31 cases (61%). The causes identified by the court experts were errors related to service organization (11), medical practice (11) or nursing practice (13). Only one claim was due to the hospital pharmacy. Claims related to the drug supply chain are infrequent but potentially serious. These data should help strengthen the quality approach in risk management. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations
Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T.; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P.; Rötter, Reimund P.; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank
2016-01-01
We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying largely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range or larger than the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations. PMID:27055028
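The relative mean absolute error (rMAE) used here can be computed compactly; normalizing by the mean reference yield is our assumption about the exact convention.

```python
import numpy as np

def rmae(y_aggregated, y_reference):
    """Relative mean absolute error of yields simulated with aggregated
    inputs against yields simulated at native input resolution."""
    y_agg = np.asarray(y_aggregated, dtype=float)
    y_ref = np.asarray(y_reference, dtype=float)
    return np.mean(np.abs(y_agg - y_ref)) / np.mean(y_ref)

# Hypothetical winter-wheat yields (t/ha): 100 km vs. 1 km input resolution
print(f"rMAE = {rmae([7.1, 6.4, 8.0], [6.5, 6.9, 7.4]):.1%}")
```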
Furness, Alan R; Callan, Richard S; Mackert, J Rodway; Mollica, Anthony G
2018-01-01
The aim of this study was to evaluate the effectiveness of the Planmeca Compare software in identifying and quantifying a common critical error in dental students' crown preparations. In 2014-17, a study was conducted at one U.S. dental school that compared an ideal crown preparation, made by a faculty member on a dentoform, with modified preparations. Two types of preparation errors were created by adding flowable composite to the occlusal surface of identical dies of the preparations to represent underreduction of the distolingual cusp. The error was divided into two classes: the minor class allowed for 1 mm of occlusal clearance, and the major class allowed for no occlusal clearance. The preparations were then digitally evaluated against the ideal preparation using Planmeca Compare. Percent comparison values were obtained from each trial and averaged together. False positives and false negatives were also identified and used to determine the accuracy of the evaluation. Critical errors that did not involve a substantial change in the surface area of the preparation were inconsistently identified. Within the limitations of this study, the authors concluded that the Compare software was unable to consistently identify common critical errors within an acceptable degree of error.
ERIC Educational Resources Information Center
Chan, Alice Yin Wa
2005-01-01
Building on the results of a small-scale survey which investigated the general use of dictionaries by university English majors in Hong Kong using a questionnaire survey and their specific use of dictionaries using an error correction task, this article discusses the tactics these students employed and the problems they encountered when using a…
Correcting for particle counting bias error in turbulent flow
NASA Technical Reports Server (NTRS)
Edwards, R. V.; Baratuci, W.
1985-01-01
An ideal seeding device would generate particles that exactly follow the flow, but even such particles are a major source of error: there is a particle counting bias wherein the probability of measuring a velocity is a function of the velocity itself. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular, it is sometimes difficult to know whether the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way, and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation was constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.
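The bias is easy to reproduce in a toy Monte Carlo, assuming the probability of measuring a particle is proportional to its speed: the arithmetic mean of the measured velocities then overestimates the true mean, and one classic correction, weighting each sample by the inverse of its speed, removes the bias. All parameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
u_mean, u_rms = 10.0, 3.0                  # true mean and turbulence level
u = rng.normal(u_mean, u_rms, 200_000)     # velocities of passing particles

# Counting bias: chance of being measured proportional to particle speed
keep = rng.random(u.size) < np.abs(u) / np.abs(u).max()
u_meas = u[keep]

w = 1.0 / np.abs(u_meas)                   # inverse-velocity weights
print(f"true mean      : {u_mean:.2f}")
print(f"biased mean    : {u_meas.mean():.2f}")
print(f"corrected mean : {np.sum(w * u_meas) / np.sum(w):.2f}")
```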
Huckels-Baumgart, Saskia; Baumgart, André; Buschmann, Ute; Schüpfer, Guido; Manser, Tanja
2016-12-21
Interruptions and errors during the medication process are common, but published literature shows no evidence supporting whether separate medication rooms are an effective single intervention in reducing interruptions and errors during medication preparation in hospitals. We tested the hypothesis that the rate of interruptions and reported medication errors would decrease as a result of the introduction of separate medication rooms. Our aim was to evaluate the effect of separate medication rooms on interruptions during medication preparation and on self-reported medication error rates. We performed a preintervention and postintervention study using direct structured observation of nurses during medication preparation and daily structured medication error self-reporting of nurses by questionnaires in 2 wards at a major teaching hospital in Switzerland. A volunteer sample of 42 nurses was observed preparing 1498 medications for 366 patients over 17 hours preintervention and postintervention on both wards. During 122 days, nurses completed 694 reporting sheets containing 208 medication errors. After the introduction of the separate medication room, the mean interruption rate decreased significantly from 51.8 to 30 interruptions per hour (P < 0.01), and the interruption-free preparation time increased significantly from 1.4 to 2.5 minutes (P < 0.05). Overall, the mean medication error rate per day was also significantly reduced after implementation of the separate medication room from 1.3 to 0.9 errors per day (P < 0.05). The present study showed the positive effect of a hospital-based intervention; after the introduction of the separate medication room, the interruption and medication error rates decreased significantly.
Rectifying calibration error of Goldmann applanation tonometer is easy!
Choudhari, Nikhil S; Moorthy, Krishna P; Tungikar, Vinod B; Kumar, Mohan; George, Ronnie; Rao, Harsha L; Senthil, Sirisha; Vijaya, Lingam; Garudadri, Chandra Sekhar
2014-11-01
Purpose: The Goldmann applanation tonometer (GAT) is the current gold standard tonometer. However, its calibration error is common and can go unnoticed in clinics. Repair by the manufacturer has limitations. The purpose of this report is to describe a self-taught technique for rectifying the calibration error of the GAT. Materials and Methods: Twenty-nine slit-lamp-mounted Haag-Streit Goldmann tonometers (Model AT 900 C/M; Haag-Streit, Switzerland) were included in this cross-sectional interventional pilot study. The technique for rectifying the calibration error of the tonometer involved cleaning and lubricating the instrument, followed by alignment of weights when lubrication alone did not suffice. We followed the South East Asia Glaucoma Interest Group's definition of calibration error tolerance (acceptable GAT calibration error within ±2, ±3 and ±4 mm Hg at the 0-, 20- and 60-mm Hg testing levels, respectively). Results: Twelve of 29 (41.3%) GATs were out of calibration. The ranges of positive and negative calibration error at the clinically most important 20-mm Hg testing level were 0.5 to 20 mm Hg and -0.5 to -18 mm Hg, respectively. Cleaning and lubrication alone sufficed to rectify the calibration error of 11 (91.6%) faulty instruments. Only one (8.3%) faulty GAT required alignment of the counterweight. Conclusions: Rectification of the calibration error of the GAT is possible in-house. Cleaning and lubrication of the GAT can be carried out even by eye care professionals and may suffice to rectify calibration error in the majority of faulty instruments. Such an exercise may drastically reduce the downtime of this gold standard tonometer.
2011-01-01
Background The generation and analysis of high-throughput sequencing data are becoming a major component of many studies in molecular biology and medical research. Illumina's Genome Analyzer (GA) and HiSeq instruments are currently the most widely used sequencing devices. Here, we comprehensively evaluate properties of genomic HiSeq and GAIIx data derived from two plant genomes and one virus, with read lengths of 95 to 150 bases. Results We provide quantifications and evidence for GC bias, error rates, error sequence context, effects of quality filtering, and the reliability of quality values. By combining different filtering criteria we reduced error rates 7-fold at the expense of discarding 12.5% of alignable bases. While overall error rates are low in HiSeq data we observed regions of accumulated wrong base calls. Only 3% of all error positions accounted for 24.7% of all substitution errors. Analyzing the forward and reverse strands separately revealed error rates of up to 18.7%. Insertions and deletions occurred at very low rates on average but increased to up to 2% in homopolymers. A positive correlation between read coverage and GC content was found depending on the GC content range. Conclusions The errors and biases we report have implications for the use and the interpretation of Illumina sequencing data. GAIIx and HiSeq data sets show slightly different error profiles. Quality filtering is essential to minimize downstream analysis artifacts. Supporting previous recommendations, the strand-specificity provides a criterion to distinguish sequencing errors from low abundance polymorphisms. PMID:22067484
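The reliability of quality values rests on the Phred convention, under which a base of quality Q carries an assumed error probability of 10^(-Q/10). A sketch of a mean-quality read filter under that convention; the threshold and read are hypothetical.

```python
import numpy as np

def phred_error_prob(q):
    """Phred convention: quality Q -> base-call error probability."""
    return 10.0 ** (-np.asarray(q, dtype=float) / 10.0)

def passes_filter(qualities, min_mean_q=30.0):
    """Keep a read only if its mean base quality reaches the threshold."""
    return float(np.mean(qualities)) >= min_mean_q

read_q = [34, 36, 32, 20, 38, 35]
print("per-base error probabilities:", np.round(phred_error_prob(read_q), 4))
print("read kept:", passes_filter(read_q))
```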
Detection and correction of prescription errors by an emergency department pharmacy service.
Stasiak, Philip; Afilalo, Marc; Castelino, Tanya; Xue, Xiaoqing; Colacone, Antoinette; Soucy, Nathalie; Dankoff, Jerrald
2014-05-01
Emergency departments (EDs) are recognized as a high-risk setting for prescription errors. Pharmacist involvement may be important in reviewing prescriptions to identify and correct errors. The objectives of this study were to describe the frequency and type of prescription errors detected by pharmacists in EDs, determine the proportion of errors that could be corrected, and identify factors associated with prescription errors. This prospective observational study was conducted in a tertiary care teaching ED on 25 consecutive weekdays. Pharmacists reviewed all documented prescriptions and flagged and corrected errors for patients in the ED. We collected information on patient demographics, details on prescription errors, and the pharmacists' recommendations. A total of 3,136 ED prescriptions were reviewed. The proportion of prescriptions in which a pharmacist identified an error was 3.2% (99 of 3,136; 95% confidence interval [CI] 2.5-3.8). The types of identified errors were wrong dose (28 of 99, 28.3%), incomplete prescription (27 of 99, 27.3%), wrong frequency (15 of 99, 15.2%), wrong drug (11 of 99, 11.1%), wrong route (1 of 99, 1.0%), and other (17 of 99, 17.2%). The pharmacy service intervened and corrected 78 (78 of 99, 78.8%) errors. Factors associated with prescription errors were patient age over 65 (odds ratio [OR] 2.34; 95% CI 1.32-4.13), prescriptions with more than one medication (OR 5.03; 95% CI 2.54-9.96), and those written by emergency medicine residents compared to attending emergency physicians (OR 2.21, 95% CI 1.18-4.14). Pharmacists in a tertiary ED are able to correct the majority of prescriptions in which they find errors. Errors are more likely to be identified in prescriptions written for older patients, those containing multiple medication orders, and those prescribed by emergency residents.
Visual symptoms associated with refractive errors among Thangka artists of Kathmandu valley.
Dhungel, Deepa; Shrestha, Gauri Shankar
2017-12-21
Prolonged near work, especially among people with uncorrected refractive errors, is considered a potential source of visual symptoms. The present study aims to determine the visual symptoms and their association with refractive errors among Thangka artists. In a descriptive cross-sectional study, 242 (46.1%) of 525 Thangka artists examined, aged 16 to 39 years and comprising 112 participants with significant refractive errors and 130 emmetropic participants, were enrolled from six Thangka painting schools. The visual symptoms were assessed using a structured questionnaire consisting of nine items scored on 0 to 6 consecutive scales. The eye examination included detailed anterior and posterior segment examination, objective and subjective refraction, and assessment of heterophoria, vergence and accommodation. Symptoms were presented as percentages and medians. Variation in the distribution of participants and symptoms was analysed using the Kruskal-Wallis test, and correlations with the Pearson correlation coefficient. A significance level of 0.05 was applied for a 95% confidence interval. The majority of participants (65.1%) in the refractive error group (REG) were above the age of 30 years, with a male predominance (61.6%), compared with participants in the normal cohort group (NCG), where the majority (72.3%) were below 30 years of age and female (51.5%). Overall, visual symptoms were frequent among Thangka artists. However, blurred vision (p = 0.003) and dry eye (p = 0.004) were more common in the REG than in the NCG. Females reported slightly more symptoms than males. Most of the symptoms, such as sore/aching eyes (p = 0.003), dryness (p = 0.005) and blurred vision (p = 0.02), were significantly associated with astigmatism. Thangka artists present with a significant proportion of refractive errors and visual symptoms, especially among females. The most commonly reported symptoms were blurred vision, dry eye and watering of the eye. The visual symptoms were most strongly correlated with astigmatism.
Multiple-generator errors are unavoidable under model misspecification.
Jewett, D L; Zhang, Z
1995-08-01
Model misspecification poses a major problem for dipole source localization (DSL) because it causes insidious multiple-generator errors (MulGenErrs) to occur in the fitted dipole parameters. This paper describes how and why this occurs, based upon simple algebraic considerations. MulGenErrs must occur, to some degree, in any DSL analysis of real data, because model misspecification is always present and, mathematically, the equations for simultaneously active generators must take a different form from the equations for each generator active alone.
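The core claim, that a misspecified single-generator fit cannot recover the parameters of either true generator, can be illustrated with a toy linear model; the lead fields below are random placeholders rather than a real head model:

```python
# Toy illustration (not the paper's derivation): fitting a one-generator
# linear model to data produced by two simultaneously active generators
# yields a parameter that matches neither generator alone.
import numpy as np

rng = np.random.default_rng(0)
L = rng.normal(size=(16, 2))   # hypothetical lead fields of generators A and B
q = np.array([1.0, 0.7])       # both generators simultaneously active
data = L @ q                   # measured field at 16 sensors

# Misspecified model: assume only generator A exists and fit its strength.
qA_fit, *_ = np.linalg.lstsq(L[:, :1], data, rcond=None)
print("true strength of A:", q[0], "| fitted under misspecification:", qA_fit[0])
```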
Uncertainties in predicting solar panel power output
NASA Technical Reports Server (NTRS)
Anspaugh, B.
1974-01-01
The problem of calculating solar panel power output at launch and during a space mission is considered. The major sources of uncertainty and error in predicting the post launch electrical performance of the panel are considered. A general discussion of error analysis is given. Examples of uncertainty calculations are included. A general method of calculating the effect on the panel of various degrading environments is presented, with references supplied for specific methods. A technique for sizing a solar panel for a required mission power profile is developed.
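Analyses of this kind typically combine independent relative uncertainties in root-sum-square fashion; a generic sketch with invented component names and values, not taken from the report:

```python
# Generic root-sum-square propagation of independent relative uncertainties,
# the kind of calculation a panel-power error analysis uses. All component
# names and values below are illustrative assumptions.
import math

relative_uncertainties = {
    "solar intensity": 0.02,
    "cell temperature model": 0.015,
    "radiation degradation": 0.03,
    "measurement calibration": 0.01,
}
total = math.sqrt(sum(u**2 for u in relative_uncertainties.values()))
print(f"combined power uncertainty: {total:.1%}")  # ~4.0%
```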
Space shuttle navigation analysis
NASA Technical Reports Server (NTRS)
Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.
1976-01-01
A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.
Steering without navigation equipment: the lamentable state of Australian health policy reform
2009-01-01
Background Commentary on health policy reform in Australia often commences with an unstated logical error: Australians' health is good, therefore the Australian Health System is good. This possibly explains the disconnect between the options discussed, the areas needing reform and the generally self-congratulatory tone of the discussion: a good system needs (relatively) minor improvement. Results This paper comments on some issues of particular concern to Australian health policy makers and some areas needing urgent reform. The two sets of issues do not overlap. It is suggested that there are two fundamental reasons for this. The first is the failure to develop governance structures which promote the identification and resolution of problems according to their importance. The second and related failure is the failure to equip the health services industry with satisfactory navigation equipment - independent research capacity, independent reporting and evaluation - on a scale commensurate with the needs of the country's largest industry. These two failures together deprive the health system - as a system - of the chief driver of progress in every successful industry in the 20th Century. Conclusion Concluding comment is made on the National Health and Hospitals Reform Commission (NHHRC). This continued the tradition of largely evidence-free argument and decision making. It failed to identify and properly analyse major system failures, the reasons for them, and the form of governance which would maximise the likelihood of future error learning. The NHHRC itself failed to learn from past policy failures, a key lesson from which is that a major - and possibly the major - obstacle to reform is government itself. The Commission virtually ignored the issue of governance. The endorsement of a monopolised system driven by benevolent managers will miss the major lesson of history, which is illustrated by Australia's own failures. PMID:19948044
Mehrad, Mitra; Chernock, Rebecca D; El-Mofty, Samir K; Lewis, James S
2015-12-01
Medical error is a significant problem in the United States, and pathologic diagnoses are a significant source of errors. Prior studies have shown that second-opinion pathology review results in clinically major diagnosis changes in approximately 0.6% to 5.8% of patients. The few studies specifically on head and neck pathology have suggested even higher rates of changed diagnoses. Our objectives were to evaluate the diagnostic discrepancy rates in patients referred to our institution, where all such cases are reviewed by a head and neck subspecialty service, and to identify specific areas more susceptible to errors. Five hundred consecutive, scanned head and neck pathology reports from patients referred to our institution were compared for discrepancies between the outside and in-house diagnoses. Major discrepancies were defined as those resulting in a significant change in patient clinical management and/or prognosis. Major discrepancies occurred in 20 cases (4% overall). Informative follow-up material was available for 11 of the 20 patients (55.0%), and in all 11 (100%) the second opinion was supported. Dysplasia versus invasive squamous cell carcinoma was the most common area of discrepancy (7 of 20; 35%), and by anatomic subsite, the sinonasal tract (4 of 21; 19.0%) had the highest rate of discrepant diagnoses. Of the major discrepant diagnoses, 12 (60%) involved a change from benign to malignant, 1 (5%) a change from malignant to benign, and 6 (30%) involved tumor classification. Head and neck pathology is a relatively high-risk area, prone to erroneous diagnoses in a small fraction of patients. This study supports the importance of second-opinion review by subspecialized pathologists for the best care of patients.
Drug error in paediatric anaesthesia: current status and where to go now.
Anderson, Brian J
2018-06-01
Medication errors in paediatric anaesthesia and the perioperative setting continue to occur despite widespread recognition of the problem and published advice for reduction of this predicament at international, national, local and individual levels. Current literature was reviewed to ascertain drug error rates and to appraise causes and proposed solutions to reduce these errors. The medication error incidence remains high. There is documentation of reduction through identification of causes with consequent education and application of safety analytics and quality improvement programs in anaesthesia departments. Children remain at higher risk than adults because of additional complexities such as drug dose calculations, increased susceptibility to some adverse effects and changes associated with growth and maturation. Major improvements are best made through institutional system changes rather than a commitment to do better on the part of each practitioner. Medication errors in paediatric anaesthesia represent an important risk to children and most are avoidable. There is now an understanding of the genesis of adverse drug events and this understanding should facilitate the implementation of known effective countermeasures. An institution-wide commitment and strategy are the basis for a worthwhile and sustained improvement in medication safety.
NASA Astrophysics Data System (ADS)
Jun, Brian; Giarra, Matthew; Golz, Brian; Main, Russell; Vlachos, Pavlos
2016-11-01
We present a methodology to mitigate the major sources of error associated with two-dimensional confocal laser scanning microscopy (CLSM) images of nanoparticles flowing through a microfluidic channel. The correlation-based velocity measurements from CLSM images are subject to random error due to the Brownian motion of nanometer-sized tracer particles, and a bias error due to the formation of images by raster scanning. Here, we develop a novel ensemble phase correlation with dynamic optimal filter that maximizes the correlation strength, which diminishes the random error. In addition, we introduce an analytical model of CLSM measurement bias error correction due to two-dimensional image scanning of tracer particles. We tested our technique using both synthetic and experimental images of nanoparticles flowing through a microfluidic channel. We observed that our technique reduced the error by up to a factor of ten compared to ensemble standard cross correlation (SCC) for the images tested in the present work. Subsequently, we will assess our framework further, by interrogating nanoscale flow in the cell culture environment (transport within the lacunar-canalicular system) to demonstrate our ability to accurately resolve flow measurements in a biological system.
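For readers unfamiliar with the underlying operation, a minimal sketch of plain single-pair phase correlation follows; the ensemble averaging and dynamic optimal filter described above are not reproduced here:

```python
# Minimal sketch of basic phase correlation between two images, the building
# block of the ensemble method described above (filtering omitted).
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer shift of image a relative to image b."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    R = F / np.maximum(np.abs(F), 1e-12)     # keep phase, discard magnitude
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks past the midpoint to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))
print(phase_correlation(shifted, img))  # (3, -5)
```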
Flight Evaluation of Center-TRACON Automation System Trajectory Prediction Process
NASA Technical Reports Server (NTRS)
Williams, David H.; Green, Steven M.
1998-01-01
Two flight experiments (Phase 1 in October 1992 and Phase 2 in September 1994) were conducted to evaluate the accuracy of the Center-TRACON Automation System (CTAS) trajectory prediction process. The Transport Systems Research Vehicle (TSRV) Boeing 737 based at Langley Research Center flew 57 arrival trajectories that included cruise and descent segments; at the same time, descent clearance advisories from CTAS were followed. Actual trajectories of the airplane were compared with the trajectories predicted by the CTAS trajectory synthesis algorithms and airplane Flight Management System (FMS). Trajectory prediction accuracy was evaluated over several levels of cockpit automation that ranged from a conventional cockpit to performance-based FMS vertical navigation (VNAV). Error sources and their magnitudes were identified and measured from the flight data. The major source of error during these tests was found to be the predicted winds aloft used by CTAS. The most significant effect related to flight guidance was the cross-track and turn-overshoot errors associated with conventional VOR guidance. FMS lateral navigation (LNAV) guidance significantly reduced both the cross-track and turn-overshoot error. Pilot procedures and VNAV guidance were found to significantly reduce the vertical profile errors associated with atmospheric and airplane performance model errors.
Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O
2015-02-01
To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained at least one error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care.
The computation of equating errors in international surveys in education.
Monseur, Christian; Berezner, Alla
2007-01-01
Since the IEA's Third International Mathematics and Science Study, one of the major objectives of international surveys in education has been to report trends in achievement. The names of the two current IEA surveys reflect this growing interest: Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). Similarly a central concern of the OECD's PISA is with trends in outcomes over time. To facilitate trend analyses these studies link their tests using common item equating in conjunction with item response modelling methods. IEA and PISA policies differ in terms of reporting the error associated with trends. In IEA surveys, the standard errors of the trend estimates do not include the uncertainty associated with the linking step while PISA does include a linking error component in the standard errors of trend estimates. In other words, PISA implicitly acknowledges that trend estimates partly depend on the selected common items, while the IEA's surveys do not recognise this source of error. Failing to recognise the linking error leads to an underestimation of the standard errors and thus increases the Type I error rate, thereby resulting in reporting of significant changes in achievement when in fact these are not significant. The growing interest of policy makers in trend indicators and the impact of the evaluation of educational reforms appear to be incompatible with such underestimation. However, the procedure implemented by PISA raises a few issues about the underlying assumptions for the computation of the equating error. After a brief introduction, this paper will describe the procedure PISA implemented to compute the linking error. The underlying assumptions of this procedure will then be discussed. Finally an alternative method based on replication techniques will be presented, based on a simulation study and then applied to the PISA 2000 data.
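One common formulation treats the linking error as the standard error of the mean shift in common-item difficulties between cycles; a sketch with invented item difficulties, which may differ in detail from the PISA procedure:

```python
# Hedged sketch of a common linking-error formulation: the standard error of
# the mean difficulty shift across common items. Values are illustrative.
import numpy as np

d_2000 = np.array([-1.2, -0.4, 0.1, 0.5, 0.9, 1.3])  # item difficulties, cycle 1
d_2003 = np.array([-1.1, -0.5, 0.3, 0.4, 1.0, 1.2])  # same items, cycle 2
shift = d_2003 - d_2000
m = len(shift)
linking_error = shift.std(ddof=1) / np.sqrt(m)
print(f"linking error: {linking_error:.3f} logits")
```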
NASA Astrophysics Data System (ADS)
Gebregiorgis, A. S.; Peters-Lidard, C. D.; Tian, Y.; Hossain, F.
2011-12-01
Hydrologic modeling has benefited from the operational production of high-resolution satellite rainfall products. Global coverage, near-real-time availability, and fine spatial and temporal sampling resolutions have advanced the application of physically based semi-distributed and distributed hydrologic models to a wide range of environmental decision-making processes. Despite these successes, uncertainties arising from the indirect nature of satellite rainfall estimates and from the hydrologic models themselves remain a challenge for making meaningful predictions. This study comprises breaking down the total satellite rainfall error into three independent components (hit bias, missed precipitation, and false alarm), characterizing them as a function of land use and land cover (LULC), and tracing the sources of simulated soil moisture and runoff error in a physically based distributed hydrologic model. Here, we asked: in what way do the three independent components of the total bias - hit bias, missed precipitation, and false precipitation - affect the estimation of soil moisture and runoff in physically based hydrologic models? To address this question, we implemented a systematic approach, characterizing and decomposing the total satellite rainfall error as a function of land use and land cover in the Mississippi basin. This helps identify the major sources of soil moisture and runoff error in hydrologic model simulations and traces that information back to algorithm development and sensor type, which ultimately supports algorithm improvement, applications, and data assimilation for GPM. For forest, woodland, and human land use systems, soil moisture error was mainly dictated by the total bias for the 3B42-RT, CMORPH, and PERSIANN products. In contrast, runoff error was largely dominated by the hit bias rather than the total bias. This difference arises from missed precipitation, a major contributor to the total bias during both the summer and winter seasons. Missed precipitation, most likely light rain and rain over snow cover, has a significant effect on soil moisture but is less capable of producing runoff, leaving runoff dependent on the hit bias only.
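The decomposition named above can be written down directly for paired satellite/gauge totals; a sketch with illustrative numbers and a simple zero threshold for rain/no-rain (the products' actual detection thresholds differ):

```python
# Sketch of the three-way error decomposition: hit bias, missed
# precipitation, and false alarms. Data and threshold are illustrative.
import numpy as np

sat   = np.array([0.0, 2.5, 1.0, 0.0, 4.0])   # satellite rain estimate (mm)
gauge = np.array([1.0, 2.0, 0.0, 0.0, 5.0])   # ground reference (mm)

hit    = (sat > 0) & (gauge > 0)
missed = (sat == 0) & (gauge > 0)
false  = (sat > 0) & (gauge == 0)

hit_bias   = np.sum(sat[hit] - gauge[hit])    # error when both detect rain
missed_p   = -np.sum(gauge[missed])           # rain the satellite did not see
false_p    = np.sum(sat[false])               # rain the satellite invented

# The three components sum to the total bias, sum(sat) - sum(gauge).
total_bias = hit_bias + missed_p + false_p
print(hit_bias, missed_p, false_p, total_bias)  # -0.5 -1.0 1.0 -0.5
```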
Sairanen, V; Kuusela, L; Sipilä, O; Savolainen, S; Vanhatalo, S
2017-02-15
Diffusion Tensor Imaging (DTI) is commonly challenged by subject motion during data acquisition, which often leads to corrupted image data. The current procedure in DTI analysis is to correct or completely reject such data before tensor estimation; however, assessing the reliability and accuracy of the estimated tensor in such situations has evaded previous studies. This work aims to define the loss of data accuracy with increasing image rejections, and to define a robust method for assessing the reliability of the result at voxel level. We carried out simulations of every possible sub-scheme (N=1,073,567,387) of the Jones30 gradient scheme, followed by confirmation with MRI data from four newborn and three adult subjects. We assessed the relative error of the tensor estimates most commonly used in DTI and tractography studies: fractional anisotropy (FA) and the major orientation vector (V1), respectively. The error was estimated using two measures, the widely used electric potential (EP) criterion and the rotationally variant condition number (CN). Our results show that CN and EP are comparable in situations with very few rejections, but CN becomes clearly more sensitive in depicting errors as more gradient vectors and images are rejected. The error in FA and V1 was also found to depend on the actual FA level in the given voxel; low actual FA levels were related to high relative errors in the FA and V1 estimates. Finally, the results were confirmed with clinical MRI data, which showed that the errors after rejections are, indeed, inhomogeneous across brain regions. The FA and V1 errors become progressively larger when moving from the thick white matter bundles towards more superficial subcortical structures. Our findings suggest that (i) CN is a useful estimator of data reliability at voxel level, and (ii) DTI preprocessing with data rejections leads to major challenges when assessing brain tissue with lower FA levels, such as the entire newborn brain, as well as the adult superficial, subcortical areas commonly traced in precise connectivity analyses between cortical regions. Copyright © 2016 Elsevier Inc. All rights reserved.
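The condition number in question is that of the diffusion-tensor design matrix built from the remaining gradient directions; a sketch with an illustrative direction set (not the Jones30 scheme):

```python
# Sketch: condition number (CN) of the tensor design matrix for a gradient
# sub-scheme. Directions are illustrative, not the Jones30 set.
import numpy as np

def design_matrix(bvecs):
    """Rows [gx^2, gy^2, gz^2, 2gxgy, 2gxgz, 2gygz] for each unit gradient."""
    x, y, z = bvecs[:, 0], bvecs[:, 1], bvecs[:, 2]
    return np.column_stack([x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z])

bvecs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1],
                  [1, -1, 0], [1, 0, -1]], dtype=float)
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)

full = design_matrix(bvecs)          # all 8 images available
rejected = design_matrix(bvecs[:6])  # two images rejected for motion
print("CN full:", np.linalg.cond(full))
print("CN after rejections:", np.linalg.cond(rejected))  # typically larger
```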
Accuracy of Press Reports in Astronomy
NASA Astrophysics Data System (ADS)
Schaefer, B. E.; Hurley, K.; Nemiroff, R. J.; Branch, D.; Perlmutter, S.; Schaefer, M. W.; Consolmagno, G. J.; McSween, H.; Strom, R.
1999-12-01
Most Americans learn about modern science from press reports, yet such articles have a bad reputation among scientists. We have performed a study of 403 news articles on three topics (gamma-ray astronomy, supernovae, and Mars) to quantitatively answer the questions 'How accurate are press reports of astronomy?' and 'What fraction of the basic science claims in the press are correct?' We have taken all articles on the topics from five news sources (UPI, NYT, S&T, SN, and 5 newspapers) for one decade (1987-1996). All articles were evaluated for a variety of errors, ranging from the fundamental to the trivial. For 'trivial' errors, S&T and SN were virtually perfect, while the various newspapers averaged roughly one trivial error every two articles. For meaningful errors, we found that none of our 403 articles significantly misled the reader or misrepresented the science. So a major result of our study is that reporters should be rehabilitated into the good graces of astronomers, since they are actually doing a good job. For our second question, we rated each story with the probability that its basic new science claim is correct. We found that the average probability over all stories is 70%, regardless of source, topic, importance, or quoted pundit. How do we reconcile our findings that the press does not make significant errors, yet the basic science presented is 30% wrong? The reason is that the nature of news reporting is to present front-line science, and the nature of front-line science is that reliable conclusions have not yet been reached. So a second major result of our study is to make the distinction between textbook science (with reliability near 100%) and front-line science, which you read in the press (with reliability near 70%).
Research Design and Statistical Methods in Indian Medical Journals: A Retrospective Survey
Hassan, Shabbeer; Yellur, Rajashree; Subramani, Pooventhan; Adiga, Poornima; Gokhale, Manoj; Iyer, Manasa S.; Mayya, Shreemathi S.
2015-01-01
Good quality medical research generally requires not only an expertise in the chosen medical field of interest but also a sound knowledge of statistical methodology. The number of medical research articles which have been published in Indian medical journals has increased quite substantially in the past decade. The aim of this study was to collate all evidence on study design quality and statistical analyses used in selected leading Indian medical journals. Ten (10) leading Indian medical journals were selected based on impact factors and all original research articles published in 2003 (N = 588) and 2013 (N = 774) were categorized and reviewed. A validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation of the articles. Main outcomes considered in the present study were – study design types and their frequencies, error/defects proportion in study design, statistical analyses, and implementation of CONSORT checklist in RCT (randomized clinical trials). From 2003 to 2013: The proportion of erroneous statistical analyses did not decrease (χ2=0.592, Φ=0.027, p=0.4418), 25% (80/320) in 2003 compared to 22.6% (111/490) in 2013. Compared with 2003, significant improvement was seen in 2013; the proportion of papers using statistical tests increased significantly (χ2=26.96, Φ=0.16, p<0.0001) from 42.5% (250/588) to 56.7% (439/774). The overall proportion of errors in study design decreased significantly (χ2=16.783, Φ=0.12, p<0.0001), 41.3% (243/588) compared to 30.6% (237/774). In 2013, the proportion of randomized clinical trial designs remained very low (7.3%, 43/588), with the majority showing some errors (41 papers, 95.3%). The majority of the published studies were retrospective in nature both in 2003 [79.1% (465/588)] and in 2013 [78.2% (605/774)]. Major decreases in error proportions were observed in both results presentation (χ2=24.477, Φ=0.17, p<0.0001), 82.2% (263/320) compared to 66.3% (325/490), and interpretation (χ2=25.616, Φ=0.173, p<0.0001), 32.5% (104/320) compared to 17.1% (84/490), though some serious ones were still present. Indian medical research seems to have made no major progress regarding using correct statistical analyses, but errors/defects in study designs have decreased significantly. Randomized clinical trials are quite rarely published and have a high proportion of methodological problems. PMID:25856194
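One of the reported chi-square statistics can be checked directly from the counts given; a sketch using scipy (small differences from the published value reflect rounding or continuity-correction choices):

```python
# Quick check of one reported test: the rise in papers using statistical
# tests, 250/588 (2003) vs 439/774 (2013), as a 2x2 chi-square.
from scipy.stats import chi2_contingency

table = [[250, 588 - 250],   # 2003: used tests vs did not
         [439, 774 - 439]]   # 2013
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.2e}")  # ~26.9, p < 0.0001
```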
Kinetic energy budgets in areas of intense convection
NASA Technical Reports Server (NTRS)
Fuelberg, H. E.; Berecek, E. M.; Ebel, D. M.; Jedlovec, G. J.
1980-01-01
A kinetic energy budget analysis of the AVE-SESAME 1 period which coincided with the deadly Red River Valley tornado outbreak is presented. Horizontal flux convergence was found to be the major kinetic energy source to the region, while cross contour destruction was the major sink. Kinetic energy transformations were dominated by processes related to strong jet intrusion into the severe storm area. A kinetic energy budget of the AVE 6 period also is presented. The effects of inherent rawinsonde data errors on widely used basic kinematic parameters, including velocity divergence, vorticity advection, and kinematic vertical motion are described. In addition, an error analysis was performed in terms of the kinetic energy budget equation. Results obtained from downward integration of the continuity equation to obtain kinematic values of vertical motion are described. This alternate procedure shows promising results in severe storm situations.
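The kinematic vertical motions referred to here come from integrating the continuity equation in pressure coordinates, where d(omega)/dp = -div(V); a minimal sketch with an invented divergence profile:

```python
# Sketch of the kinematic method: integrate the continuity equation in
# pressure coordinates downward from the top. Divergence values are invented.
import numpy as np

p   = np.array([200., 300., 500., 700., 850., 1000.]) * 100  # Pa, top -> surface
div = np.array([ 2.0,  1.0, -0.5, -1.5, -2.0,  -1.0]) * 1e-5  # s^-1

omega = np.zeros_like(p)                    # assume omega = 0 at the top level
for k in range(1, len(p)):
    dp = p[k] - p[k-1]
    layer_div = 0.5 * (div[k] + div[k-1])   # trapezoidal layer mean
    omega[k] = omega[k-1] - layer_div * dp  # Pa/s; negative = ascent
print(omega)
```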
NASA Astrophysics Data System (ADS)
Chen, Shanyong; Li, Shengyi; Wang, Guilin
2014-11-01
The wavefront error of large telescopes must be measured to check system quality and to estimate the misalignment of the telescope optics, including the primary, the secondary and so on. This is usually done with a focal-plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, it is challenging for meter-class telescopes due to the high cost and technological difficulty of producing a large ACF. Subaperture testing with a smaller ACF is hence proposed, in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF, and measurement noise. Different error sources have different impacts on the wavefront error. Basically, the surface error of the ACF behaves like a systematic error, and its astigmatism will accumulate and be enlarged if the azimuth of the subapertures remains fixed. It is difficult to calibrate the ACF accurately because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope during the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane; lateral positioning error of the subapertures therefore has no impact on the stitched wavefront. In contrast, angular positioning error changes the azimuth of the ACF and hence the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which departs from the usual evenly spaced practice. Finally, measurement noise can never be corrected, but it can be suppressed by averaging and environmental control. We simulate the performance of the stitching algorithm in dealing with the surface error and misalignment of the ACF and with noise suppression, which provides guidelines for the optomechanical design of the stitching test system.
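The overlap-consistency correction at the heart of such stitching can be illustrated by least-squares fitting a piston/tip/tilt plane to the difference of two maps over their overlap; a simplified two-subaperture sketch (the paper's algorithm additionally separates the ACF surface error):

```python
# Simplified two-subaperture stitching step: recover the piston/tip/tilt of a
# second wavefront map from its overlap with a reference map.
import numpy as np

ny, nx = 50, 80
yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
truth = 1e-3 * (xx - 40.0)**2 / 80.0        # shared underlying wavefront

A = truth[:, :50]                           # reference subaperture, columns 0-49
# Second subaperture (columns 30-79) measured with unknown piston/tip/tilt:
B = truth[:, 30:] + 0.2 + 0.01 * xx[:, 30:] + 0.005 * yy[:, 30:]

# Fit a plane to the difference over the overlap (columns 30-49).
diff = (B[:, :20] - A[:, 30:50]).ravel()
X = np.column_stack([np.ones(diff.size),
                     xx[:, 30:50].ravel(), yy[:, 30:50].ravel()])
coef, *_ = np.linalg.lstsq(X, diff, rcond=None)   # recovers piston/tip/tilt

B_corrected = B - (coef[0] + coef[1] * xx[:, 30:] + coef[2] * yy[:, 30:])
print(np.abs(B_corrected - truth[:, 30:]).max())  # ~0 after correction
```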
Nearby Exo-Earth Astrometric Telescope (NEAT)
NASA Technical Reports Server (NTRS)
Shao, M.; Nemati, B.; Zhai, C.; Goullioud, R.
2011-01-01
NEAT (Nearby Exo-Earth Astrometric Telescope) is a modest-sized (1 m diameter) telescope concept. It will be capable of searching approximately 100 nearby stars down to 1 M_Earth planets in the habitable zone, and 200 stars down to 5 M_Earth at 1 AU. The concept addresses the major issues for ultra-precise astrometry: (1) photon noise (0.5 deg diameter field of view); (2) optical errors (beam walk), with a long focal length telescope; (3) focal plane errors, with laser metrology of the focal plane; (4) PSF centroiding errors, with measurement of the "true" PSF instead of a "guess" of the true PSF, and correction for intra-pixel QE non-uniformities. The technology is close to complete: focal plane geometry to 2e-5 pixels and centroiding to approximately 4e-5 pixels.
NASA Technical Reports Server (NTRS)
Brown, G. S.; Curry, W. J.
1977-01-01
The statistical error of the pointing angle estimation technique is determined as a function of the effective receiver signal-to-noise ratio. Other sources of error are addressed and evaluated, with inadequate calibration being of major concern. The impact of pointing error on the computation of the normalized surface scattering cross section (sigma) from radar data and on the waveform attitude-induced altitude bias is considered, and quantitative results are presented. Pointing angle and sigma processing algorithms are presented along with some initial data. The intensive-mode clean vs. clutter AGC calibration problem is analytically resolved. The use of clutter AGC data in the intensive mode is confirmed as the correct calibration set for the sigma computations.
Modified Redundancy based Technique—a New Approach to Combat Error Propagation Effect of AES
NASA Astrophysics Data System (ADS)
Sarkar, B.; Bhunia, C. T.; Maulik, U.
2012-06-01
Advanced encryption standard (AES) is a great research challenge. It was developed to replace the data encryption standard (DES). AES suffers from a major limitation: the error propagation effect. To tackle this limitation, two methods are available. One is the redundancy-based technique and the other is the bit-based parity technique. The first has a significant advantage over the second in that it can correct any error deterministically, but at the cost of a higher level of overhead and hence a lower processing speed. In this paper, a new approach based on the redundancy-based technique is proposed that speeds up the process of reliable encryption and hence secured communication.
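As one concrete, if simplistic, instance of a redundancy-based technique (not necessarily the paper's variant), each ciphertext block can be sent three times and recovered by a bitwise majority vote, so a single corrupted copy cannot propagate into decryption:

```python
# Illustrative redundancy scheme (an assumption, not the paper's method):
# triplicate each ciphertext block and majority-vote each bit on receipt.
def majority_vote(copies: list[bytes]) -> bytes:
    out = bytearray()
    for triple in zip(*copies):            # corresponding bytes of each copy
        byte = 0
        for bit in range(8):
            ones = sum((c >> bit) & 1 for c in triple)
            byte |= (ones >= 2) << bit     # keep the bit two of three agree on
        out.append(byte)
    return bytes(out)

sent = b"\x3a\x7f\x10"
received = [sent, b"\x3a\x7f\x10", b"\x3a\x6f\x10"]  # one copy corrupted
assert majority_vote(received) == sent
```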
From unseen to seen: tackling the global burden of uncorrected refractive errors.
Durr, Nicholas J; Dave, Shivang R; Lage, Eduardo; Marcos, Susana; Thorn, Frank; Lim, Daryl
2014-07-11
Worldwide, more than one billion people suffer from poor vision because they do not have the eyeglasses they need. Their uncorrected refractive errors are a major cause of global disability and drastically reduce productivity, educational opportunities, and overall quality of life. The problem persists most prevalently in low-resource settings, even though prescription eyeglasses serve as a simple, effective, and largely affordable solution. In this review, we discuss barriers to obtaining, and approaches for providing, refractive eye care. We also highlight emerging technologies that are being developed to increase the accessibility of eye care. Finally, we describe opportunities that exist for engineers to develop new solutions to positively impact the diagnosis and treatment of correctable refractive errors in low-resource settings.
Accounting for substitution and spatial heterogeneity in a labelled choice experiment.
Lizin, S; Brouwer, R; Liekens, I; Broeckx, S
2016-10-01
Many environmental valuation studies using stated preferences techniques are single-site studies that ignore essential spatial aspects, including possible substitution effects. In this paper substitution effects are captured explicitly in the design of a labelled choice experiment and the inclusion of different distance variables in the choice model specification. We test the effect of spatial heterogeneity on welfare estimates and transfer errors for minor and major river restoration works, and the transferability of river specific utility functions, accounting for key variables such as site visitation, spatial clustering and income. River specific utility functions appear to be transferable, resulting in low transfer errors. However, ignoring spatial heterogeneity increases transfer errors. Copyright © 2016 Elsevier Ltd. All rights reserved.
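The transfer errors referred to are typically computed as the relative gap between a transferred and a locally estimated welfare measure; a sketch with invented values:

```python
# Sketch of the usual transfer-error metric in benefit transfer studies.
# The willingness-to-pay values below are invented for illustration.
wtp_transferred = 42.0   # welfare estimate transferred from another river
wtp_local       = 48.5   # estimate from the site's own utility function
transfer_error = abs(wtp_transferred - wtp_local) / wtp_local
print(f"transfer error: {transfer_error:.1%}")  # 13.4%
```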
Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory
2015-01-01
Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for diabetes patients. Therefore, it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on data from a diabetes clinical trial. A zero-inflated Poisson (ZIP) model and a zero-inflated negative binomial (ZINB) model were also evaluated. Simulation results showed that the Poisson model inflated type I error, while the negative binomial model was overly conservative. However, after adjusting for dispersion, both Poisson and negative binomial models yielded slightly inflated type I errors, which were close to the nominal level, and reasonable power. Reasonable control of type I error was associated with the ANCOVA model. The rank ANCOVA model was associated with the greatest power and with reasonable control of type I error. Inflated type I error was observed with the ZIP and ZINB models.
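The models being compared can be fit in a few lines; a hedged sketch using statsmodels on simulated overdispersed counts (parameters invented, and the paper's bootstrap loop omitted):

```python
# Sketch: fit Poisson and negative binomial regressions to simulated
# overdispersed hypoglycemia counts; all parameters are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
treat = rng.integers(0, 2, n)                    # treatment arm indicator
mu = np.exp(0.5 + 0.2 * treat)                   # mean event count per patient
counts = rng.negative_binomial(n=2, p=2/(2+mu))  # overdispersed outcome
X = sm.add_constant(treat.astype(float))

poisson = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
negbin  = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(poisson.params, poisson.bse)  # Poisson SEs too small under overdispersion
print(negbin.params, negbin.bse)
```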
Asmelashe Gelayee, Dessalegn; Binega Mekonnen, Gashaw
2017-01-01
Dispensing errors are inevitable occurrences in community pharmacies across the world. This study aimed to identify the community pharmacists' perception towards dispensing errors in the community pharmacies in Gondar town, Northwest Ethiopia. A cross-sectional study was conducted among 47 community pharmacists selected through convenience sampling. Data were analyzed using SPSS version 20. Descriptive statistics, Mann-Whitney U test, and Pearson's Chi-square test of independence were conducted with P ≤ 0.05 considered statistically significant. The majority of respondents were in the 23-28-year age group (N = 26, 55.3%) and with at least a B.Pharm degree (N = 25, 53.2%). Poor prescription handwriting and similar/confusing names were perceived to be the main contributing factors, while all the strategies and types of dispensing errors were highly acknowledged by the respondents. Group differences (P < 0.05) in opinions were largely due to educational level and age. Dispensing errors were associated with prescribing quality and design of the dispensary as well as dispensing procedures. Opinion differences relate to age and educational status of the respondents. PMID:28612023
Fault and Error Latency Under Real Workload: an Experimental Study. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chillarege, Ram
1986-01-01
A practical methodology for the study of fault and error latency is demonstrated under a real workload. This is the first study that measures and quantifies the latency under real workload and fills a major gap in the current understanding of workload-failure relationships. The methodology is based on low level data gathered on a VAX 11/780 during the normal workload conditions of the installation. Fault occurrence is simulated on the data, and the error generation and discovery process is reconstructed to determine latency. The analysis proceeds to combine the low level activity data with high level machine performance data to yield a better understanding of the phenomena. A strong relationship exists between latency and workload and that relationship is quantified. The sampling and reconstruction techniques used are also validated. Error latency in the memory where the operating system resides was studied using data on the physical memory access. Fault latency in the paged section of memory was determined using data from physical memory scans. Error latency in the microcontrol store was studied using data on the microcode access and usage.
Unintentional comedy - errors in movies and educational material - as a teaching tool
NASA Astrophysics Data System (ADS)
Stein, S.; Elling, R. P.; Salaree, A.; Wysession, M. E.
2017-12-01
Geoscientists have long enjoyed pointing out scientific boo-boos in movies and other media. A saving grace of such errors is that they can be very useful in classes. Showing a video clip or image and having students identify and assess the errors is both entertaining and educational. Some boo-boos are obvious, such as a volcano erupting under Los Angeles. Others give students opportunities for thought and sometimes calculations. Is it possible to have the magnitude 10.5 earthquake that gave a 2004 NBC miniseries its name? What's wrong with the explanation of how seismic waves move from one medium to another given in the geology lecture in "The Core?" What are three major errors in an often-shown animation of tsunami generation (below; https://serc.carleton.edu/NAGTWorkshops/hazards/visualizations/tsunami.html)? Discussing such questions develops critical thinking and healthy skepticism. It also opens the topic of the boundary between simplifications needed for dramatic purposes and substantive errors. It would make sense for instructors who use boo-boos in class to come up with a way of sharing examples and experiences.
Mohamed Saini, Suriati; Muhamad Radzi, Azizah; Abdul Rahman, Abdul Hamid
2012-06-01
The serotonin transporter promoter (5-HTTLPR) is a potential susceptibility locus in the pathogenesis of major depressive disorder. However, data from Malaysia are lacking. The present study aimed to determine the association between the homozygous short variant of the serotonin transporter promoter gene (5-HTTLPR) and major depressive disorder. This is a candidate-gene case-control association study. The sample consists of 55 major depressive disorder probands and 66 controls. They were of Malaysian descent and were unrelated. The Axis I diagnosis was determined using the Mini International Neuropsychiatric Interview (M.I.N.I.). The control group comprised healthy volunteers without a personal psychiatric history or family history of mood disorders. Participants' blood was sent to the Institute for Medical Research for genotyping. The present study failed to detect an association between the 5-HTTLPR ss genotype and major depressive disorder (χ2 = 3.67, d.f. = 1, P = 0.055, odds ratio 0.25, 95% confidence interval = 0.07-1.94). Sub-analysis revealed that the frequency of the l allele in healthy controls (78.0%) was higher than in Caucasian and East Asian populations. However, in view of the small sample size, this study may be prone to type II error (and type I error). This preliminary study suggests that the homozygous short variant of the 5-HTTLPR does not appear to be a risk factor for increased susceptibility to major depressive disorder. Copyright © 2012 Blackwell Publishing Asia Pty Ltd.
Systematic errors of EIT systems determined by easily-scalable resistive phantoms.
Hahn, G; Just, A; Dittmar, J; Hellige, G
2008-06-01
We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.
Daverio, Marco; Fino, Giuliana; Luca, Brugnaro; Zaggia, Cristina; Pettenazzo, Andrea; Parpaiola, Antonella; Lago, Paola; Amigoni, Angela
2015-12-01
Errors are estimated to occur with an incidence of 3.7-16.6% in hospitalized patients. The application of systems for the detection of adverse events is becoming a widespread reality in healthcare. Incident reporting (IR) and failure mode and effects analysis (FMEA) are strategies widely used to detect errors, but no studies have combined them in the setting of a pediatric intensive care unit (PICU). The aim of our study was to describe the trend of IR in a PICU and evaluate the effect of FMEA application on the number and severity of the errors detected. In this prospective observational study, we evaluated the frequency of incident reports documented on standard IR forms completed from January 2009 to December 2012 in the PICU of the Woman's and Child's Health Department of Padova. On the basis of their severity, errors were classified as: without outcome (55%), with minor outcome (16%), with moderate outcome (10%), and with major outcome (3%); 16% of reported incidents were 'near misses'. We compared the data before and after the introduction of FMEA. Sixty-nine errors were registered, 59 (86%) concerning drug therapy (83% during prescription). Compared to 2009-2010, in 2011-2012 we noted an increase in reported errors (43 vs 26) with a reduction in their severity (21% vs 8% 'near misses' and 65% vs 38% errors with no outcome). With the introduction of FMEA, we obtained increased awareness in error reporting. Application of these systems will improve the quality of healthcare services. © 2015 John Wiley & Sons Ltd.
Target Uncertainty Mediates Sensorimotor Error Correction.
Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M
2017-01-01
Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects' scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one's response. By suggesting that subjects' decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated. PMID:28129323
Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich
2011-12-01
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.
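The sampling-error problem is easy to demonstrate by simulation: with few L1 units per group, the observed group mean is a noisy stand-in for the latent group construct and attenuates the estimated contextual effect; a minimal sketch with invented parameters:

```python
# Simulation sketch of sampling error in aggregated L2 predictors: the
# observed group mean attenuates the contextual effect. Values are invented.
import numpy as np

rng = np.random.default_rng(4)
G, n = 100, 5                                    # groups, persons per group
latent = rng.normal(size=G)                      # true group-level construct
x = latent[:, None] + rng.normal(size=(G, n))    # L1 scores, measurement noise
y = 0.5 * latent + rng.normal(scale=0.3, size=G) # outcome; true effect is 0.5

obs_mean = x.mean(axis=1)                        # aggregated L2 predictor
slope = np.polyfit(obs_mean, y, 1)[0]
print(f"estimated contextual effect: {slope:.2f} (true 0.5)")  # biased toward 0
```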
Sobel, Michael E; Lindquist, Martin A
2014-07-01
Functional magnetic resonance imaging (fMRI) has facilitated major advances in understanding human brain function. Neuroscientists are interested in using fMRI to study the effects of external stimuli on brain activity and causal relationships among brain regions, but have not stated what is meant by causation or defined the effects they purport to estimate. Building on Rubin's causal model, we construct a framework for causal inference using blood oxygenation level dependent (BOLD) fMRI time series data. In the usual statistical literature on causal inference, potential outcomes, assumed to be measured without systematic error, are used to define unit and average causal effects. However, in general the potential BOLD responses are measured with stimulus-dependent systematic error. Thus we define unit and average causal effects that are free of systematic error. In contrast to the usual case of a randomized experiment, where adjustment for intermediate outcomes leads to biased estimates of treatment effects (Rosenbaum, 1984), here the failure to adjust for task-dependent systematic error leads to biased estimates. We therefore adjust for systematic error using measured "noise covariates", fitting a linear mixed model to estimate the effects and the systematic error. Our results are important for neuroscientists, who typically do not adjust for systematic error. They should also prove useful to researchers in other areas where responses are measured with error and in fields where large amounts of data are collected on relatively few subjects. To illustrate our approach, we re-analyze data from a social evaluative threat task, comparing the findings with results that ignore systematic error.
Physical Validation of TRMM TMI and PR Monthly Rain Products Over Oklahoma
NASA Technical Reports Server (NTRS)
Fisher, Brad L.
2004-01-01
The Tropical Rainfall Measuring Mission (TRMM) provides monthly rainfall estimates using data collected by the TRMM satellite. These estimates cover a substantial fraction of the earth's surface. The physical validation of TRMM estimates involves corroborating the accuracy of spaceborne estimates of areal rainfall by inferring errors and biases from ground-based rain estimates. The TRMM error budget consists of two major sources of error: retrieval and sampling. Sampling errors are intrinsic to the process of estimating monthly rainfall and occur because the satellite extrapolates monthly rainfall from a small subset of measurements collected only during satellite overpasses. Retrieval errors, on the other hand, are related to the process of collecting measurements while the satellite is overhead. One of the big challenges confronting the TRMM validation effort is how best to estimate these two main components of the TRMM error budget, which are not easily decoupled. This four-year study computed bulk sampling and retrieval errors for the TRMM microwave imager (TMI) and the precipitation radar (PR) by applying a technique that sub-samples gauge data at TRMM overpass times. Gridded monthly rain estimates are then computed from the monthly bulk statistics of the collected samples, providing a sensor-dependent gauge rain estimate that is assumed to include a TRMM-equivalent sampling error. The sub-sampled gauge rain estimates are then used in conjunction with the monthly satellite and gauge (without sub-sampling) estimates to decouple retrieval and sampling errors. The computed mean sampling errors for the TMI and PR were 5.9% and 7.7%, respectively, in good agreement with theoretical predictions. The PR year-to-year retrieval biases exceeded corresponding TMI biases, but it was found that these differences were partially due to negative TMI biases during cold months and positive TMI biases during warm months.
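The sub-sampling technique can be sketched as estimating a monthly total from gauge values seen only at overpass times, so that the gauge estimate absorbs a TRMM-like sampling error; the rain series and overpass schedule below are invented:

```python
# Sketch of the sub-sampling idea: a monthly total estimated only from gauge
# readings at overpass times carries a satellite-like sampling error.
import numpy as np

rng = np.random.default_rng(5)
hourly_rain = rng.gamma(shape=0.1, scale=2.0, size=30 * 24)  # one month (mm/h)
overpass = np.arange(0, hourly_rain.size, 16)                # ~1.5 visits/day

true_total = hourly_rain.sum()
sampled_total = hourly_rain[overpass].mean() * hourly_rain.size
sampling_error = (sampled_total - true_total) / true_total
print(f"sampling error: {sampling_error:+.1%}")
```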
Embedded Model Error Representation and Propagation in Climate Models
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.
2017-12-01
Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In particular, e.g. in climate models, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than applied as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Besides, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.
Hill, B.R.; DeCarlo, E.H.; Fuller, C.C.; Wong, M.F.
1998-01-01
Reliable estimates of sediment-budget errors are important for interpreting sediment-budget results. Sediment-budget errors are commonly considered equal to sediment-budget imbalances, which may underestimate actual sediment-budget errors if they include compensating positive and negative errors. We modified the sediment 'fingerprinting' approach to qualitatively evaluate compensating errors in an annual (1991) fine (<63 μm) sediment budget for the North Halawa Valley, a mountainous, forested drainage basin on the island of Oahu, Hawaii, during construction of a major highway. We measured concentrations of aeolian quartz and 137Cs in sediment sources and fluvial sediments, and combined concentrations of these aerosols with the sediment budget to construct aerosol budgets. Aerosol concentrations were independent of the sediment budget, hence aerosol budgets were less likely than sediment budgets to include compensating errors. Differences between sediment-budget and aerosol-budget imbalances therefore provide a measure of compensating errors in the sediment budget. The sediment-budget imbalance equalled 25% of the fluvial fine-sediment load. Aerosol-budget imbalances were equal to 19% of the fluvial 137Cs load and 34% of the fluvial quartz load. The reasonably close agreement between sediment- and aerosol-budget imbalances indicates that compensating errors in the sediment budget were not large and that the sediment-budget imbalance is a reliable measure of sediment-budget error. We attribute at least one-third of the 1991 fluvial fine-sediment load to highway construction. Continued monitoring indicated that highway construction produced 90% of the fluvial fine-sediment load during 1992. Erosion of channel margins and attrition of coarse particles provided most of the fine sediment produced by natural processes. Hillslope processes contributed relatively minor amounts of sediment.
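[Editor's illustration] A back-of-the-envelope sketch of the budget-imbalance arithmetic; the source loads below are invented placeholders, not the North Halawa Valley data:

```python
# Hypothetical annual loads (tonnes of fine sediment); the same arithmetic is
# applied to each aerosol tracer by substituting its measured loads.
sources = {"hillslopes": 120.0, "channel_margins": 310.0,
           "coarse_attrition": 260.0, "highway_construction": 340.0}
fluvial_load = 820.0                      # fine sediment leaving the basin

imbalance = sum(sources.values()) - fluvial_load
imbalance_pct = 100.0 * imbalance / fluvial_load
print(f"budget imbalance: {imbalance_pct:+.0f}% of the fluvial load")
# If sediment- and tracer-budget imbalances agree (e.g. 25% vs 19% and 34%),
# large compensating errors in the sediment budget are unlikely.
```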
Dimensional control of die castings
NASA Astrophysics Data System (ADS)
Karve, Aniruddha Ajit
The demand for net shape die castings, which require little or no machining, is steadily increasing. Stringent customer requirements are forcing die casters to deliver high quality castings in increasingly short lead times. Dimensional conformance to customer specifications is an inherent part of die casting quality. The dimensional attributes of a die casting are essentially dependent upon many factors, the quality of the die and the degree of control over the process variables being the two major sources of dimensional error in die castings. This study focused on investigating the nature and the causes of dimensional error in die castings. The two major components of dimensional error, i.e., dimensional variability and die allowance, were studied. The major effort of this study was to qualitatively and quantitatively study the effects of casting geometry and process variables on die casting dimensional variability and die allowance. This was accomplished by detailed dimensional data collection at production die casting sites. Robust feature characterization schemes were developed to describe complex casting geometry in quantitative terms. Empirical modeling was utilized to quantify the effects of the casting variables on dimensional variability and die allowance for die casting features. A number of casting geometry and process variables were found to affect dimensional variability in die castings. The dimensional variability was evaluated by comparisons with current published dimensional tolerance standards. The casting geometry was found to play a significant role in influencing the die allowance of the features measured. The predictive models developed for dimensional variability and die allowance were evaluated to test their effectiveness. Finally, the relative impact of all the components of dimensional error in die castings was put into perspective, and general guidelines for effective dimensional control in the die casting plant were laid out. The results of this study will contribute to enhancement of dimensional quality and lead time compression in the die casting industry, thus making it competitive with other net shape manufacturing processes.
Trends in software reliability for digital flight control
NASA Technical Reports Server (NTRS)
Hecht, H.; Hecht, M.
1983-01-01
Software error data from major recent digital flight control system development programs are presented. The report summarizes the data, compares them with similar data from previous surveys, and identifies trends and disciplines to improve software reliability.
Quotation accuracy in medical journal articles—a systematic review and meta-analysis
Jergas, Hannah
2015-01-01
Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose—quotation errors—may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened, we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9% (95% CI [8.4, 16.6]), 11.5% (95% CI [8.3, 15.7]), and 25.4% (95% CI [19.5, 32.4]). While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress. PMID:26528420
Arul, Pitchaikaran; Pushparaj, Magesh; Pandian, Kanmani; Chennimalai, Lingasamy; Rajendran, Karthika; Selvaraj, Eniya; Masilamani, Suresh
2018-01-01
An important component of laboratory medicine is the preanalytical phase. Since the laboratory report plays a major role in patient management, more importance should be given to the quality of laboratory tests. The present study was undertaken to find the prevalence and types of preanalytical errors at a tertiary care hospital in South India. In this cross-sectional study, a total of 118,732 samples (62,474 from the outpatient department [OPD] and 56,258 from the inpatient department [IPD]) were received in the hematology laboratory. These samples were analyzed for preanalytical errors such as misidentification, incorrect vials, inadequate samples, clotted samples, diluted samples, and hemolyzed samples. Preanalytical errors were found in 513 samples, i.e., 0.43% of the total number of samples received. The most common preanalytical error observed was inadequate samples, followed by clotted samples. The overall frequencies (OPD and IPD combined) of preanalytical errors such as misidentification, incorrect vials, inadequate samples, clotted samples, diluted samples, and hemolyzed samples were 0.02%, 0.05%, 0.2%, 0.12%, 0.02%, and 0.03%, respectively. The present study concluded that incorrect phlebotomy technique, due to lack of awareness, is the main reason for preanalytical errors. This can be avoided by proper communication and coordination between the laboratory and wards, proper training and continuing medical education programs for laboratory and paramedical staff, and knowledge of the intervening factors that can influence laboratory results.
Error analysis of speed of sound reconstruction in ultrasound limited angle transmission tomography.
Jintamethasawat, Rungroj; Lee, Won-Mean; Carson, Paul L; Hooi, Fong Ming; Fowlkes, J Brian; Goodsitt, Mitchell M; Sampson, Richard; Wenisch, Thomas F; Wei, Siyuan; Zhou, Jian; Chakrabarti, Chaitali; Kripfgans, Oliver D
2018-04-07
We have investigated limited angle transmission tomography to estimate speed of sound (SOS) distributions for breast cancer detection. That requires both accurate delineations of major tissues, in this case by segmentation of prior B-mode images, and calibration of the relative positions of the opposed transducers. Experimental sensitivity evaluation of the reconstructions with respect to segmentation and calibration errors is difficult with our current system. Therefore, parametric studies of SOS errors in our bent-ray reconstructions were simulated. They included mis-segmentation of an object of interest or a nearby object, and miscalibration of relative transducer positions in 3D. Close correspondence of reconstruction accuracy was verified in the simplest case, a cylindrical object in homogeneous background with induced segmentation and calibration inaccuracies. Simulated mis-segmentation in object size and lateral location produced maximum SOS errors of 6.3% within 10 mm diameter change and 9.1% within 5 mm shift, respectively. Modest errors in assumed transducer separation produced the maximum SOS error from miscalibrations (57.3% within 5 mm shift), still, correction of this type of error can easily be achieved in the clinic. This study should aid in designing adequate transducer mounts and calibration procedures, and in specification of B-mode image quality and segmentation algorithms for limited angle transmission tomography relying on ray tracing algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.
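[Editor's illustration] For intuition, a first-order straight-ray sketch of how a transducer-separation miscalibration maps into SOS error. The paper's bent-ray, limited-angle simulations report far larger sensitivity (57.3% within a 5 mm shift); the simple ratio below captures only the leading-order geometric effect, with illustrative values:

```python
# Straight-ray toy: if the assumed transducer separation is wrong, the
# reconstructed SOS scales with the separation error.
c_true = 1540.0        # m/s, soft-tissue speed of sound (assumed)
d_true = 0.10          # m, actual transducer separation (assumed)
shift = 0.005          # m, 5 mm miscalibration of the separation

t = d_true / c_true                   # measured time of flight
c_est = (d_true + shift) / t          # SOS inferred with the wrong geometry
err = 100.0 * (c_est - c_true) / c_true
print(f"SOS error from a {1e3 * shift:.0f} mm separation miscalibration: {err:.1f}%")
```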
Applying lessons learned to enhance human performance and reduce human error for ISS operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, W.R.
1999-01-01
A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) is developing a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper will describe previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS. © 1999 American Institute of Physics.
Review of Pre-Analytical Errors in Oral Glucose Tolerance Testing in a Tertiary Care Hospital.
Nanda, Rachita; Patel, Suprava; Sahoo, Sibashish; Mohapatra, Eli
2018-03-13
The pre-pre-analytical and pre-analytical phases account for a major share of the errors in a laboratory. The process considered here is a very common procedure, the oral glucose tolerance test, used to identify pre-pre-analytical errors. Quality indicators provide evidence of quality, support accountability and help in the decision making of laboratory personnel. The aim of this research was to evaluate the pre-analytical performance of the oral glucose tolerance test procedure. An observational study was conducted over a period of three months in the phlebotomy and accessioning unit of our laboratory, using a questionnaire that examined the pre-pre-analytical errors through a scoring system. The pre-analytical phase was analyzed for each sample collected as per seven quality indicators. About 25% of the population gave a wrong answer to the question that tested their knowledge of patient preparation. The indicator for appropriateness of the test result (QI-1) had the most errors. Although QI-5, for sample collection, had a low error rate, it is a very important indicator, as any wrongly collected sample can alter the test result. Evaluating the pre-analytical and pre-pre-analytical phases is essential and must be conducted routinely on a yearly basis to identify errors, take corrective action and facilitate their gradual introduction into routine practice.
Baldwin, DeWitt C; Daugherty, Steven R; Ryan, Patrick M; Yaghmour, Nicholas A; Philibert, Ingrid
2018-04-01
Medical errors and patient safety are major concerns for the medical and medical education communities. Improving clinical supervision for residents is important in avoiding errors, yet little is known about how residents perceive the adequacy of their supervision and how this relates to medical errors and other education outcomes, such as learning and satisfaction. We analyzed data from a 2009 survey of residents in 4 large specialties regarding the adequacy and quality of supervision they receive as well as associations with self-reported data on medical errors and residents' perceptions of their learning environment. Residents' reports of working without adequate supervision were lower than data from a 1999 survey for all 4 specialties, and residents were least likely to rate "lack of supervision" as a problem. While few residents reported that they received inadequate supervision, problems with supervision were negatively correlated with sufficient time for clinical activities, overall ratings of the residency experience, and attending physicians as a source of learning. Problems with supervision were positively correlated with resident reports that they had made a significant medical error, had been belittled or humiliated, or had observed others falsifying medical records. Although working without supervision was not a pervasive problem in 2009, when it happened, it appeared to have negative consequences. The association between inadequate supervision and medical errors is of particular concern.
Comparison of direct and heterodyne detection optical intersatellite communication links
NASA Technical Reports Server (NTRS)
Chen, C. C.; Gardner, C. S.
1987-01-01
The performance of direct and heterodyne detection optical intersatellite communication links are evaluated and compared. It is shown that the performance of optical links is very sensitive to the pointing and tracking errors at the transmitter and receiver. In the presence of random pointing and tracking errors, optimal antenna gains exist that will minimize the required transmitter power. In addition to limiting the antenna gains, random pointing and tracking errors also impose a power penalty in the link budget. This power penalty is between 1.6 and 3 dB for a direct detection QPPM link, and 3 to 5 dB for a heterodyne QFSK system. For the heterodyne systems, the carrier phase noise presents another major factor of performance degradation that must be considered. In contrast, the loss due to synchronization error is small. The link budgets for direct and heterodyne detection systems are evaluated. It is shown that, for systems with large pointing and tracking errors, the link budget is dominated by the spatial tracking error, and the direct detection system shows a superior performance because it is less sensitive to the spatial tracking error. On the other hand, for systems with small pointing and tracking jitters, the antenna gains are in general limited by the launch cost, and suboptimal antenna gains are often used in practice. In that case, the heterodyne system has a slightly higher power margin because of higher receiver sensitivity.
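[Editor's illustration] A hedged sketch of the link-budget bookkeeping described above; every numerical entry (gains, losses, receiver sensitivity) is an illustrative assumption, with the pointing/tracking jitter entered as one dB line item of the size the abstract discusses:

```python
# Toy link-budget tally in dB.
def required_tx_power_dbm(rx_sensitivity_dbm, gains_db, losses_db):
    return rx_sensitivity_dbm - sum(gains_db) + sum(losses_db)

tx_gain = rx_gain = 106.0        # telescope gains for ~10 cm apertures, dB
space_loss = 290.0               # free-space loss, GEO-range optical link, dB
pointing_penalty = 3.0           # random pointing/tracking error penalty, dB
optics_loss = 4.0                # transmit + receive optics, dB

p = required_tx_power_dbm(
    rx_sensitivity_dbm=-55.0,    # assumed receiver sensitivity
    gains_db=[tx_gain, rx_gain],
    losses_db=[space_loss, pointing_penalty, optics_loss],
)
print(f"required transmitter power: {p:.1f} dBm ({10 ** (p / 10):.0f} mW)")
```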
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passarge, M; Fix, M K; Manser, P
Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error source. J. V. Siebers receives funding support from Varian Medical Systems.
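[Editor's illustration] A minimal sketch of a Swiss-cheese-style sequential check chain; the three check functions and their tolerances are simplified stand-ins for the paper's masking, output-normalization, alignment, gamma, and pixel-deviation layers, and the frames are synthetic:

```python
import numpy as np

def infield_mask_ok(meas, ref, tol=0.02):
    # No out-of-field radiation: signal outside the reference field stays low.
    out = ref < 0.01 * ref.max()
    return meas[out].mean() <= tol * ref.max()

def output_ok(meas, ref, tol=0.03):
    # Output normalization: integral signal within tolerance of the reference.
    return abs(meas.sum() / ref.sum() - 1.0) <= tol

def pixel_deviation_ok(meas, ref, tol=0.05):
    return np.abs(meas - ref).max() <= tol * ref.max()

CHECKS = [("field mask", infield_mask_ok),
          ("output", output_ok),
          ("pixel deviation", pixel_deviation_ok)]

def qa_frame(meas, ref):
    # First failing layer flags the error AND hints at its source.
    for name, check in CHECKS:
        if not check(meas, ref):
            return f"ERROR detected by '{name}' layer"
    return "frame OK"

ref = np.zeros((64, 64)); ref[16:48, 16:48] = 1.0   # reference EPID frame
meas = ref * 1.10                                    # simulated 10% output error
print(qa_frame(meas, ref))
```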
Mino-León, Dolores; Reyes-Morales, Hortensia; Jasso, Luis; Douvoba, Svetlana Vladislavovna
2012-06-01
Inappropriate prescription is a relevant problem in primary health care settings in Mexico, with potentially harmful consequences for patients. To evaluate the effectiveness of incorporating a pharmacist into the primary care health team to reduce prescription errors for patients with diabetes and/or hypertension. One Family Medicine Clinic from the Mexican Institute of Social Security in Mexico City. A "pharmacotherapy intervention" provided by pharmacists through a quasi-experimental (before-after) design was carried out. Physicians who allowed access to their diabetes and/or hypertensive patients' medical records and prescriptions were included in the study. Prescription errors were classified as "filling", "clinical" or "both". Descriptive analysis, identification of potential drug-drug interactions (pD-DI), and comparison of the proportion of patients with prescription errors detected before and after the intervention were performed. The outcome was the decrease in the proportion of patients who received prescriptions with errors after the intervention. Pharmacists detected at least one type of error in 79 out of 160 patients. Errors were "clinical", "both" and "filling" in 47, 21 and 11 of these patients' prescriptions, respectively. The predominant errors were, in the subgroup of prescriptions with "clinical" errors, pD-DI; in the subgroup with "both" errors, lack of information on dosing interval and pD-DI; and in the "filling" subgroup, lack of information on dosing interval. The pD-DI caused 50% of the errors detected, of which 19% were of major severity. The impact of the correction of errors post-intervention was observed in 19% of the patients who had erroneous prescriptions before the intervention of the pharmacist (49.3% to 30.3%, p < 0.05). The impact of the intervention was relevant from a clinical point of view for the public health services in Mexico. The implementation of early warning systems for the most widely prescribed drugs is an alternative for reducing prescription errors and consequently the risks they may cause.
Interventions to reduce medication errors in neonatal care: a systematic review
Nguyen, Minh-Nha Rhylie; Mosel, Cassandra
2017-01-01
Background: Medication errors represent a significant but often preventable cause of morbidity and mortality in neonates. The objective of this systematic review was to determine the effectiveness of interventions to reduce neonatal medication errors. Methods: A systematic review was undertaken of all comparative and noncomparative studies published in any language, identified from searches of PubMed and EMBASE and reference-list checking. Eligible studies were those investigating the impact of any medication safety interventions aimed at reducing medication errors in neonates in the hospital setting. Results: A total of 102 studies were identified that met the inclusion criteria, including 86 comparative and 16 noncomparative studies. Medication safety interventions were classified into six themes: technology (n = 38; e.g. electronic prescribing), organizational (n = 16; e.g. guidelines, policies, and procedures), personnel (n = 13; e.g. staff education), pharmacy (n = 9; e.g. clinical pharmacy service), hazard and risk analysis (n = 8; e.g. error detection tools), and multifactorial (n = 18; e.g. any combination of previous interventions). Significant variability was evident across all included studies, with differences in intervention strategies, trial methods, types of medication errors evaluated, and how medication errors were identified and evaluated. Most studies demonstrated an appreciable risk of bias. The vast majority of studies (>90%) demonstrated a reduction in medication errors. A similar median reduction of 50–70% in medication errors was evident across studies included within each of the identified themes, but findings varied considerably from a 16% increase in medication errors to a 100% reduction in medication errors. Conclusion: While neonatal medication errors can be reduced through multiple interventions aimed at improving the medication use process, no single intervention appeared clearly superior. Further research is required to evaluate the relative cost-effectiveness of the various medication safety interventions to facilitate decisions regarding uptake and implementation into clinical practice. PMID:29387337
[Building questions in forensic medicine and their logical basis].
Kovalev, D; Shmarov, K; Ten'kov, D
2015-01-01
The authors briefly characterize the requirements for the correct formulation of the questions posed to forensic medical experts, with special reference to the mistakes made in building the questions and the ways to avoid them. This article continues the authors' series of publications concerned with the major logical errors encountered in expert conclusions. Further publications will be dedicated to the results of the in-depth analysis of the logical errors contained in the questions posed to forensic medical experts and encountered in the expert conclusions.
Curated eutherian third party data gene data sets.
Premzl, Marko
2016-03-01
The freely available eutherian genomic sequence data sets have advanced the scientific field of genomics. Of note, future revisions of gene data sets are expected, owing to the incompleteness of public eutherian genomic sequence assemblies and potential genomic sequence errors. The eutherian comparative genomic analysis protocol was proposed as guidance for protection against potential genomic sequence errors in public eutherian genomic sequences. The protocol was applied in updates of 7 major eutherian gene data sets, including 812 complete coding sequences deposited in the European Nucleotide Archive as curated third party data gene data sets.
Simulation of the stress computation in shells
NASA Technical Reports Server (NTRS)
Salama, M.; Utku, S.
1978-01-01
A self-teaching computer program is described, whereby the stresses in thin shells can be computed with good accuracy using the best fit approach. The program is designed for use in interactive game mode to allow the structural engineer to learn about (1) the major sources of difficulties and associated errors in the computation of stresses in thin shells, (2) possible ways to reduce the errors, and (3) trade-off between computational cost and accuracy. Included are derivation of the computational approach, program description, and several examples illustrating the program usage.
Investigations of interpolation errors of angle encoders for high precision angle metrology
NASA Astrophysics Data System (ADS)
Yandayan, Tanfer; Geckeler, Ralf D.; Just, Andreas; Krause, Michael; Asli Akgoz, S.; Aksulu, Murat; Grubert, Bernd; Watanabe, Tsukasa
2018-06-01
Interpolation errors at small angular scales are caused by the subdivision of the angular interval between adjacent grating lines into smaller intervals when radial gratings are used in angle encoders. They are often a major error source in precision angle metrology and better approaches for determining them at low levels of uncertainty are needed. Extensive investigations of interpolation errors of different angle encoders with various interpolators and interpolation schemes were carried out by adapting the shearing method to the calibration of autocollimators with angle encoders. The results of the laboratories with advanced angle metrology capabilities are presented which were acquired by the use of four different high precision angle encoders/interpolators/rotary tables. State of the art uncertainties down to 1 milliarcsec (5 nrad) were achieved for the determination of the interpolation errors using the shearing method which provides simultaneous access to the angle deviations of the autocollimator and of the angle encoder. Compared to the calibration and measurement capabilities (CMC) of the participants for autocollimators, the use of the shearing technique represents a substantial improvement in the uncertainty by a factor of up to 5 in addition to the precise determination of interpolation errors or their residuals (when compensated). A discussion of the results is carried out in conjunction with the equipment used.
Visualizing Uncertainty of Point Phenomena by Redesigned Error Ellipses
NASA Astrophysics Data System (ADS)
Murphy, Christian E.
2018-05-01
Visualizing uncertainty remains one of the great challenges in modern cartography. There is no overarching strategy to display the nature of uncertainty, as an effective and efficient visualization depends, besides on the spatial data feature type, heavily on the type of uncertainty. This work presents a design strategy to visualize uncertainty connected to point features. The error ellipse, well-known from mathematical statistics, is adapted to display the uncertainty of point information originating from spatial generalization. Modified designs of the error ellipse show the potential of quantitative and qualitative symbolization and simultaneous point based uncertainty symbolization. The user can intuitively depict the centers of gravity, the major orientation of the point arrays as well as estimate the extents and possible spatial distributions of multiple point phenomena. The error ellipse represents uncertainty in an intuitive way, particularly suitable for laymen. Furthermore it is shown how applicable an adapted design of the error ellipse is to display the uncertainty of point features originating from incomplete data. The suitability of the error ellipse to display the uncertainty of point information is demonstrated within two showcases: (1) the analysis of formations of association football players, and (2) uncertain positioning of events on maps for the media.
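[Editor's illustration] A short sketch of the standard construction behind such designs, assuming a 2x2 covariance matrix for the point phenomenon; the eigendecomposition yields the ellipse axes and the major-axis orientation the abstract refers to (covariance values are illustrative):

```python
import numpy as np

def error_ellipse(cov, p=0.95, n=100):
    """Boundary points of the p-confidence error ellipse for a 2x2 covariance."""
    s = -2.0 * np.log(1.0 - p)               # chi-square quantile, 2 dof
    vals, vecs = np.linalg.eigh(cov)          # squared axis lengths, orientation
    t = np.linspace(0.0, 2.0 * np.pi, n)
    circle = np.stack([np.cos(t), np.sin(t)])
    return vecs @ (np.sqrt(s * vals)[:, None] * circle)   # 2 x n boundary

cov = np.array([[4.0, 1.5],
                [1.5, 1.0]])                  # illustrative point uncertainty
xy = error_ellipse(cov)

vals, vecs = np.linalg.eigh(cov)
angle = np.degrees(np.arctan2(vecs[1, -1], vecs[0, -1]))   # major-axis direction
print(f"major-axis orientation: {angle:.1f} deg; "
      f"semi-axes: {np.sqrt(-2 * np.log(0.05) * vals)[::-1].round(2)}")
```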
Abou-Elsaad, Tamer; Baz, Hemmat; Afsah, Omayma; Mansy, Alzahraa
2015-09-01
Even with early surgical repair, the majority of cleft palate children demonstrate articulation errors and have typical cleft palate speech. The aim was to determine the nature of articulation errors of Arabic consonants in Egyptian Arabic-speaking children with velopharyngeal insufficiency (VPI). Thirty Egyptian Arabic-speaking children with VPI due to cleft palate (whether primarily or secondarily repaired) were studied. Auditory perceptual assessment (APA) of the children's speech was conducted. Nasopharyngoscopy was done to assess the velopharyngeal port (VPP) movements while the child was repeating speech tasks. The Mansoura Arabic Articulation Test (MAAT) was performed to analyze the consonant articulation of these children. The most frequent type of articulatory error observed was substitution, more specifically, backing. Pharyngealization of anterior fricatives was the most frequent substitution, especially for the /s/ sound. The most frequent substituting sounds for other sounds were /ʔ/ followed by /k/ and /n/. Significant correlations were found between the degrees of open nasality and VPP closure and the articulation errors. On the other hand, the sounds (/ʔ/,/ħ/,/ʕ/,/n/,/w/,/j/) were normally articulated in the whole studied group. The determination of articulation errors in VPI children could guide therapists in designing appropriate speech therapy programs for these cases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi
2018-05-01
The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model; 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity; and 3) propagation error (PE), which is caused by errors in the input variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a way to quantify the uncertainty of VCs with a confidence interval based on truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated from a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich uncertainty modelling and analysis-related theories of geographic information science.
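[Editor's illustration] A minimal sketch of the TDR and SDR volume estimators on a Gauss synthetic surface, whose analytic volume (approximately pi over a sufficiently large window) exposes the truncation error directly; the grid size and extent are illustrative, not the paper's settings:

```python
import numpy as np

def volume_trapezoid(z, dx, dy):
    """Trapezoidal double rule (TDR) over a regular grid of heights z."""
    w = np.ones(z.shape[0]); w[0] = w[-1] = 0.5          # 1D trapezoid weights
    return dx * dy * (w[:, None] * w[None, :] * z).sum()

def volume_simpson(z, dx, dy):
    """Simpson's double rule (SDR); grid must have an odd number of nodes."""
    n = z.shape[0]
    assert n % 2 == 1 and z.shape[1] % 2 == 1
    w = np.ones(n); w[1:-1:2] = 4.0; w[2:-1:2] = 2.0     # 1, 4, 2, ..., 4, 1
    w /= 3.0
    return dx * dy * (w[:, None] * w[None, :] * z).sum()

# Gauss synthetic surface with a known volume (~pi over a wide window).
n, L = 101, 6.0
x = np.linspace(-L / 2, L / 2, n); dx = dy = x[1] - x[0]
X, Y = np.meshgrid(x, x)
z = np.exp(-(X**2 + Y**2))
for name, f in [("TDR", volume_trapezoid), ("SDR", volume_simpson)]:
    v = f(z, dx, dy)
    print(f"{name}: {v:.6f}  (error vs pi: {v - np.pi:+.2e})")
```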
[A study of refractive errors in a primary school in Cotonou, Benin].
Sounouvou, I; Tchabi, S; Doutetien, C; Sonon, F; Yehouessi, L; Bassabi, S K
2008-10-01
Determine the epidemiologic aspects and the degree of severity of different refractive errors in primary schoolchildren. A prospective and descriptive study was conducted from 1 December 2005 to 31 March 2006 on schoolchildren ranging from 4 to 16 years of age in a public primary school in Cotonou, Benin. The refraction was evaluated for any visual acuity lower than or equal to 0.7. The study included 1057 schoolchildren. The average age of the study population was 8.5+/-2.6 years with a slight predominance of females (51.8%). The prevalence of refractive error was 10.6% and astigmatism accounted for the most frequent refractive anomaly (91.9%). Myopia and the hyperopia were associated with astigmatism in 29.4% and 16.1% of the cases, respectively. The age bracket from 6 to 11 years accounted for the majority of refractive errors (75.9%), without age and sex being risk factors (p=0.811 and p=0.321, respectively). The average vision of the ametropic eye was 0.61, with a clear predominance of slight refractive errors (89.3%) and particularly of low-level simple astigmatism (45.5%). The relatively low prevalence of refractive error observed does not obviate the need for implementing actions to improve the ocular health of schoolchildren.
Matsumoto, Shokei; Jung, Kyoungwon; Smith, Alan; Coimbra, Raul
2018-06-23
To establish the preventable and potentially preventable death rates in a mature trauma center, to identify the causes of death, and to highlight the lessons learned from these cases. We analyzed data from a Level-1 Trauma Center Registry, collected over a 15-year period. Data on demographics, timing of death, and potential errors were collected. Deaths were judged as preventable (PD), potentially preventable (PPD), or non-preventable (NPD), following a strict external peer-review process. During the 15-year period, there were 874 deaths, 15 (1.7%) and 6 (0.7%) of which were considered PPDs and PDs, respectively. Patients in the PD and PPD groups were not sicker and had less severe head injury than those in the NPD group. The time-death distribution differed according to preventability. We identified 21 errors in the PD and PPD groups, but only 61 (7.3%) errors in the NPD group (n = 853). Errors in judgment accounted for the majority, representing 90.5% of the errors in the PD and PPD groups. Although the numbers of PDs and PPDs were low, denoting maturity of our trauma center, there are important lessons to be learned about how errors in judgment led to deaths that could have been prevented.
Study of style effects on OCR errors in the MEDLINE database
NASA Astrophysics Data System (ADS)
Garrison, Penny; Davis, Diane L.; Andersen, Tim L.; Barney Smith, Elisa H.
2005-01-01
The National Library of Medicine has developed a system for the automatic extraction of data from scanned journal articles to populate the MEDLINE database. Although the 5-engine OCR system used in this process exhibits good performance overall, it does make errors in character recognition that must be corrected in order for the process to achieve the requisite accuracy. The correction process works by feeding words that have characters with less than 100% confidence (as determined automatically by the OCR engine) to a human operator who then must manually verify the word or correct the error. The majority of these errors are contained in the affiliation information zone, where the characters are in italics or small fonts. Therefore only affiliation information data is used in this research. This paper examines the correlation between OCR errors and various character attributes in the MEDLINE database, such as font size, italics, bold, etc., and OCR confidence levels. The motivation for this research is that if a correlation between the character style and types of errors exists, it should be possible to use this information to improve operator productivity by increasing the probability that the correct word option is presented to the human editor. We have determined that this correlation exists, in particular for the case of characters with diacritics.
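[Editor's illustration] A hedged sketch of the kind of attribute-error tabulation such a study involves; the eight records below are fabricated placeholders purely to show the computation, not MEDLINE data:

```python
import pandas as pd

# One row per character: style attributes, OCR confidence, and whether the
# engine got it wrong (all values invented for illustration).
df = pd.DataFrame({
    "italic":     [True, True, False, False, True, False, True, False],
    "small_font": [True, False, True, False, True, True, False, False],
    "confidence": [0.62, 0.91, 0.70, 0.99, 0.55, 0.80, 0.93, 0.97],
    "error":      [1, 0, 1, 0, 1, 0, 0, 0],
})

# Error rate by style attribute: if italics/small fonts concentrate errors,
# the editing UI can pre-rank correction candidates for those characters.
print(df.groupby(["italic", "small_font"])["error"].mean())
print("corr(confidence, error):", df["confidence"].corr(df["error"]).round(2))
```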
Fundamental Bounds for Sequence Reconstruction from Nanopore Sequencers.
Magner, Abram; Duda, Jarosław; Szpankowski, Wojciech; Grama, Ananth
2016-06-01
Nanopore sequencers are emerging as promising new platforms for high-throughput sequencing. As with other technologies, sequencer errors pose a major challenge for their effective use. In this paper, we present a novel information theoretic analysis of the impact of insertion-deletion (indel) errors in nanopore sequencers. In particular, we consider the following problems: (i) for given indel error characteristics and rate, what is the probability of accurate reconstruction as a function of sequence length; (ii) using replicated extrusion (the process of passing a DNA strand through the nanopore), what is the number of replicas needed to accurately reconstruct the true sequence with high probability? Our results provide a number of important insights: (i) the probability of accurate reconstruction of a sequence from a single sample in the presence of indel errors tends quickly (i.e., exponentially) to zero as the length of the sequence increases; and (ii) replicated extrusion is an effective technique for accurate reconstruction. We show that for typical distributions of indel errors, the required number of replicas is a slow function (polylogarithmic) of sequence length, implying that through replicated extrusion, we can sequence large reads using nanopore sequencers. Moreover, we show that in certain cases, the required number of replicas can be related to information-theoretic parameters of the indel error distributions.
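[Editor's illustration] A small simulation illustrating both insights under an assumed per-base error rate; note that the per-position majority vote below ignores the alignment problem real indels cause, so it is only a crude stand-in for the paper's reconstruction analysis:

```python
import numpy as np

# Single-read success: an error-free read of length n occurs with probability
# (1 - p)^n, which decays exponentially in n -- the abstract's first insight.
p = 0.01                                     # per-base error probability (assumed)
for n in (100, 1_000, 10_000):
    print(f"n={n:>6}: P(error-free single read) = {(1 - p) ** n:.3e}")

# Replicated extrusion: estimate by simulation how many replicas make the
# per-position majority usually correct everywhere.
rng = np.random.default_rng(2)
n, trials = 1_000, 500
for replicas in (1, 5, 9, 13):
    wrong = rng.random((trials, replicas, n)) < p       # error events per replica
    majority_bad = wrong.sum(axis=1) > replicas // 2    # majority wrong per position
    ok = (~majority_bad.any(axis=1)).mean()             # all positions recovered
    print(f"replicas={replicas:>2}: P(all positions majority-correct) = {ok:.2f}")
```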
Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid
NASA Technical Reports Server (NTRS)
VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)
1997-01-01
The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).
Scientific Impacts of Wind Direction Errors
NASA Technical Reports Server (NTRS)
Liu, W. Timothy; Kim, Seung-Bum; Lee, Tong; Song, Y. Tony; Tang, Wen-Qing; Atlas, Robert
2004-01-01
An assessment of the scientific impact of random errors in wind direction (less than 45 deg) retrieved from space-based observations under weak wind (less than 7 m/s) conditions was made. These weak winds cover most of the tropical, sub-tropical, and coastal oceans. Introduction of these errors in the semi-daily winds causes, on average, 5% changes of the yearly mean Ekman and Sverdrup volume transports computed directly from the winds. These poleward movements of water are the main mechanisms to redistribute heat from the warmer tropical region to the colder high-latitude regions, and they are the major manifestations of the ocean's function in modifying Earth's climate. Simulation by an ocean general circulation model shows that the wind errors introduce a 5% error in the meridional heat transport at tropical latitudes. The simulation also shows that the erroneous winds cause a pile-up of warm surface water in the eastern tropical Pacific, similar to the conditions during an El Nino episode. Similar wind directional errors cause significant changes in sea-surface temperature and sea-level patterns in coastal oceans in a coastal model simulation. Previous studies have shown that assimilation of scatterometer winds improves 3-5 day weather forecasts in the Southern Hemisphere. When directional information below 7 m/s was withheld, approximately 40% of the improvement was lost.
EPs welcome new focus on reducing diagnostic errors.
2015-12-01
Emergency medicine leaders welcome a major new report from the Institute of Medicine (IOM) calling on providers, policy makers, and government agencies to institute changes to reduce the incidence of diagnostic errors. The 369-page report, "Improving Diagnosis in Health Care," states that the rate of diagnostic errors in this country is unacceptably high and offers a long list of recommendations aimed at addressing the problem. These include large, systemic changes that involve improvements in multiple areas, including health information technology (HIT), professional education, teamwork, and payment reform. Further, of particular interest to emergency physicians are recommended changes to the liability system. The authors of the IOM report state that while most people will likely experience a significant diagnostic error in their lifetime, the importance of this problem is under-appreciated. According to conservative estimates, the report says 5% of adults who seek outpatient care each year experience a diagnostic error. The report also notes that research over many decades shows diagnostic errors contribute to roughly 10% of all deaths. The report says more steps need to be taken to facilitate inter-professional and intra-professional teamwork throughout the diagnostic process. Experts concur with the report's finding that mechanisms need to be developed so that providers receive ongoing feedback on their diagnostic performance.
ERIC Educational Resources Information Center
Bieron, Joseph F.; Dinan, Frank J.
2000-01-01
Presents a chemistry report on methamphetamine synthesis downloaded from the Internet and asks students to point out errors and answer questions about the text. Discusses the methods of methamphetamine synthesis and major issues in writing a case study. (YDS)
Organization Development-What Is It? A Brief Overview.
ERIC Educational Resources Information Center
Hiett, Alyson D.; And Others
1989-01-01
Provides basic knowledge about organizational development (OD). Includes basic strategies of OD; definitions and themes of OD interventions; major types of OD interventions; errors in systems approach to OD; and role of OD consultant. (Author/ABL)
Chu, David; Xiao, Jane; Shah, Payal; Todd, Brett
2018-06-20
Cognitive errors are a major contributor to medical error. Traditionally, medical errors at teaching hospitals are analyzed in morbidity and mortality (M&M) conferences. We aimed to describe the frequency of cognitive errors in relation to the occurrence of diagnostic and other error types, in cases presented at an emergency medicine (EM) resident M&M conference. We conducted a retrospective study of all cases presented at a suburban US EM residency monthly M&M conference from September 2011 to August 2016. Each case was reviewed using the electronic medical record (EMR) and notes from the M&M case by two EM physicians. Each case was categorized by type of primary medical error that occurred as described by Okafor et al. When a diagnostic error occurred, the case was reviewed for contributing cognitive and non-cognitive factors. Finally, when a cognitive error occurred, the case was classified into faulty knowledge, faulty data gathering or faulty synthesis, as described by Graber et al. Disagreements in error type were mediated by a third EM physician. A total of 87 M&M cases were reviewed; the two reviewers agreed on 73 cases, and 14 cases required mediation by a third reviewer. Forty-eight cases involved diagnostic errors, 47 of which were cognitive errors. Of these 47 cases, 38 involved faulty synthesis, 22 involved faulty data gathering and only 11 involved faulty knowledge. Twenty cases contained more than one type of cognitive error. Twenty-nine cases involved both a resident and an attending physician, while 17 cases involved only an attending physician. Twenty-one percent of the resident cases involved all three cognitive errors, while none of the attending cases involved all three. Forty-one percent of the resident cases and only 6% of the attending cases involved faulty knowledge. One hundred percent of the resident cases and 94% of the attending cases involved faulty synthesis. Our review of 87 EM M&M cases revealed that cognitive errors are commonly involved in cases presented, and that these errors are less likely due to deficient knowledge and more likely due to faulty synthesis. M&M conferences may therefore provide an excellent forum to discuss cognitive errors and how to reduce their occurrence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk
Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA's capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
Image processing and analysis using neural networks for optometry area
NASA Astrophysics Data System (ADS)
Netto, Antonio V.; Ferreira de Oliveira, Maria C.
2002-11-01
In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack technique (HS), in order to extract information to formulate a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is to be carried out using an Artificial Intelligence system based on neural nets, fuzzy logic and classifier combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors, based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under exam from the same image used to detect refraction errors.
Mohamed, Dhibi; Lotfi, Belkacem
2016-12-01
In this study, the Manchester Driver Behaviour Questionnaire (DBQ) was used to examine the self-reported driving behaviours of a group of Tunisian drivers (N = 900) and to collect socio-demographic data, driver behaviours and DBQ items. A sample of Tunisian drivers above 18 years of age was selected. The aim of the present study was to investigate the factorial structure of the DBQ in Tunisia. The principal component analysis identified a three-factor solution: inattention errors, dangerous errors and dangerous violations. Logistic regression analysis showed that dangerous errors, dangerous violations and speeding preference factors predicted crash involvement in Tunisia. Speeding is the most common form of aberrant behaviour reported by drivers in the current sample. It remains one of the major road safety concerns.
Bao, Guzhi; Wickenbrock, Arne; Rochester, Simon; Zhang, Weiping; Budker, Dmitry
2018-01-19
The nonlinear Zeeman effect can induce splitting and asymmetries of magnetic-resonance lines in the geophysical magnetic-field range. This is a major source of "heading error" for scalar atomic magnetometers. We demonstrate a method to suppress the nonlinear Zeeman effect and heading error based on spin locking. In an all-optical synchronously pumped magnetometer with separate pump and probe beams, we apply a radio-frequency field which is in phase with the precessing magnetization. This results in the collapse of the multicomponent asymmetric magnetic-resonance line with ∼100 Hz width in the Earth-field range into a single peak with a width of 22 Hz, whose position is largely independent of the orientation of the sensor within a range of orientation angles. The technique is expected to be broadly applicable in practical magnetometry, potentially boosting the sensitivity and accuracy of Earth-surveying magnetometers by increasing the magnetic-resonance amplitude, decreasing its width, and removing the important and limiting heading-error systematic.
Patient motion tracking in the presence of measurement errors.
Haidegger, Tamás; Benyó, Zoltán; Kazanzides, Peter
2009-01-01
The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcome. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning, however certain limitations still remain. Measurement noise and unintended changes in the operating room environment can result in major errors. Positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgery procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms, and apply the appropriate compensation with an average of 1.24 mm positioning error after 2 s of setup time.
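[Editor's illustration] A minimal sketch of the filtering step, assuming a 1D constant-velocity state and illustrative noise levels (the actual system tracks 3D pose with an optical tracker):

```python
import numpy as np

dt = 0.02                                   # 50 Hz tracker updates (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition: position, velocity
H = np.array([[1.0, 0.0]])                  # only position is measured
Q = 1e-4 * np.eye(2)                        # process noise covariance
R = np.array([[0.25]])                      # tracker noise, mm^2 (0.5 mm sigma)

x = np.zeros(2); P = np.eye(2)              # initial state and covariance
rng = np.random.default_rng(3)
true_pos = 0.0
for step in range(100):
    true_pos += 0.05                        # patient drifts 0.05 mm per step
    z = true_pos + rng.normal(0.0, 0.5)     # noisy tracker measurement
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
print(f"true {true_pos:.2f} mm, filtered {x[0]:.2f} mm")
```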
Decoy-state quantum key distribution with more than three types of photon intensity pulses
NASA Astrophysics Data System (ADS)
Chau, H. F.
2018-04-01
The decoy-state method closes source security loopholes in quantum key distribution (QKD) using a laser source. In this method, accurate estimates of the detection rates of vacuum and single-photon events plus the error rate of single-photon events are needed to give a good enough lower bound of the secret key rate. Nonetheless, the current estimation method for these detection and error rates, which uses three types of photon intensities, is accurate up to about 1% relative error. Here I report an experimentally feasible way that greatly improves these estimates and hence increases the one-way key rate of the BB84 QKD protocol with unbiased basis selection by at least 20% on average in realistic settings. The major tricks are the use of more than three types of photon intensities plus the fact that estimating bounds on the above detection and error rates is numerically stable, even though these bounds are related to the inversion of a high-condition-number matrix.
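[Editor's illustration] A toy version of the estimation problem, assuming six intensities and yields truncated at five photon-number states; in this noiseless setting ordinary least squares recovers the vacuum and single-photon yields exactly, whereas in practice one derives bounds despite the design matrix's high condition number:

```python
import numpy as np
from math import exp, factorial

# Measured gains constrain the yields Y_n through the Poisson mixture
#     Q(mu) = sum_n  e^{-mu} mu^n / n!  *  Y_n .
# Using more intensities than the usual three over-determines the truncated
# system, tightening the Y_0 (vacuum) and Y_1 (single photon) estimates.
mus = np.array([0.002, 0.05, 0.1, 0.2, 0.4, 0.6])    # six intensities (assumed)
Y_true = np.array([2e-5, 5e-3, 1e-2, 1.5e-2, 2e-2])  # yields up to n = 4 (assumed)

A = np.array([[exp(-m) * m**n / factorial(n) for n in range(len(Y_true))]
              for m in mus])                          # Poisson design matrix
Q = A @ Y_true                                        # noiseless measured gains

# Non-negative least squares would be the safer choice with noisy data;
# plain lstsq suffices for this noiseless toy.
Y_est, *_ = np.linalg.lstsq(A, Q, rcond=None)
print(f"Y0: true {Y_true[0]:.1e}, est {Y_est[0]:.1e}")
print(f"Y1: true {Y_true[1]:.1e}, est {Y_est[1]:.1e}")
```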
Charles, Krista; Cannon, Margaret; Hall, Robert; Coustasse, Alberto
2014-01-01
Computerized provider order entry (CPOE) systems allow physicians to prescribe patient services electronically. In hospitals, CPOE essentially eliminates the need for handwritten paper orders and achieves cost savings through increased efficiency. The purpose of this research study was to examine the benefits of and barriers to CPOE adoption in hospitals to determine the effects on medical errors and adverse drug events (ADEs) and examine cost and savings associated with the implementation of this newly mandated technology. This study followed a methodology using the basic principles of a systematic review and referenced 50 sources. CPOE systems in hospitals were found to be capable of reducing medical errors and ADEs, especially when CPOE systems are bundled with clinical decision support systems designed to alert physicians and other healthcare providers of pending lab or medical errors. However, CPOE systems face major barriers associated with adoption in a hospital system, mainly high implementation costs and physicians' resistance to change.
Daboul, Amro; Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea
2018-01-01
Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, as those that are becoming increasingly common in the 'era of big data'. PMID:29787586
Eliminating US hospital medical errors.
Kumar, Sameer; Steinebach, Marc
2008-01-01
Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and to present a closed-loop, mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams and devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible into the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing and improving education related to the quality of service delivery to minimize clinical errors. This will increase fixed costs, especially in the short term. This paper focuses the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run and also ensure patient safety.
Anderson, N G; Jolley, I J; Wells, J E
2007-08-01
To determine the major sources of error in ultrasonographic assessment of fetal weight and whether they have changed over the last decade. We performed a prospective observational study in 1991 and again in 2000 of a mixed-risk pregnancy population, estimating fetal weight within 7 days of delivery. In 1991, the Rose and McCallum formula was used for 72 deliveries. Inter- and intraobserver agreement was assessed within this group. Bland-Altman measures of agreement from log data were calculated as ratios. We repeated the study in 2000 in 208 consecutive deliveries, comparing predicted and actual weights for 12 published equations using Bland-Altman and percentage error methods. We compared bias (mean percentage error), precision (SD percentage error), and their consistency across the weight ranges. 95% limits of agreement ranged from -4.4% to +3.3% for inter- and intraobserver estimates, but were -18.0% to +24.0% for estimated and actual birth weight. There was no improvement in accuracy between 1991 and 2000. In 2000 only six of the 12 published formulae had overall bias within 7% and precision within 15%. There was greater bias and poorer precision in nearly all equations if the birth weight was < 1,000 g. Observer error is a relatively minor component of the error in estimating fetal weight; error due to the equation is a larger source of error. Improvements in ultrasound technology have not improved the accuracy of estimating fetal weight. Comparison of methods of estimating fetal weight requires statistical methods that can separate out bias, precision and consistency. Estimating fetal weight in the very low birth weight infant is subject to much greater error than it is in larger babies. Copyright (c) 2007 ISUOG. Published by John Wiley & Sons, Ltd.
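The bias/precision vocabulary used above maps onto a short calculation. The following sketch (with hypothetical inputs) computes the mean percentage error, its SD, and the Bland-Altman style 95% limits of agreement for a set of weight estimates.

```python
import numpy as np

def weight_agreement(estimated_g, actual_g):
    """Percentage-error summary for fetal weight estimates:
    bias = mean percentage error, precision = SD of percentage error,
    plus the 95% limits of agreement (bias +/- 1.96 * SD)."""
    est, act = np.asarray(estimated_g, float), np.asarray(actual_g, float)
    pe = 100.0 * (est - act) / act
    bias, precision = pe.mean(), pe.std(ddof=1)
    return bias, precision, (bias - 1.96 * precision, bias + 1.96 * precision)

# e.g. weight_agreement([3100, 2450, 3900], [3000, 2600, 3750])
```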
Teerawattananon, Kanlaya; Myint, Chaw-Yin; Wongkittirux, Kwanjai; Teerawattananon, Yot; Chinkulkitnivat, Bunyong; Orprayoon, Surapong; Kusakul, Suwat; Tengtrisorn, Supaporn; Jenchitr, Watanee
2014-01-01
As part of the development of a system for the screening of refractive error in Thai children, this study describes the accuracy and feasibility of establishing a screening program conducted by teachers. A cross-sectional descriptive and analytical study was conducted in 17 schools in four provinces representing the four geographic regions of Thailand. A two-stage cluster sampling design was employed to compare the detection rate of refractive error among eligible students between trained teachers and health professionals. Serial focus group discussions were held for teachers and parents in order to understand their attitude towards refractive error screening at schools and the potential success factors and barriers. The detection rate of refractive error screening by teachers among pre-primary school children is relatively low (21%) for mild visual impairment but higher for moderate visual impairment (44%). The detection rate for primary school children is high for both levels of visual impairment (52% for mild and 74% for moderate). The focus group discussions reveal that both teachers and parents would benefit from further education regarding refractive errors and that the vast majority of teachers are willing to conduct a school-based screening program. Refractive error screening by health professionals in pre-primary and primary school children is not currently implemented in Thailand due to resource limitations. However, evidence suggests that a refractive error screening program conducted in schools by teachers in the country is reasonable and feasible because the detection and treatment of refractive error in very young generations is important and the screening program can be implemented and conducted with relatively low costs.
Shanks, Leslie; Bil, Karla; Fernhout, Jena
2015-01-01
Objective To analyse the results from the first 3 years of implementation of a medical error reporting system in Médecins Sans Frontières-Operational Centre Amsterdam (MSF) programs. Methodology A medical error reporting policy was developed with input from frontline workers and introduced to the organisation in June 2010. The definition of medical error used was “the failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim.” All confirmed error reports were entered into a database without the use of personal identifiers. Results 179 errors were reported from 38 projects in 18 countries over the period of June 2010 to May 2013. The rate of reporting was 31, 42, and 106 incidents/year for reporting years 1, 2 and 3, respectively. The majority of errors were categorized as dispensing errors (62 cases or 34.6%), errors or delays in diagnosis (24 cases or 13.4%) and inappropriate treatment (19 cases or 10.6%). The impact of the error was categorized as no harm (58, 32.4%), harm (70, 39.1%), death (42, 23.5%) and unknown in 9 (5.0%) reports. Disclosure to the patient took place in 34 cases (19.0%), did not take place in 46 (25.7%), was not applicable for 5 (2.8%) cases and not reported for 94 (52.5%). Remedial actions introduced at headquarters level included guideline revisions and changes to medical supply procedures. At field level improvements included increased training and supervision, adjustments in staffing levels, and adaptations to the organization of the pharmacy. Conclusion It was feasible to implement a voluntary reporting system for medical errors despite the complex contexts in which MSF intervenes. The reporting policy led to system changes that improved patient safety and accountability to patients. Challenges remain in achieving widespread acceptance of the policy as evidenced by the low reporting and disclosure rates. PMID:26381622
Chua, Siew-Siang; Choo, Sim-Mei; Sulaiman, Che Zuraini; Omar, Asma; Thong, Meow-Keong
2017-01-01
Background and purpose Drug administration errors are more likely to reach the patient than other medication errors. The main aim of this study was to determine whether the sharing of information on drug administration errors among health care providers would reduce such problems. Patients and methods This study involved direct, undisguised observations of drug administrations in two pediatric wards of a major teaching hospital in Kuala Lumpur, Malaysia. This study consisted of two phases: Phase 1 (pre-intervention) and Phase 2 (post-intervention). Data were collected by two observers over a 40-day period in both Phase 1 and Phase 2 of the study. Both observers were pharmacy graduates: Observer 1 had just completed her undergraduate pharmacy degree, whereas Observer 2 was doing her one-year internship as a provisionally registered pharmacist in the hospital under study. A drug administration error was defined as a discrepancy between the drug regimen received by the patient and that intended by the prescriber, and also as drug administration procedures that did not follow standard hospital policies and procedures. Results from Phase 1 of the study were analyzed, presented and discussed with the ward staff before commencement of data collection in Phase 2. Results A total of 1,284 and 1,401 doses of drugs were administered in Phase 1 and Phase 2, respectively. The rate of drug administration errors reduced significantly from Phase 1 to Phase 2 (44.3% versus 28.6%, respectively; P<0.001). Logistic regression analysis showed that the adjusted odds of drug administration errors in Phase 1 of the study were almost three times those in Phase 2 (P<0.001). The most common types of errors were incorrect administration technique and incorrect drug preparation. Nasogastric and intravenous routes of drug administration contributed significantly to the rate of drug administration errors. Conclusion This study showed that sharing of the types of errors that had occurred was significantly associated with a reduction in drug administration errors. PMID:28356748
Association of Selected Intersection Factors with Red-Light-Running Crashes
DOT National Transportation Integrated Search
2000-05-01
Red-Light-Running (RLR) crashes represent a significant safety problem that warrants attention. It can be hypothesized that the majority of these crashes result from inadvertent driver error or intentional violation. However, very little is known abo...
A constrained-gradient method to control divergence errors in numerical MHD
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.
2016-10-01
In numerical magnetohydrodynamics (MHD), a major challenge is maintaining ∇·B = 0. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, 'divergence-cleaning' schemes reduce the ∇·B errors; however, they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. Unlike 'locally divergence-free' methods, this actually minimizes the numerically unstable ∇·B terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum ∇·B errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no ∇·B cleaning is used). By preventing large ∇·B at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the ∇·B errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or '8-wave' cleaning can produce order-of-magnitude errors.
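For readers unfamiliar with 'projection' in this context, the sketch below shows the textbook spectral version on a periodic grid: solve ∇²φ = ∇·B and subtract ∇φ. This is a generic Helmholtz projection for illustration only, not the constrained-gradient scheme of the paper (which works with the least-squares reconstruction of a meshless method).

```python
import numpy as np

def project_divergence_free(bx, by, dx=1.0):
    """Remove the curl-free part of (bx, by) on a periodic 2D grid:
    solve laplace(phi) = div(B) in Fourier space, then subtract grad(phi)."""
    ny, nx = bx.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)               # shapes (ny, nx), matching fft2
    bxh, byh = np.fft.fft2(bx), np.fft.fft2(by)
    div = KX * bxh + KY * byh                  # spectral divergence
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                             # avoid 0/0; mean mode carries no divergence
    phi = div / k2
    bxh -= KX * phi                            # B -> B - grad(phi)
    byh -= KY * phi
    return np.fft.ifft2(bxh).real, np.fft.ifft2(byh).real
```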
Venkataraman, Aishwarya; Siu, Emily; Sadasivam, Kalaimaran
2016-11-01
Medication errors, including infusion prescription errors, are a major public health concern, especially in paediatric patients. There is some evidence that electronic or web-based calculators could minimise these errors. To evaluate the impact of an electronic infusion calculator on the frequency of infusion errors in the Paediatric Critical Care Unit of The Royal London Hospital, London, United Kingdom. We devised an electronic infusion calculator that calculates the appropriate concentration, rate and dose for the selected medication based on the recorded weight and age of the child and then prints a valid prescription chart. The electronic infusion calculator was implemented in April 2015 in the Paediatric Critical Care Unit. A prospective study, five months before and five months after implementation of the electronic infusion calculator, was conducted. Data on the following variables were collected onto a proforma: medication dose, infusion rate, volume, concentration, diluent, legibility, and missing or incorrect patient details. A total of 132 handwritten prescriptions were reviewed prior to electronic infusion calculator implementation and 119 electronic infusion calculator prescriptions were reviewed after implementation. Handwritten prescriptions had a higher error rate (32.6%) than electronic infusion calculator prescriptions (<1%; p < 0.001). Electronic infusion calculator prescriptions had no dose, volume or rate calculation errors, in contrast to handwritten prescriptions, hence warranting very few pharmacy interventions. Use of the electronic infusion calculator for infusion prescription significantly reduced the total number of infusion prescribing errors in the Paediatric Critical Care Unit and has enabled more efficient use of medical and pharmacy time resources.
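The arithmetic such a calculator automates is simple but error-prone by hand. Below is a minimal sketch of that arithmetic; the function and parameter names are hypothetical, and this is not the validated tool used in the study.

```python
def infusion_rate_ml_per_h(dose_mcg_kg_min, weight_kg, drug_mg, diluent_ml):
    """Convert a prescribed dose (mcg/kg/min) into a pump rate (mL/h)
    for a syringe containing drug_mg of drug in diluent_ml of fluid."""
    conc_mcg_per_ml = drug_mg * 1000.0 / diluent_ml   # syringe concentration
    dose_mcg_per_h = dose_mcg_kg_min * weight_kg * 60.0
    return dose_mcg_per_h / conc_mcg_per_ml

# e.g. 0.1 mcg/kg/min for a 12 kg child, 3 mg of drug in 50 mL:
# infusion_rate_ml_per_h(0.1, 12, 3, 50)  ->  1.2 mL/h
```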
Althomali, Talal A
2018-01-01
Refractive errors are a form of optical defect affecting more than 2.3 billion people worldwide. As refractive errors are a major contributor to mild to moderate vision impairment, assessment of their relative proportion would be helpful in the strategic planning of health programs. To determine the pattern of the relative proportion of types of refractive errors among adult candidates seeking laser-assisted refractive correction in a private clinic setting in Saudi Arabia. The clinical charts of 687 patients (1374 eyes) with mean age 27.6 ± 7.5 years who desired laser vision correction and underwent a pre-LASIK work-up were reviewed retrospectively. Refractive errors were classified as myopia, hyperopia and astigmatism. Manifest refraction spherical equivalent (MRSE) was applied to define refractive errors. Distribution percentage of different types of refractive errors; myopia, hyperopia and astigmatism. The mean spherical equivalent for 1374 eyes was -3.11 ± 2.88 D. Of the total 1374 eyes, 91.8% (n = 1262) eyes had myopia, 4.7% (n = 65) eyes had hyperopia and 3.4% (n = 47) had emmetropia with astigmatism. Distribution percentage of astigmatism (cylinder error of ≥ 0.50 D) was 78.5% (1078/1374 eyes); of which 69.1% (994/1374) had low to moderate astigmatism and 9.4% (129/1374) had high astigmatism. Of the adult candidates seeking laser refractive correction in a private setting in Saudi Arabia, myopia represented the greatest burden, with more than 90% of eyes myopic, compared to hyperopia in nearly 5% of eyes. Astigmatism was present in more than 78% of eyes.
Limited Area Predictability: Is There a Limit to the Operational Usefulness of a LAM?
NASA Astrophysics Data System (ADS)
Mesinger, F.
The issue of limited area predictability in the context of the operational experience of the Eta Model, driven by the LBCs of the NCEP global spectral (Avn) model, is examined. The traditional view is that "the contamination at the lateral boundaries ... limits the operational usefulness of the LAM beyond some forecast time range". In the case of the Eta, this contamination consists not only of the lower resolution of the Avn LBCs and the much discussed mathematical "lateral boundary error", but also of the use of the LBCs of the previous Avn run at 0000 and 1200 UTC, a practice estimated to amount to about an 8 h loss in accuracy. Looking for signs of the Eta's accuracy falling, in relative terms, behind that of the Avn, we examined the trend of the Eta vs Avn precipitation scores, the rms fits to raobs of the two models as a function of time, and the errors of these models at extended forecast times in placing the centers of major lows. In none of these efforts, some including forecasts out to 84 h, were we able to notice signs of the Eta's accuracy being visibly affected by the inflow of the lateral boundary errors. It is therefore hypothesized that some of the Eta design features compensate for the increasing influence of the Avn LBC errors. Candidate features are discussed, with the eta coordinate being a contender to play a major role. Given that this situation is possible for the pair of models discussed, the existence of a general limit to the operational usefulness of a LAM seems questionable.
Dynamic characterization of Galfenol
NASA Astrophysics Data System (ADS)
Scheidler, Justin J.; Asnani, Vivake M.; Deng, Zhangxian; Dapino, Marcelo J.
2015-04-01
A novel and precise characterization of the constitutive behavior of solid and laminated research-grade, polycrystalline Galfenol (Fe81.6Ga18.4) under quasi-static (1 Hz) and dynamic (4 to 1000 Hz) stress loadings was recently conducted by the authors. This paper summarizes the characterization by focusing on the experimental design and the dynamic sensing response of the solid Galfenol specimen. Mechanical loads are applied using a high-frequency load frame. The dynamic stress amplitude for minor and major loops is 2.88 and 31.4 MPa, respectively. Dynamic minor and major loops are measured for the bias condition resulting in maximum quasi-static sensitivity. Three key sources of error in the dynamic measurements are accounted for: (1) electromagnetic noise in strain signals due to Galfenol's magnetic response, (2) error in load signals due to the inertial force of fixturing, and (3) time delays imposed by conditioning electronics. For dynamic characterization, strain error is kept below 1.2% of full scale by wiring two collocated gauges in series (noise cancellation) and through lead wire weaving. Inertial force error is kept below 0.41% by measuring the dynamic force in the specimen using a nearly collocated piezoelectric load washer. The phase response of all conditioning electronics is explicitly measured and corrected for. In general, as frequency increases, the sensing response becomes more linear due to an increase in eddy currents. The location of positive and negative saturation is the same at all frequencies. As frequency increases above about 100 Hz, the elbow in the strain versus stress response disappears as the active (soft) regime stiffens toward the passive (hard) regime.
Phoneme Error Pattern by Heritage Speakers of Spanish on an English Word Recognition Test.
Shi, Lu-Feng
2017-04-01
Heritage speakers acquire their native language from home use in their early childhood. As the native language is typically a minority language in the society, these individuals receive their formal education in the majority language and eventually develop greater competency with the majority than their native language. To date, there have not been specific research attempts to understand word recognition by heritage speakers. It is not clear if and to what degree we may infer from evidence based on bilingual listeners in general. This preliminary study investigated how heritage speakers of Spanish perform on an English word recognition test and analyzed their phoneme errors. A prospective, cross-sectional, observational design was employed. Twelve normal-hearing adult Spanish heritage speakers (four men, eight women, 20-38 yr old) participated in the study. Their language background was obtained through the Language Experience and Proficiency Questionnaire. Nine English monolingual listeners (three men, six women, 20-41 yr old) were also included for comparison purposes. Listeners were presented with 200 Northwestern University Auditory Test No. 6 words in quiet. They repeated each word orally and in writing. Their responses were scored by word, word-initial consonant, vowel, and word-final consonant. Performance was compared between groups with Student's t test or analysis of variance. Group-specific error patterns were primarily descriptive, but intergroup comparisons were made using 95% or 99% confidence intervals for proportional data. The two groups of listeners yielded comparable scores when their responses were examined by word, vowel, and final consonant. However, heritage speakers of Spanish misidentified significantly more word-initial consonants and had significantly more difficulty with initial /p, b, h/ than their monolingual peers. The two groups yielded similar patterns for vowel and word-final consonants, but heritage speakers made significantly fewer errors with /e/ and more errors with word-final /p, k/. Data reported in the present study lead to a twofold conclusion. On the one hand, normal-hearing heritage speakers of Spanish may misidentify English phonemes in patterns different from those of English monolingual listeners. Not all phoneme errors can be readily understood by comparing Spanish and English phonology, suggesting that Spanish heritage speakers differ in performance from other Spanish-English bilingual listeners. On the other hand, the absolute number of errors and the error pattern of most phonemes were comparable between English monolingual listeners and Spanish heritage speakers, suggesting that audiologists may assess word recognition in quiet in the same way for these two groups of listeners, if diagnosis is based on words, not phonemes. American Academy of Audiology
Regional climate modeling over the Maritime Continent: Assessment of RegCM3-BATS1e and RegCM3-IBIS
NASA Astrophysics Data System (ADS)
Gianotti, R. L.; Zhang, D.; Eltahir, E. A.
2010-12-01
Despite its importance to global rainfall and circulation processes, the Maritime Continent remains a region that is poorly simulated by climate models. Relatively few studies have been undertaken using a model with fine enough resolution to capture the small-scale spatial heterogeneity of this region and associated land-atmosphere interactions. These studies have shown that even regional climate models (RCMs) struggle to reproduce the climate of this region, particularly the diurnal cycle of rainfall. This study builds on previous work by undertaking a more thorough evaluation of RCM performance in simulating the timing and intensity of rainfall over the Maritime Continent, with identification of major sources of error. An assessment was conducted of the Regional Climate Model Version 3 (RegCM3) used in a coupled system with two land surface schemes: Biosphere Atmosphere Transfer System Version 1e (BATS1e) and Integrated Biosphere Simulator (IBIS). The model’s performance in simulating precipitation was evaluated against the 3-hourly TRMM 3B42 product, with some validation provided of this TRMM product against ground station meteorological data. It is found that the model suffers from three major errors in the rainfall histogram: underestimation of the frequency of dry periods, overestimation of the frequency of low intensity rainfall, and underestimation of the frequency of high intensity rainfall. Additionally, the model shows error in the timing of the diurnal rainfall peak, particularly over land surfaces. These four errors were largely insensitive to the choice of boundary conditions, convective parameterization scheme or land surface scheme. The presence of a wet or dry bias in the simulated volumes of rainfall was, however, dependent on the choice of convection scheme and boundary conditions. This study also showed that the coupled model system has significant error in overestimation of latent heat flux and evapotranspiration from the land surface, and specifically overestimation of interception loss with concurrent underestimation of transpiration, irrespective of the land surface scheme used. Discussion of the origin of these errors is provided, with some suggestions for improvement.
Common errors in multidrug-resistant tuberculosis management.
Monedero, Ignacio; Caminero, Jose A
2014-02-01
Multidrug-resistant tuberculosis (MDR-TB), defined as resistance to at least rifampicin and isoniazid, has an increasing burden and threatens TB control. Diagnosis is limited and usually delayed, while treatment is long-lasting, toxic and poorly effective. MDR-TB management in scarce-resource settings is demanding; however, it is feasible and extremely necessary. In these settings, cure rates do not usually exceed 60-70%, and MDR-TB management is novel for many TB programs. In this challenging scenario, both clinical and programmatic errors are likely to occur. The majority of these errors may be prevented or alleviated with appropriate and timely training, in addition to uninterrupted procurement of high-quality drugs, updated national guidelines and laws, and an overall improvement in management capacities. Until new tools for diagnosis and shorter, less toxic treatments become available in developing countries, MDR-TB management will remain complex in scarce-resource settings. Focusing special attention on the common errors in diagnosis, regimen design and especially treatment delivery may benefit patients and programs working with current, outdated tools. The present article is a compilation of typical errors repeatedly observed by the authors in a wide range of countries during technical assistance missions and trainings.
Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems
Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang
2015-01-01
The major challenges for ultra-wideband (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and can mitigate the ranging error without recognition of the channel conditions. Entropy is used to measure the randomness of the received signals, and the FP is identified as the sample that is followed by a sharp entropy decrease. SVM regression is employed to mitigate the ranging error by modeling the relationship between characteristics of the received signals and the ranging error. Numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches. PMID:26007726
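A rough sketch of the two stages follows. The window length and entropy-drop threshold are hypothetical values, and the SVM regression step is indicated with scikit-learn's SVR; the actual feature set used in the paper is only summarized in comments.

```python
import numpy as np
from sklearn.svm import SVR

def window_entropy(x, bins=16):
    """Shannon entropy of the amplitude histogram of one window."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

def estimate_toa(signal, win=64, drop=0.5):
    """Flag the first window whose entropy falls by more than `drop`
    bits relative to the previous one as the first-path arrival."""
    ent = [window_entropy(signal[i:i + win])
           for i in range(0, len(signal) - win, win)]
    for i in range(1, len(ent)):
        if ent[i - 1] - ent[i] > drop:
            return i * win            # sample index of the detected FP
    return None

# Ranging-error mitigation: learn the residual error from waveform
# features (e.g. rise time, RMS delay spread, kurtosis), then subtract:
#   svr = SVR(kernel="rbf").fit(train_features, train_range_errors)
#   corrected_range = raw_range - svr.predict(test_features)
```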
Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation
NASA Astrophysics Data System (ADS)
Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.
2018-03-01
Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes by classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t⁻²) to O(1) in practice for an [[n, k, d = 2t + 1]] code.
Bakic, Jasmina; Pourtois, Gilles; Jepma, Marieke; Duprat, Romain; De Raedt, Rudi; Baeken, Chris
2017-01-01
Major depressive disorder (MDD) creates debilitating effects on a wide range of cognitive functions, including reinforcement learning (RL). In this study, we sought to assess whether reward processing as such, or alternatively the complex interplay between motivation and reward, might account for the abnormal reward-based learning in MDD. A total of 35 treatment-resistant MDD patients and 44 age-matched healthy controls (HCs) performed a standard probabilistic learning task. RL was characterized using behavioral data, computational modeling and event-related brain potentials (ERPs). MDD patients showed learning rates comparable to those of HCs. However, they showed decreased lose-shift responses as well as blunted subjective evaluations of the reinforcers used during the task, relative to HCs. Moreover, MDD patients showed normal internal (at the level of error-related negativity, ERN) but abnormal external (at the level of feedback-related negativity, FRN) reward prediction error (RPE) signals during RL, selectively when additional efforts had to be made to establish learning. Collectively, these results lend support to the assumption that MDD does not impair reward processing per se during RL. Instead, it seems to alter the processing of the emotional value of (external) reinforcers during RL, when additional intrinsic motivational processes have to be engaged. © 2016 Wiley Periodicals, Inc.
Liu, Danyang; Gan, Rongchang; Zhang, Weidi; Wang, Wei; Saiyin, Hexige; Zeng, Wenjiao; Liu, Guoyuan
2018-01-01
Emergency medicine is a 'high-risk' specialty. Some diseases develop suddenly and progress rapidly, and sudden unexpected deaths in the emergency department (ED) may cause medical disputes. We aimed to assess discrepancies between antemortem clinical diagnoses and postmortem autopsy findings concerning emergency medicine dispute cases and to identify the most common major missed diagnoses. Clinical files and autopsy reports were retrospectively analysed and interpreted. Discrepancies between clinical diagnoses and autopsy diagnoses were evaluated using the modified Goldman classification into major and minor discrepancies. The difference between diagnosis groups was compared with the Pearson χ² test. Of the 117 cases included in this study, 71 cases (58 class I and 13 class II diagnostic errors) were classified as major discrepancies (60.7%). The most common major diagnoses were cardiovascular diseases (54 cases), followed by pulmonary diseases, infectious diseases and others. The difference in major discrepancy between the diagnosis groups was significant (p<0.001). Aortic dissection and myocardial infarction were the most common causes of death (15 cases for each disease) and the most common missed class I diagnoses (80% and 66.7%, respectively), higher than the average of 49.6% for all class I errors in the study patients. Major discrepancies between clinical diagnoses and postmortem examinations are common in emergency medical dispute cases; acute aortic dissection and myocardial infarction are the most frequently missed major diagnoses that ED clinicians should pay special attention to in practice. This study reaffirmed the necessity and usefulness of autopsy in auditing death in EDs. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Evaluating the technique of using inhalation device in COPD and bronchial asthma patients.
Arora, Piyush; Kumar, Lokender; Vohra, Vikram; Sarin, Rohit; Jaiswal, Anand; Puri, M M; Rathee, Deepti; Chakraborty, Pitambar
2014-07-01
In asthma management, poor handling of inhalation devices and wrong inhalation technique are associated with decreased medication delivery and poor disease control. The key to overcoming the drawbacks in inhalation technique is to make patients familiar with issues related to the correct use and performance of these medical devices. The objective of this study was to evaluate and analyse the inhalation technique of patients with COPD and bronchial asthma. A total of 300 BA or COPD patients using different types of inhalation devices were included in this observational study. Data were captured using a proforma and were analysed using SPSS version 15.0. Out of the 300 enrolled patients, 247 (82.3%) made at least one error. Errors were most frequent among subjects using an MDI (94.3%), followed by a DPI (82.3%) and an MDI with spacer (78%), while nebulizer users made the fewest errors (70%) (p = 0.005). Illiterate patients had an error rate of 95.2%, while postgraduates and professionals had a rate of 33.3%. This difference was statistically significant (p < 0.001). Self-educated patients made errors in 100% of cases, while those trained by a doctor made errors in 56.3%. The majority of patients using inhalation devices made errors while using the device. Proper education of patients on correct usage may not only improve control of the symptoms of the disease but might also allow dose reduction in the long term. Copyright © 2014 Elsevier Ltd. All rights reserved.
Decreasing patient identification band errors by standardizing processes.
Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie
2013-04-01
Patient identification (ID) bands are an essential component in patient ID. Quality improvement methodology has been applied as a model to reduce ID band errors, although previous studies have not addressed standardization of ID bands. Our specific aim was to decrease ID band errors by 50% in a 12-month period. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012, with continued audits through June 2012 to confirm the new process was in control. After analysis, the major improvement strategy implemented was standardization of styles of ID bands and labels. Additional interventions included educational initiatives regarding the new ID band processes and disseminating institutional and nursing unit data. A total of 4556 ID bands were audited, with an average pre-improvement ID band error rate of 9.2%. Significant variation in the ID band process was observed, including styles of ID bands. Interventions were focused on standardization of the ID band and labels. The ID band error rate improved to 5.2% in 9 months (95% confidence interval: 2.5-5.5; P < .001) and was maintained for 8 months. Standardization of ID bands and labels in conjunction with other interventions resulted in a statistically significant decrease in ID band error rates. This decrease in ID band error rates was maintained over the subsequent 8 months.
Undergraduate paramedic students cannot do drug calculations.
Eastwood, Kathryn; Boyle, Malcolm J; Williams, Brett
2012-01-01
Previous investigation of the drug calculation skills of qualified paramedics has highlighted poor mathematical ability, with no published studies having been undertaken on undergraduate paramedics. There are three major error classifications. Conceptual errors involve an inability to formulate an equation from the information given, arithmetical errors involve an inability to operate a given equation, and finally computation errors are simple errors of addition, subtraction, division and multiplication. The objective of this study was to determine if undergraduate paramedics at a large Australian university could accurately perform common drug calculations and basic mathematical equations normally required in the workplace. A cross-sectional study using a paper-based questionnaire was conducted with undergraduate paramedic students to collect demographic data, student attitudes regarding their drug calculation performance, and answers to a series of basic mathematical and drug calculation questions. Ethics approval was granted. The mean score of correct answers was 39.5%, with one student scoring 100%, 3.3% of students (n=3) scoring greater than 90%, and 63% (n=58) scoring 50% or less, despite 62% (n=57) of the students stating they 'did not have any drug calculations issues'. On average, those who had completed at least year 12 Specialist Maths achieved scores over 50%. Conceptual errors made up 48.5%, arithmetical 31.1% and computational 17.4%. This study suggests undergraduate paramedics have deficiencies in performing accurate calculations, with conceptual errors indicating a fundamental lack of mathematical understanding. The results suggest an unacceptable level of mathematical competence to practice safely in the unpredictable prehospital environment.
Diagnostic Error in Stroke-Reasons and Proposed Solutions.
Bakradze, Ekaterina; Liberman, Ava L
2018-02-13
We discuss the frequency of stroke misdiagnosis and identify subgroups of stroke at high risk for specific diagnostic errors. In addition, we review common reasons for misdiagnosis and propose solutions to decrease error. According to a recent report by the National Academy of Medicine, most people in the USA are likely to experience a diagnostic error during their lifetimes. Nearly half of such errors result in serious disability and death. Stroke misdiagnosis is a major health care concern, with initial misdiagnosis estimated to occur in 9% of all stroke patients in the emergency setting. Under- or missed diagnosis (false negative) of stroke can result in adverse patient outcomes due to the preclusion of acute treatments and failure to initiate secondary prevention strategies. On the other hand, the overdiagnosis of stroke can result in inappropriate treatment, delayed identification of actual underlying disease, and increased health care costs. Young patients, women, minorities, and patients presenting with non-specific, transient, or posterior circulation stroke symptoms are at increased risk of misdiagnosis. Strategies to decrease diagnostic error in stroke have largely focused on early stroke detection via bedside examination strategies and clinical decision rules. Targeted interventions to improve the diagnostic accuracy of stroke diagnosis among high-risk groups as well as symptom-specific clinical decision supports are needed. There are a number of open questions in the study of stroke misdiagnosis. To improve patient outcomes, existing strategies to improve stroke diagnostic accuracy should be more broadly adopted and novel interventions devised and tested to reduce diagnostic errors.
Roger, Andrew J; Hug, Laura A
2006-01-01
Determining the relationships among and divergence times for the major eukaryotic lineages remains one of the most important and controversial outstanding problems in evolutionary biology. The sequencing and phylogenetic analyses of ribosomal RNA (rRNA) genes led to the first nearly comprehensive phylogenies of eukaryotes in the late 1980s, and supported a view where cellular complexity was acquired during the divergence of extant unicellular eukaryote lineages. More recently, however, refinements in analytical methods coupled with the availability of many additional genes for phylogenetic analysis showed that much of the deep structure of early rRNA trees was artefactual. Recent phylogenetic analyses of multiple genes and the discovery of important molecular and ultrastructural phylogenetic characters have resolved eukaryotic diversity into six major hypothetical groups. Yet relationships among these groups remain poorly understood because of saturation of sequence changes on the billion-year time-scale, possible rapid radiations of major lineages, phylogenetic artefacts and endosymbiotic or lateral gene transfer among eukaryotes. Estimating the divergence dates between the major eukaryote lineages using molecular analyses is even more difficult than phylogenetic estimation. Error in such analyses comes from a myriad of sources including: (i) calibration fossil dates, (ii) the assumed phylogenetic tree, (iii) the nucleotide or amino acid substitution model, (iv) substitution number (branch length) estimates, (v) the model of how rates of evolution change over the tree, (vi) error inherent in the time estimates for a given model and (vii) how multiple gene data are treated. By reanalysing datasets from recently published molecular clock studies, we show that when errors from these various sources are properly accounted for, the confidence intervals on inferred dates can be very large. Furthermore, estimated dates of divergence vary hugely depending on the methods used and their assumptions. Accurate dating of divergence times among the major eukaryote lineages will require a robust tree of eukaryotes, a much richer Proterozoic fossil record of microbial eukaryotes assignable to extant groups for calibration, more sophisticated relaxed molecular clock methods and many more genes sampled from the full diversity of microbial eukaryotes. PMID:16754613
Burillo, Almudena; Rodríguez-Sánchez, Belén; Ramiro, Ana; Cercenado, Emilia; Rodríguez-Créixems, Marta; Bouza, Emilio
2014-01-01
Microbiological confirmation of a urinary tract infection (UTI) takes 24-48 h. In the meantime, patients are usually given empirical antibiotics, sometimes inappropriately. We assessed the feasibility of sequentially performing a Gram stain and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) on urine samples to anticipate clinically useful information. In May-June 2012, we randomly selected 1000 urine samples from patients with suspected UTI. All were Gram stained and those yielding bacteria of a single morphotype were processed for MALDI-TOF MS. Our sequential algorithm was correlated with the standard semiquantitative urine culture result as follows: Match, the information provided was anticipative of the culture result; Minor error, the information provided was partially anticipative of the culture result; Major error, the information provided was incorrect, potentially leading to inappropriate changes in antimicrobial therapy. A positive culture was obtained in 242/1000 samples. The Gram stain revealed a single morphotype in 207 samples, which were subjected to MALDI-TOF MS. The diagnostic performance of the Gram stain was: sensitivity (Se) 81.3%, specificity (Sp) 93.2%, positive predictive value (PPV) 81.3%, negative predictive value (NPV) 93.2%, positive likelihood ratio (+LR) 11.91, negative likelihood ratio (-LR) 0.20 and accuracy 90.0%, while that of MALDI-TOF MS was: Se 79.2%, Sp 73.5%, +LR 2.99, -LR 0.28 and accuracy 78.3%. The use of both techniques provided information anticipative of the culture result in 82.7% of cases, information with minor errors in 13.4% and information with major errors in 3.9%. Results were available within 1 h. Our serial algorithm provided information that was consistent or showed minor errors for 96.1% of urine samples from patients with suspected UTI. The clinical impacts of this rapid UTI diagnosis strategy need to be assessed through indicators of adequacy of treatment such as a reduced time to appropriate empirical treatment or earlier withdrawal of unnecessary antibiotics.
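The performance figures quoted above follow from the standard 2x2-table definitions. As a quick reference, here is a sketch computing them (the naming conventions are mine):

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Standard test metrics from a 2x2 table against the reference
    (here, the semiquantitative urine culture) result."""
    se = tp / (tp + fn)                      # sensitivity
    sp = tn / (tn + fp)                      # specificity
    ppv = tp / (tp + fp)                     # positive predictive value
    npv = tn / (tn + fn)                     # negative predictive value
    lr_pos = se / (1 - sp)                   # positive likelihood ratio
    lr_neg = (1 - se) / sp                   # negative likelihood ratio
    acc = (tp + tn) / (tp + fp + fn + tn)
    return dict(Se=se, Sp=sp, PPV=ppv, NPV=npv,
                pos_LR=lr_pos, neg_LR=lr_neg, accuracy=acc)
```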
Should Studies of Diabetes Treatment Stratification Correct for Baseline HbA1c?
Jones, Angus G.; Lonergan, Mike; Henley, William E.; Pearson, Ewan R.; Hattersley, Andrew T.; Shields, Beverley M.
2016-01-01
Aims Baseline HbA1c is a major predictor of response to glucose lowering therapy and therefore a potential confounder in studies aiming to identify other predictors. However, baseline adjustment may introduce error if the association between baseline HbA1c and response is substantially due to measurement error and regression to the mean. We aimed to determine whether studies of predictors of response should adjust for baseline HbA1c. Methods We assessed the relationship between baseline HbA1c and glycaemic response in 257 participants treated with GLP-1R agonists and assessed whether it reflected measurement error and regression to the mean using duplicate ‘pre-baseline’ HbA1c measurements not included in the response variable. In this cohort and an additional 2659 participants treated with sulfonylureas we assessed the relationship between covariates associated with baseline HbA1c and treatment response with and without baseline adjustment, and with a bias correction using pre-baseline HbA1c to adjust for the effects of error in baseline HbA1c. Results Baseline HbA1c was a major predictor of response (R² = 0.19, β = -0.44, p < 0.001). The association between pre-baseline HbA1c and response was similar, suggesting that the greater response at higher baseline HbA1c values is not mainly due to measurement error and subsequent regression to the mean. In unadjusted analyses in both cohorts, factors associated with baseline HbA1c were associated with response; however, these associations were weak or absent after adjustment for baseline HbA1c. Bias correction did not substantially alter associations. Conclusions Adjustment for the baseline HbA1c measurement is a simple and effective way to reduce bias in studies of predictors of response to glucose lowering therapy. PMID:27050911
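The logic of the duplicate pre-baseline measurement can be illustrated with a small simulation. The sketch below uses assumed values for the assay error and the true response slope: regressing response on the measured baseline gives an exaggerated slope, because the baseline's measurement error also enters the response with opposite sign, while regressing on the independent pre-baseline gives a slope that is, if anything, attenuated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
true_base = rng.normal(9.0, 1.5, n)              # true baseline HbA1c (%)
err = 0.4                                        # assay SD (assumed)
baseline = true_base + rng.normal(0, err, n)     # measured baseline
pre_base = true_base + rng.normal(0, err, n)     # duplicate 'pre-baseline'
true_drop = -0.4 * (true_base - 9.0) - 1.0       # real response depends on true HbA1c
followup = true_base + true_drop + rng.normal(0, err, n)
response = followup - baseline                   # shares the baseline's error

slope = lambda x, y: np.polyfit(x, y, 1)[0]
print("slope on baseline:    ", slope(baseline, response))   # exaggerated (shared error)
print("slope on pre-baseline:", slope(pre_base, response))   # attenuated only, no shared error
```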
Correlation methods in optical metrology with state-of-the-art x-ray mirrors
NASA Astrophysics Data System (ADS)
Yashchuk, Valeriy V.; Centers, Gary; Gevorkyan, Gevork S.; Lacey, Ian; Smith, Brian V.
2018-01-01
The development of fully coherent free electron lasers and diffraction-limited storage ring x-ray sources has brought into focus the need for higher-performing x-ray optics with unprecedented tolerances for surface slope and height errors and roughness. For example, the proposed beamlines for the future upgraded Advanced Light Source, ALS-U, require optical elements characterized by a residual slope error of <100 nrad (root-mean-square) and a height error of <1-2 nm (peak-to-valley). These are for optics with a length of up to one meter. However, the current performance of x-ray optical fabrication and metrology generally falls short of these requirements. The major limitation comes from the lack of reliable and efficient surface metrology with the required accuracy and a reasonably high measurement rate, suitable for integration into modern deterministic surface figuring processes. The major problems of current surface metrology relate to the inherent instrumental temporal drifts, systematic errors, and/or an unacceptably high cost, as in the case of interferometry with computer-generated holograms as a reference. In this paper, we discuss experimental methods and approaches, based on correlation analysis, for the acquisition and processing of metrology data developed at the ALS X-Ray Optical Laboratory (XROL). Using an example of surface topography measurements of a state-of-the-art x-ray mirror performed at the XROL, we demonstrate the efficiency of combining the developed experimental correlation methods with the advanced optimal scanning strategy (AOSS) technique. This allows a significant improvement in the accuracy and capacity of the measurements via suppression of the instrumental low-frequency noise, temporal drift, and systematic error in a single measurement run. Practically speaking, implementation of the AOSS technique leads to an increase in the measurement accuracy, as well as the capacity of ex situ metrology, by a factor of about four. The developed method is general and applicable to a broad spectrum of high-accuracy measurements.
Tomasino, Barbara; Marin, Dario; Maieron, Marta; D'Agostini, Serena; Fabbro, Franco; Skrap, Miran; Luzzatti, Claudio
2015-12-01
Neuropsychological data about acquired impairments in reading and writing provide a strong basis for the theoretical framework of the dual-route models. The present study explored the functional neuroanatomy of the reading and spelling processing system. We describe the reading and writing performance of patient CF, an Italian native speaker who developed an extremely selective reading and spelling deficit (his spontaneous speech, oral comprehension, repetition and oral picture naming were almost unimpaired) in processing double letters, associated with surface dyslexia and dysgraphia, following a tumor in the left temporal lobe. In particular, the majority of CF's errors in spelling were phonologically plausible substitutions, errors concerning letter numerosity of consonants, and syllabic phoneme-to-grapheme conversion (PGC) errors. A similar pattern of impairment also emerged in his reading behavior, with a majority of lexical stress errors (the only possible type of surface reading error in the Italian language, due to the extreme regularity of print-to-sound correspondence). CF's neuropsychological profile was combined with structural neuroimaging data, fiber tracking, and functional maps and compared to that of healthy control participants. We related CF's deficit to a dissociation between an impaired ventral/lexical route (as evidenced by a decrease in fractional anisotropy (FA) along the inferior fronto-occipital fasciculus (IFOF)) and a relatively preserved dorsal/phonological route (as evidenced by the largely full integrity of the superior longitudinal fasciculus (SLF)). In terms of functional processing, the lexical-semantic ventral route network was more activated in controls than in CF, while the network supporting the dorsal route was shared by CF and the control participants. Our results are discussed within the theoretical framework of dual-route models of reading and spelling, emphasize the importance of the IFOF both in lexical reading and spelling, and offer a better comprehension of the neurological and functional substrates involved in written language and, in particular, in surface dyslexia and dysgraphia and in doubling/de-doubling consonant sounds and letters. Copyright © 2015 Elsevier Ltd. All rights reserved.
Performance analysis of LDPC codes on OOK terahertz wireless channels
NASA Astrophysics Data System (ADS)
Chun, Liu; Chang, Wang; Jun-Cheng, Cao
2016-02-01
Atmospheric absorption, scattering, and scintillation are the major causes of degraded transmission quality in terahertz (THz) wireless communications. An error-control coding scheme based on low-density parity-check (LDPC) codes with a soft-decision decoding algorithm is proposed to improve the bit-error-rate (BER) performance of an on-off keying (OOK) modulated THz signal propagating through the atmospheric channel. The THz wave propagation characteristics and the atmospheric channel model are set up. Numerical simulations validate the strong performance of LDPC codes against atmospheric fading and demonstrate their great potential for future ultra-high-speed (beyond Gb/s) THz communications. Project supported by the National Key Basic Research Program of China (Grant No. 2014CB339803), the National High Technology Research and Development Program of China (Grant No. 2011AA010205), the National Natural Science Foundation of China (Grant Nos. 61131006, 61321492, and 61204135), the Major National Development Project of Scientific Instrument and Equipment (Grant No. 2011YQ150021), the National Science and Technology Major Project (Grant No. 2011ZX02707), the International Collaboration and Innovation Program on High Mobility Materials Engineering of the Chinese Academy of Sciences, and the Shanghai Municipal Commission of Science and Technology (Grant No. 14530711300).
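As a rough illustration of the coding gain being described, the following Monte Carlo sketch transmits the all-zero codeword of a small random parity-check code over an OOK/AWGN channel and decodes with hard-decision bit flipping. This is a toy stand-in for the paper's soft-decision LDPC decoder, and the code parameters and SNR definition are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 24, 12                          # toy code: length 24, 12 parity checks
H = np.zeros((m, n), dtype=int)
for col in range(n):                   # three parity checks per bit
    H[rng.choice(m, 3, replace=False), col] = 1

def bit_flip_decode(bits, max_iter=20):
    """Gallager bit flipping: repeatedly flip the bits involved in the
    largest number of unsatisfied checks until the syndrome clears."""
    c = bits.copy()
    for _ in range(max_iter):
        syndrome = H @ c % 2
        if not syndrome.any():
            break
        unsat = H.T @ syndrome         # unsatisfied checks touching each bit
        c[unsat == unsat.max()] ^= 1
    return c

snr_db, trials, bit_errors = 6.0, 2000, 0
amp = 1.0                              # OOK pulse amplitude
sigma = amp / np.sqrt(2 * 10 ** (snr_db / 10))   # assumed SNR definition
for _ in range(trials):
    rx = rng.normal(0, sigma, n)       # all-zero codeword: OOK sends no pulse
    hard = (rx > amp / 2).astype(int)  # threshold detector
    bit_errors += bit_flip_decode(hard).sum()
print("coded BER ~", bit_errors / (trials * n))
```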
Scaling fixed-field alternating gradient accelerators with a small orbit excursion.
Machida, Shinji
2009-10-16
A novel scaling type of fixed-field alternating gradient (FFAG) accelerator is proposed that solves the major problems of conventional scaling and nonscaling types. This scaling FFAG accelerator can achieve a much smaller orbit excursion by taking a larger field index k. A triplet focusing structure makes it possible to set the operating point in the second stability region of Hill's equation with a reasonable sensitivity to various errors. The orbit excursion is about 5 times smaller than in a conventional scaling FFAG accelerator and the beam size growth due to typical errors is at most 10%.
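The role of the field index k in shrinking the orbit excursion follows from the standard scaling-FFAG relations (generic relations, not specific to the proposed triplet lattice):

```latex
B(r) = B_0 \left( \frac{r}{r_0} \right)^{k}
\;\Longrightarrow\;
p \,\propto\, r\,B(r) \,\propto\, r^{\,k+1},
\qquad
\frac{r_{\max}}{r_{\min}} = \left( \frac{p_{\max}}{p_{\min}} \right)^{\frac{1}{k+1}},
```

so a larger k compresses the radial range swept during acceleration, which is what the proposal exploits by moving the operating point into the second stability region of Hill's equation.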
Keefer, Patricia; Kidwell, Kelley; Lengyel, Candice; Warrier, Kavita; Wagner, Deborah
2017-01-01
Voluntary medication error reporting is an imperfect resource used to improve the quality of medication administration. It requires judgment by front-line staff to determine how to report enough to identify opportunities to improve patients' safety but not jeopardize that safety by creating a culture of "report fatigue." This study aims to provide information on the interpretability of medication errors and the variability between subgroups of caregivers in the hospital setting. Survey participants included nurses, physicians (trainees and graduates), patients/families, and pharmacists across a large academic health system, including an attached free-standing pediatric hospital. Demographic data and survey responses were collected and analyzed using Fisher's exact test with SAS v9.3. Statistically significant variability existed between the four groups for a majority of the questions. This included all cases designated as administration errors and many, but not all, cases of prescribing events. Commentary provided in the free-text portion of the survey was sub-analyzed and found to be associated with medication allergy reporting and a lack of education surrounding report characteristics. There is significant variability in the threshold to report specific medication errors in the hospital setting. More work needs to be done to further improve the education surrounding error reporting in hospitals for all noted subgroups. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Technology utilization to prevent medication errors.
Forni, Allison; Chu, Hanh T; Fanikos, John
2010-01-01
Medication errors have been increasingly recognized as a major cause of iatrogenic illness, and system-wide improvements have been the focus of prevention efforts. Critically ill patients are particularly vulnerable to injury resulting from medication errors because of the severity of their illness, the need for high-risk medications with a narrow therapeutic index, and the frequent use of intravenous infusions. Health information technology has been identified as a method to reduce medication errors as well as improve the efficiency and quality of care; however, few studies regarding the impact of health information technology have focused on patients in the intensive care unit. Computerized physician order entry and clinical decision support systems can play a crucial role in decreasing errors in the ordering stage of the medication use process by improving the completeness and legibility of orders, alerting physicians to medication allergies and drug interactions, and providing a means for standardization of practice. Electronic surveillance, reminders and alerts identify patients susceptible to an adverse event, communicate critical changes in a patient's condition, and facilitate timely and appropriate treatment. Bar code technology, intravenous infusion safety systems, and electronic medication administration records can target the prevention of errors in medication dispensing and administration where other technologies would not be able to intercept a preventable adverse event. Systems integration and compliance are vital components in the implementation of health information technology and the achievement of a safe medication use process.
Blood specimen labelling errors: Implications for nephrology nursing practice.
Duteau, Jennifer
2014-01-01
Patient safety is the foundation of high-quality health care, as recognized both nationally and worldwide. Patient blood specimen identification is critical in ensuring the delivery of safe and appropriate care. The practice of nephrology nursing involves frequent patient blood specimen withdrawals to treat and monitor kidney disease. A critical review of the literature reveals that incorrect patient identification is one of the major causes of blood specimen labelling errors. Misidentified samples create a serious risk to patient safety, leading to multiple specimen withdrawals, delays in diagnosis, misdiagnosis, incorrect treatment, transfusion reactions, increased length of stay and other negative patient outcomes. Barcode technology has been identified as a preferred method for positive patient identification, decreasing blood specimen labelling errors by as much as 83% (Askeland et al., 2008). The use of a root cause analysis followed by an action plan is one approach to decreasing the occurrence of blood specimen labelling errors. This article will present a review of the evidence-based literature surrounding blood specimen labelling errors, followed by author recommendations for completing a root cause analysis and action plan. A failure modes and effects analysis (FMEA) will be presented as one method to determine root cause, followed by the Ottawa Model of Research Use (OMRU) as a framework for implementation of strategies to reduce blood specimen labelling errors.
Haghighi, Mohammad Hosein Hayavi; Dehghani, Mohammad; Teshnizi, Saeid Hoseini; Mahmoodi, Hamid
2014-01-01
Accurate cause-of-death coding leads to organised and usable death information, but there are factors that influence documentation on death certificates and therefore affect the coding. We reviewed the role of documentation errors in the accuracy of death coding at Shahid Mohammadi Hospital (SMH), Bandar Abbas, Iran. We studied the death certificates of all deceased patients in SMH from October 2010 to March 2011. Researchers determined and coded the underlying cause of death on the death certificates according to the guidelines issued by the World Health Organization in Volume 2 of the International Statistical Classification of Diseases and Related Health Problems-10th revision (ICD-10). The necessary ICD coding rules (such as the General Principle, Rules 1-3, the modification rules and other instructions about death coding) were applied to select the underlying cause of death on each certificate. Demographic details and documentation errors were then extracted. Data were analysed with descriptive statistics and chi-square tests. The accuracy rate of cause-of-death coding was 51.7%; accuracy showed a statistically significant relationship with major documentation errors (p=.001) but not with minor errors. Factors that result in poor-quality cause-of-death coding in SMH are lack of coder training, documentation errors and the unsuitable structure of the death certificates.
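A minimal sketch of the kind of chi-square association test reported, with fabricated counts standing in for the certificate data (the abstract does not give the underlying table):

```python
# Hypothetical cross-tabulation: coding accuracy against the presence of a
# major documentation error on the certificate.
from scipy.stats import chi2_contingency

table = [[120, 60],   # no major documentation error: coded correctly / incorrectly
         [40, 90]]    # major documentation error present (all counts fabricated)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```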
Cost effectiveness of the US Geological Survey stream-gaging program in Alabama
Jeffcoat, H.H.
1987-01-01
A study of the cost effectiveness of the stream-gaging program in Alabama identified data uses and funding sources for 72 surface water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream-gaging records is lost or missing data resulting from streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)
Grogger, P; Sacher, C; Weber, S; Millesi, G; Seemann, R
2018-04-10
Deviations in measuring dentofacial components in a lateral X-ray represent a major hurdle in the subsequent treatment of dysgnathic patients. In a retrospective study, we investigated the most prevalent source of error in the following commonly used cephalometric measurements: the angles Sella-Nasion-Point A (SNA), Sella-Nasion-Point B (SNB) and Point A-Nasion-Point B (ANB); the Wits appraisal; the anteroposterior dysplasia indicator (APDI); and the overbite depth indicator (ODI). Preoperative lateral radiographic images of patients with dentofacial deformities were collected and the landmarks digitally traced by three independent raters. Cephalometric analysis was automatically performed based on 1116 tracings. Error analysis identified the x-coordinate of Point A as the prevalent source of error in all investigated measurements, except SNB, in which it is not incorporated. In SNB, the y-coordinate of Nasion dominated the error variance. SNB showed the lowest inter-rater variation. In addition, our observations confirmed previous studies showing that landmark identification variance follows characteristic error envelopes, here in the largest number of tracings analysed to date. Variance orthogonal to defining planes was of relevance, while variance parallel to planes was not. Taking these findings into account, orthognathic surgeons as well as orthodontists would be able to perform cephalometry more accurately and accomplish better therapeutic results. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy
2015-03-01
Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition; however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This is valid for both of the common surface recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post-processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo matching algorithms. Novel ground truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open-access multi-scale Retinex algorithm to facilitate the stereo matching, and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor for DEM accuracy. We show that a careful selection of the camera-to-object and baseline distance reduces errors in occluded areas and that realistic ground truths help to quantify those errors.
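The ground-truth comparisons described above reduce, at their core, to summary statistics on a DEM-minus-truth difference grid. A minimal sketch with synthetic stand-in surfaces (not the authors' data or code):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(0.0, 5.0, size=(200, 200))          # stand-in bed elevations (mm)
dem = truth + rng.normal(0.2, 0.8, size=truth.shape)   # DEM with a small bias + noise

err = dem - truth
print(f"mean error (bias): {err.mean():.3f} mm")
print(f"std of error:      {err.std():.3f} mm")
print(f"RMSE:              {np.sqrt((err ** 2).mean()):.3f} mm")
```

Separating the bias (systematic error) from the standard deviation (random error) mirrors the distinction the study draws between calibration-induced and matching-induced DEM errors.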
Development and Assessment of a Medication Safety Measurement Program in a Long-Term Care Pharmacy.
Hertig, John B; Hultgren, Kyle E; Parks, Scott; Rondinelli, Rick
2016-02-01
Medication errors continue to be a major issue in the health care system, including in long-term care facilities. While many hospitals and health systems have developed methods to identify, track, and prevent these errors, long-term care facilities historically have not invested in these error-prevention strategies. The objective of this study was two-fold: 1) to develop a set of medication-safety process measures for dispensing in a long-term care pharmacy, and 2) to analyze the data from those measures to determine the relative safety of the process. The study was conducted at In Touch Pharmaceuticals in Valparaiso, Indiana. To assess the safety of the medication-use system, each step was documented using a comprehensive flowchart (process flow map) tool. Once completed and validated, the flowchart was used to complete a "failure modes and effects analysis" (FMEA) identifying ways a process may fail. Operational gaps found during the FMEA were used to identify points of measurement. The research identified a set of eight measures as potential areas of failure; data were then collected on each of these. More than 133,000 medication doses (opportunities for errors) were included in the study during the research time frame (April 1, 2014, to June 4, 2014). Overall, there was an approximate order-entry error rate of 15.26%, with intravenous errors at 0.37%. A total of 21 errors migrated through the entire medication-use system. These 21 errors in 133,000 opportunities resulted in a final check error rate of 0.015%. A comprehensive medication-safety measurement program was designed and assessed. This study demonstrated the ability to detect medication errors in a long-term pharmacy setting, thereby making process improvements measurable. Future, larger, multi-site studies should be completed to test this measurement program.
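The headline rate can be checked directly from the counts given. Since the abstract states only that there were more than 133,000 doses, the quotient below is an upper bound on the true rate and matches the reported 0.015% after rounding:

```python
doses = 133_000      # lower bound on opportunities stated in the abstract
final_errors = 21    # errors that migrated through the entire system
print(f"final-check error rate <= {final_errors / doses:.4%}")  # ~0.0158%
```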
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Suzanne B., E-mail: Suzannne.evans@yale.edu; Yu, James B.; Chagpar, Anees
2012-10-01
Purpose: To analyze error disclosure attitudes of radiation oncologists and to correlate error disclosure beliefs with survey-assessed disclosure behavior. Methods and Materials: With institutional review board exemption, an anonymous online survey was devised. An email invitation was sent to radiation oncologists (American Society for Radiation Oncology [ASTRO] gold medal winners, program directors and chair persons of academic institutions, and former ASTRO lecturers) and residents. A disclosure score was calculated based on the number of full, partial, or no disclosure responses chosen to the vignette-based questions, and correlation was attempted with attitudes toward error disclosure. Results: The survey received 176 responses: 94.8% of respondents considered themselves more likely to disclose in the setting of a serious medical error; 72.7% of respondents did not feel it mattered who was responsible for the error in deciding to disclose, and 3.9% felt more likely to disclose if someone else was responsible; 38.0% of respondents felt that disclosure increased the likelihood of a lawsuit, and 32.4% felt disclosure decreased the likelihood of a lawsuit; 71.6% of respondents felt near misses should not be disclosed; 51.7% thought that minor errors should not be disclosed; 64.7% viewed disclosure as an opportunity for forgiveness from the patient; and 44.6% considered the patient's level of confidence in them to be a factor in disclosure. For a scenario that could be considered a non-harmful error, 78.9% of respondents would not contact the family. Respondents with high disclosure scores were more likely to feel that disclosure was an opportunity for forgiveness (P=.003) and to have never seen major medical errors (P=.004). Conclusions: The surveyed radiation oncologists chose to respond with full disclosure at a high rate, although ideal disclosure practices were not uniformly adhered to beyond the initial decision to disclose the occurrence of the error.
Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu
2017-05-25
Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error introduced by downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method combined with regularization to suppress the systematic and random error effects in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successful using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis shows that the proposed semi-parametric method combined with regularization is efficient for such modelling problems.
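As a generic illustration of the regularization step discussed above (this is plain Tikhonov damping, not the authors' semi-parametric scheme), the sketch below shows how a regularization parameter suppresses random-error amplification when inverting an ill-conditioned operator, a synthetic stand-in for the downward-continuation problem:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Synthetic ill-conditioned forward operator (singular values 1 .. 1e-6)
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = U @ np.diag(np.logspace(0, -6, n)) @ V.T

x_true = rng.normal(size=n)
y = A @ x_true + 1e-4 * rng.normal(size=n)   # noisy "flight-altitude" data

def tikhonov(A, y, lam):
    # x = (A^T A + lam I)^{-1} A^T y
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

for lam in (0.0, 1e-10, 1e-6, 1e-3):
    x = tikhonov(A, y, lam) if lam > 0 else np.linalg.lstsq(A, y, rcond=None)[0]
    rel = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"lambda = {lam:8.0e}   relative error = {rel:.3f}")
```

Too little damping leaves the noise amplified; too much biases the solution, which is the systematic-versus-random trade-off the abstract addresses.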
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novak, A; Nyflot, M; Sponseller, P
2014-06-01
Purpose: Radiation treatment planning involves a complex workflow that can make safety improvement efforts challenging. This study utilizes an incident reporting system to identify detection points of near-miss errors, in order to guide our departmental safety improvement efforts. Previous studies have examined where errors arise, but not where they are detected or their patterns. Methods: 1377 incidents were analyzed from a departmental near-miss error reporting system from 3/2012–10/2013. All incidents were prospectively reviewed weekly by a multi-disciplinary team, and assigned a near-miss severity score ranging from 0–4 reflecting potential harm (no harm to critical). A 98-step consensus workflow was used to determine origination and detection points of near-miss errors, categorized into 7 major steps (patient assessment/orders, simulation, contouring/treatment planning, pre-treatment plan checks, therapist/on-treatment review, post-treatment checks, and equipment issues). Categories were compared using ANOVA. Results: In the 7-step workflow, 23% of near-miss errors were detected within the same step in the workflow, while an additional 37% were detected by the next step in the workflow, and 23% were detected two steps downstream. Errors detected further from origination were more severe (p<.001; Figure 1). The most common source of near-miss errors was treatment planning/contouring, with 476 near misses (35%). Of those 476, only 72 (15%) were found before leaving treatment planning, 213 (45%) were found at physics plan checks, and 191 (40%) were caught at the therapist pre-treatment chart review or on portal imaging. Errors that passed through physics plan checks and were detected by therapists were more severe than other errors originating in contouring/treatment planning (1.81 vs 1.33, p<0.001). Conclusion: Errors caught by radiation treatment therapists tend to be more severe than errors caught earlier in the workflow, highlighting the importance of safety checks in dosimetry and physics. We are utilizing our findings to improve manual and automated checklists for dosimetry and physics.
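A hedged sketch of the severity-versus-detection-distance comparison described (one-way ANOVA across detection categories); the severity scores below are fabricated stand-ins for the 0-4 near-miss scale:

```python
from scipy.stats import f_oneway

same_step = [0, 1, 1, 0, 2, 1]               # fabricated severity scores
next_step = [1, 2, 1, 2, 2, 3]
two_steps_downstream = [2, 3, 2, 3, 4, 3]

f_stat, p_value = f_oneway(same_step, next_step, two_steps_downstream)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```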
Reliability of drivers in urban intersections.
Gstalter, Herbert; Fastenmeier, Wolfgang
2010-01-01
The concept of human reliability has been widely used in industrial settings by human factors experts to optimise the person-task fit. Reliability is estimated by the probability that a task will successfully be completed by personnel in a given stage of system operation. Human Reliability Analysis (HRA) is a technique used to calculate human error probabilities as the ratio of errors committed to the number of opportunities for that error. To transfer this notion to the measurement of car driver reliability, the following components are necessary: a taxonomy of driving tasks, a definition of correct behaviour in each of these tasks, a list of errors as deviations from the correct actions and an adequate observation method to register errors and opportunities for these errors. Use of the SAFE task analysis procedure recently made it possible to derive driver errors directly from the normative analysis of behavioural requirements. Driver reliability estimates could be used to compare groups of tasks (e.g. different types of intersections with their respective regulations) as well as groups of drivers or individual drivers' aptitudes. This approach was tested in a field study with 62 drivers of different age groups. The subjects drove an instrumented car and had to complete an urban test route, the main features of which were 18 intersections representing six different driving tasks. The subjects were accompanied by two trained observers who recorded driver errors using standardized observation sheets. Results indicate that error indices often vary with both the age group of drivers and the type of driving task. The highest error indices occurred in the non-signalised intersection tasks and the roundabout, which matches exactly the corresponding ratings of task complexity from the SAFE analysis. A comparison of age groups clearly shows the disadvantage of older drivers, whose error indices in nearly all tasks are significantly higher than those of the other groups. The vast majority of these errors could be explained by high task load in the intersections, as they represent difficult tasks. The discussion shows how reliability estimates can be used in a constructive way to propose changes in car design, intersection layout and regulation, as well as driver training.
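The HRA quantity used above reduces to a simple ratio per task type; a minimal sketch with hypothetical counts:

```python
# Human error probability (HEP) = errors committed / opportunities for error.
errors_by_task = {
    "signalised intersection": (12, 300),       # (errors, opportunities), fabricated
    "non-signalised intersection": (48, 300),
    "roundabout": (41, 250),
}
for task, (errors, opportunities) in errors_by_task.items():
    print(f"{task:28s} HEP = {errors / opportunities:.3f}")
```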
Software Prototyping: Designing Systems for Users.
ERIC Educational Resources Information Center
Spies, Phyllis Bova
1983-01-01
Reports on major change in computer software development process--the prototype model, i.e., implementation of skeletal system that is enhanced during interaction with users. Expensive and unreliable software, software design errors, traditional development approach, resources required for prototyping, success stories, and systems designer's role…
Henneman, Elizabeth A
2017-07-01
The Institute of Medicine (now National Academy of Medicine) reports "To Err Is Human" and "Crossing the Quality Chasm" made explicit 3 previously unappreciated realities: (1) Medical errors are common and result in serious, preventable adverse events; (2) The majority of medical errors are the result of system versus human failures; and (3) It would be impossible for any system to prevent all errors. Given these realities, the role of the nurse in the "near miss" process and as the final safety net for the patient is of paramount importance. The nurse's role in patient safety is described from both a systems perspective and a human factors perspective. Critical care nurses use specific strategies to identify, interrupt, and correct medical errors. Strategies to identify errors include knowing the patient, knowing the plan of care, double-checking, and surveillance. Nursing strategies to interrupt errors include offering assistance, clarifying, and verbally interrupting. Nurses correct errors by persevering, being physically present, reviewing/confirming the plan of care, or involving another nurse or physician. Each of these strategies has implications for education, practice, and research. Surveillance is a key nursing strategy for identifying medical errors and reducing adverse events. Eye-tracking technology is a novel approach for evaluating the surveillance process during common, high-risk processes such as blood transfusion and medication administration. Eye tracking has also been used to examine the impact of interruptions to care caused by bedside alarms as well as by other health care personnel. Findings from this safety-related eye-tracking research provide new insight into effective bedside surveillance and interruption management strategies. ©2017 American Association of Critical-Care Nurses.
van de Plas, Afke; Slikkerveer, Mariëlle; Hoen, Saskia; Schrijnemakers, Rick; Driessen, Johanna; de Vries, Frank; van den Bemt, Patricia
2017-01-01
In this controlled before-after study, the effect of improvements derived from a Lean Six Sigma strategy on parenteral medication administration errors and the potential risk of harm was determined. During the baseline measurement, on the control versus intervention ward, at least one administration error occurred in 14 (74%) and 6 (46%) administrations, with a potential risk of harm in 6 (32%) and 1 (8%) administrations. Most administration errors with a high potential risk of harm occurred in bolus injections: 8 (57%) versus 2 (67%) bolus injections were injected too fast, with a potential risk of harm in 6 (43%) and 1 (33%) bolus injections on the control and intervention ward. Implemented improvement strategies, based on the major causes of too-fast administration of bolus injections, were: substitution of bolus injections by infusions, education, availability of administration information and drug round tabards. Post intervention, on the control ward at least one error was made in 76 (76%) administrations (RR 1.03; CI95:0.77-1.38), with a potential risk of harm in 14 (14%) administrations (RR 0.45; CI95:0.20-1.02). In 40 (68%) administrations on the intervention ward at least one error occurred (RR 1.47; CI95:0.80-2.71), but no administrations were associated with a potential risk of harm. A shift in wrong-duration administration errors from bolus injections to infusions, with a reduction of the potential risk of harm, seems to have occurred on the intervention ward. Although the data are insufficient to prove an effect, Lean Six Sigma was experienced as a suitable strategy to select tailored improvements. Further studies are required to prove the effect of the strategy on parenteral medication administration errors.
Althomali, Talal A.
2018-01-01
Background: Refractive errors are a form of optical defect affecting more than 2.3 billion people worldwide. As refractive errors are a major contributor to mild to moderate vision impairment, assessment of their relative proportion would be helpful in the strategic planning of health programs. Purpose: To determine the pattern of the relative proportion of types of refractive errors among adult candidates seeking laser-assisted refractive correction in a private clinic setting in Saudi Arabia. Methods: The clinical charts of 687 patients (1374 eyes) with a mean age of 27.6 ± 7.5 years who desired laser vision correction and underwent a pre-LASIK work-up were reviewed retrospectively. Refractive errors were classified as myopia, hyperopia and astigmatism. The manifest refraction spherical equivalent (MRSE) was applied to define refractive errors. Outcome Measures: Distribution percentage of the different types of refractive errors: myopia, hyperopia and astigmatism. Results: The mean spherical equivalent for the 1374 eyes was -3.11 ± 2.88 D. Of the 1374 eyes, 91.8% (n = 1262) had myopia, 4.7% (n = 65) had hyperopia and 3.4% (n = 47) had emmetropia with astigmatism. The distribution percentage of astigmatism (cylinder error of ≥ 0.50 D) was 78.5% (1078/1374 eyes), of which 69.1% (994/1374) was low to moderate astigmatism and 9.4% (129/1374) was high astigmatism. Conclusion and Relevance: Among adult candidates seeking laser refractive correction in a private setting in Saudi Arabia, myopia represented the greatest burden, with more than 90% of eyes myopic, compared with hyperopia in nearly 5% of eyes. Astigmatism was present in more than 78% of eyes. PMID:29872484
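A sketch of the classification rule implied by the abstract, assuming the standard MRSE formula (sphere + cylinder/2) and conventional ±0.50 D cut-offs; the exact thresholds for myopia and hyperopia are not stated in the abstract:

```python
def classify(sphere_d, cylinder_d):
    # MRSE = sphere + cylinder / 2 (standard definition; cut-offs assumed)
    mrse = sphere_d + cylinder_d / 2.0
    if mrse <= -0.50:
        refraction = "myopia"
    elif mrse >= 0.50:
        refraction = "hyperopia"
    else:
        refraction = "emmetropia"
    astig = "with astigmatism" if abs(cylinder_d) >= 0.50 else "without astigmatism"
    return f"{refraction} {astig}, MRSE = {mrse:+.2f} D"

print(classify(-3.25, -0.75))   # a typical myopic LASIK candidate
print(classify(+1.50, -0.50))
```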
Bourne, Richard S; Shulman, Rob; Tomlin, Mark; Borthwick, Mark; Berry, Will; Mills, Gary H
2017-04-01
To identify between- and within-profession rater reliability of clinical impact grading for common critical care prescribing error and optimisation cases, and to identify representative clinical impact grades for each case. Design: electronic questionnaire. Setting: five UK NHS Trusts. Participants: 30 critical care healthcare professionals (doctors, pharmacists and nurses). Participants graded the severity of clinical impact (5-point categorical scale) of 50 error and 55 optimisation cases. Main outcome measures: between- and within-profession rater reliability and modal clinical impact grading per case. Between- and within-profession reliability analyses used a linear mixed model and intraclass correlation, respectively. The majority of error and optimisation cases (both 76%) had a modal clinical severity grade of moderate or higher. Error cases: doctors graded clinical impact significantly lower than pharmacists (-0.25; P < 0.001) and nurses (-0.53; P < 0.001), with nurses significantly higher than pharmacists (0.28; P < 0.001). Optimisation cases: doctors graded clinical impact significantly lower than nurses and pharmacists (-0.39 and -0.5; P < 0.001, respectively). Within-profession reliability grading was excellent for pharmacists (0.88 and 0.89; P < 0.001) and doctors (0.79 and 0.83; P < 0.001) but only fair to good for nurses (0.43 and 0.74; P < 0.001), for optimisation and error cases, respectively. Representative clinical impact grades for over 100 common prescribing error and optimisation cases are reported for potential clinical practice and research application. The between-profession variability highlights the importance of multidisciplinary perspectives in the assessment of medication error and optimisation cases in clinical practice and research. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Gilbar, Peter; Chambers, Carole R; Larizza, Maria
2015-02-01
The risk of medication errors with vincristine administration is well documented. Our objective was to ascertain how vincristine is administered worldwide and determine what strategies for preventing the accidental intrathecal administration of vincristine are in place. A survey, comprising 28 questions, was distributed to 363 International Society of Oncology Pharmacy Practitioners members from 42 countries via email. Questions were asked on methods of vincristine administration, intrathecal drug administration and strategies used to prevent medication errors. A reminder was sent and the survey was available on the International Society of Oncology Pharmacy Practitioners website. Only one survey per institution was requested. In all, 62 responses from 15 countries were received, with the majority from Australia. Vincristine was dispensed in mini-bags in 77.4% of centres, though some also used syringes. Syringes were used in 31.1% of centres, with half these doses prepared undiluted. Administration took 5 to 15 minutes in most centres (78.8%). The most common reasons for still using syringes were perceived risk of extravasation and faster infusion time. Despite numerous vincristine administrations, extravasation was very rare. Other recommended strategies for error prevention were in use in the majority of centres. Comparisons with three previous surveys are difficult as the majority of respondents in those studies were from the USA. A number of areas appear to have improved, particularly the preparation of vincristine in mini-bags, but they are far from perfect. Deaths continue to occur following accidental intrathecal administration of vincristine. International Society of Oncology Pharmacy Practitioner members are urged to lead the way in incorporating strategies for prevention into institutions worldwide. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Patient safety culture among nurses.
Ammouri, A A; Tailakh, A K; Muliira, J K; Geethakrishnan, R; Al Kindi, S N
2015-03-01
Patient safety is considered to be crucial to healthcare quality and is one of the major parameters monitored by all healthcare organizations around the world. Nurses play a vital role in maintaining and promoting patient safety due to the nature of their work. The purpose of this study was to investigate nurses' perceptions about patient safety culture and to identify the factors that need to be emphasized in order to develop and maintain the culture of safety among nurses in Oman. A descriptive and cross-sectional design was used. Patient safety culture was assessed by using the Hospital Survey on Patient Safety Culture among 414 registered nurses working in four major governmental hospitals in Oman. Descriptive statistics and general linear regression were employed to assess the association between patient safety culture and demographic variables. Nurses who perceived more supervisor or manager expectations, feedback and communications about errors, teamwork across hospital units, and hospital handoffs and transitions had more overall perception of patient safety. Nurses who perceived more teamwork within units and more feedback and communications about errors had more frequency of events reported. Furthermore, nurses who had more years of experience and were working in teaching hospitals had more perception of patient safety culture. Learning and continuous improvement, hospital management support, supervisor/manager expectations, feedback and communications about error, teamwork, hospital handoffs and transitions were found to be major patient safety culture predictors. Investing in practices and systems that focus on improving these aspects is likely to enhance the culture of patient safety in Omani hospitals and others like them. Strategies to nurture patient safety culture in Omani hospitals should focus upon building leadership capacity that support open communication, blame free, team work and continuous organizational learning. © 2014 International Council of Nurses.
High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link
NASA Technical Reports Server (NTRS)
Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli
2016-01-01
We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built. It included a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration at a 622 Mbps data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 x 10^-15 with a 10 second averaging time. Ranging and range-rate performance as a function of the bit error rate of the communication link is reported; they are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 x 10^-15 with a 10 second averaging time. We identified the major noise sources in the current system as transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve performance in both operating modes.
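For context, the sketch below computes the simpler overlapping Allan deviation from phase (time-error) samples; the paper quotes the modified Allan deviation, which additionally averages the phase within each window, and the data here are synthetic:

```python
import numpy as np

def overlapping_adev(x, tau0, m):
    # sigma_y(m*tau0) from phase data x:
    # avar = sum (x[i+2m] - 2 x[i+m] + x[i])^2 / (2 (m tau0)^2 (N - 2m))
    x = np.asarray(x, dtype=float)
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
    avar = (d2 ** 2).sum() / (2.0 * (m * tau0) ** 2 * d2.size)
    return np.sqrt(avar)

rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(0.0, 1e-12, 10_000))  # synthetic phase record, seconds
for m in (1, 10, 100):
    print(f"tau = {m:4d} s   ADEV = {overlapping_adev(x, 1.0, m):.2e}")
```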
System identification for modeling for control of flexible structures
NASA Technical Reports Server (NTRS)
Mettler, Edward; Milman, Mark
1986-01-01
The major components of a design and operational flight strategy for flexible structure control systems are presented. In this strategy an initial distributed parameter control design is developed and implemented from available ground test data and on-orbit identification using sophisticated modeling and synthesis techniques. The reliability of this high performance controller is directly linked to the accuracy of the parameters on which the design is based. Because uncertainties inevitably grow without system monitoring, maintaining the control system requires an active on-line system identification function to supply parameter updates and covariance information. Control laws can then be modified to improve performance when the error envelopes are decreased. In terms of system safety and stability, the covariance information is as important as the parameter values themselves. If the on-line system ID function detects an increase in parameter error covariances, then corresponding adjustments must be made in the control laws to increase robustness. If the error covariances exceed some threshold, an autonomous calibration sequence could be initiated to restore the error envelopes to an acceptable level.
Step-by-step magic state encoding for efficient fault-tolerant quantum computation
Goto, Hayato
2014-01-01
Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation. PMID:25511387
Tracking Progress in Improving Diagnosis: A Framework for Defining Undesirable Diagnostic Events.
Olson, Andrew P J; Graber, Mark L; Singh, Hardeep
2018-01-29
Diagnostic error is a prevalent, harmful, and costly phenomenon. Multiple national health care and governmental organizations have recently identified the need to improve diagnostic safety as a high priority. A major barrier, however, is the lack of standardized, reliable methods for measuring diagnostic safety. Given the absence of reliable and valid measures for diagnostic errors, we need methods to help establish some type of baseline diagnostic performance across health systems, as well as to enable researchers and health systems to determine the impact of interventions for improving the diagnostic process. Multiple approaches have been suggested but none widely adopted. We propose a new framework for identifying "undesirable diagnostic events" (UDEs) that health systems, professional organizations, and researchers could further define and develop to enable standardized measurement and reporting related to diagnostic safety. We propose an outline for UDEs that identifies both conditions prone to diagnostic error and the contexts of care in which these errors are likely to occur. Refinement and adoption of this framework across health systems can facilitate standardized measurement and reporting of diagnostic safety.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weston, Louise Marie
2007-09-01
A recent report on criticality accidents in nuclear facilities indicates that human error played a major role in a significant number of incidents with serious consequences and that some of these human errors may be related to the emotional state of the individual. A pre-shift test to detect a deleterious emotional state could reduce the occurrence of such errors in critical operations. The effectiveness of pre-shift testing is a challenge because of the need to gather predictive data in a relatively short test period and the potential occurrence of learning effects due to a requirement for frequent testing. This report reviews the different types of reliability and validity methods and testing and statistical analysis procedures to validate measures of emotional state. The ultimate value of a validation study depends upon the percentage of human errors in critical operations that are due to the emotional state of the individual. A review of the literature to identify the most promising predictors of emotional state for this application is highly recommended.
Richards, Emilie J; Brown, Jeremy M; Barley, Anthony J; Chong, Rebecca A; Thomson, Robert C
2018-02-19
The use of large genomic datasets in phylogenetics has highlighted extensive topological variation across genes. Much of this discordance is assumed to result from biological processes. However, variation among gene trees can also be a consequence of systematic error driven by poor model fit, and the relative importance of biological versus methodological factors in explaining gene tree variation is a major unresolved question. Using mitochondrial genomes to control for biological causes of gene tree variation, we estimate the extent of gene tree discordance driven by systematic error and employ posterior prediction to highlight the role of model fit in producing this discordance. We find that the amount of discordance among mitochondrial gene trees is similar to the amount of discordance found in other studies that assume only biological causes of variation. This similarity suggests that the role of systematic error in generating gene tree variation is underappreciated and critical evaluation of fit between assumed models and the data used for inference is important for the resolution of unresolved phylogenetic questions.
Thierry-Chef, I; Pernicka, F; Marshall, M; Cardis, E; Andreo, P
2002-01-01
An international collaborative study of cancer risk among workers in the nuclear industry is under way to estimate directly the cancer risk following protracted low-dose exposure to ionising radiation. An essential aspect of this study is the characterisation and quantification of errors in available dose estimates. One major source of error is dosemeter response in workplace exposure conditions. Little information is available on the energy and geometry response for most of the 124 different dosemeters used historically in participating facilities. Experiments were therefore set up to assess this, using 10 dosemeter types representative of those used over time. Results show that the largest errors were associated with the response of early dosemeters to low-energy photon radiation. Good response was found with modern dosemeters, even at low energy. These results are being used to estimate errors in the response for each dosemeter type used in the participating facilities, so that these can be taken into account in the estimates of cancer risk.
NASA Technical Reports Server (NTRS)
Lienert, Barry R.
1991-01-01
Monte Carlo perturbations of synthetic tensors are used to evaluate the Hext/Jelinek elliptical confidence regions for anisotropy of magnetic susceptibility (AMS) eigenvectors. When the perturbations are 33 percent of the minimum anisotropy, both the shapes and probability densities of the resulting eigenvector distributions agree with the elliptical distributions predicted by the Hext/Jelinek equations. When the perturbation size is increased to 100 percent of the minimum eigenvalue difference, the major axis of the 95 percent confidence ellipse underestimates the observed eigenvector dispersion by about 10 deg. The observed distributions of the principal susceptibilities (eigenvalues) are close to normal, with standard errors that agree well with the calculated Hext/Jelinek errors. The Hext/Jelinek ellipses are also able to describe the AMS dispersions due to instrumental noise and provide reasonable limits for the AMS dispersions observed in two Hawaiian basaltic dikes. It is concluded that the Hext/Jelinek method provides a satisfactory description of the errors in AMS data and should be a standard part of any AMS data analysis.
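A minimal sketch of the Monte Carlo procedure described: perturb a synthetic symmetric susceptibility tensor and collect the angular scatter of its maximum-susceptibility eigenvector. The tensor values and noise level are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)
K = np.diag([1.05, 1.00, 0.95])         # synthetic AMS tensor in its principal frame

max_axes = []
for _ in range(2000):
    N = rng.normal(0.0, 0.01, (3, 3))   # perturbation, a fraction of the anisotropy
    Kp = K + (N + N.T) / 2.0            # symmetrise so Kp stays a valid tensor
    w, v = np.linalg.eigh(Kp)
    max_axes.append(v[:, np.argmax(w)])

# Angular deviation of the k_max axis from its true direction [1, 0, 0]
cosines = np.clip(np.abs(np.array(max_axes) @ np.array([1.0, 0.0, 0.0])), 0.0, 1.0)
angles = np.degrees(np.arccos(cosines))
print(f"mean angular deviation of k_max: {angles.mean():.2f} deg")
```

Comparing such empirical scatter against the analytic confidence ellipses is the essence of the test reported above.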
The relevance of error analysis in graphical symbols evaluation.
Piamonte, D P
1999-01-01
In an increasing number of modern tools and devices, small graphical symbols appear simultaneously in sets as parts of the human-machine interfaces. The presence of each symbol can influence the other's recognizability and correct association to its intended referents. Thus, aside from correct associations, it is equally important to perform certain error analysis of the wrong answers, misses, confusions, and even lack of answers. This research aimed to show how such error analyses could be valuable in evaluating graphical symbols especially across potentially different user groups. The study tested 3 sets of icons representing 7 videophone functions. The methods involved parameters such as hits, confusions, missing values, and misses. The association tests showed similar hit rates of most symbols across the majority of the participant groups. However, exploring the error patterns helped detect differences in the graphical symbols' performances between participant groups, which otherwise seemed to have similar levels of recognition. These are very valuable not only in determining the symbols to be retained, replaced or re-designed, but also in formulating instructions and other aids in learning to use new products faster and more satisfactorily.
Species-area relationships and extinction forecasts.
Halley, John M; Sgardeli, Vasiliki; Monokrousos, Nikolaos
2013-05-01
The species-area relationship (SAR) predicts that smaller areas contain fewer species. This is the basis of the SAR method that has been used to forecast large numbers of species committed to extinction every year due to deforestation. The method has a number of issues that must be handled with care to avoid error. These include the functional form of the SAR, the choice of equation parameters, the sampling procedure used, extinction debt, and forest regeneration. Concerns about the accuracy of the SAR technique often cite errors not much larger than the natural scatter of the SAR itself. Such errors do not undermine the credibility of forecasts predicting large numbers of extinctions, although they may be a serious obstacle in other SAR applications. Very large errors can arise from misinterpretation of extinction debt, inappropriate functional form, and ignoring forest regeneration. Major challenges remain to understand better the relationship between sampling protocol and the functional form of SARs and the dynamics of relaxation, especially in continental areas, and to widen the testing of extinction forecasts. © 2013 New York Academy of Sciences.
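The core SAR arithmetic behind such forecasts is short enough to state exactly: with S = cA^z, the fraction of species surviving a loss of habitat area is (A_new/A_old)^z. The exponent z = 0.25 below is a conventional illustrative value, not one taken from the abstract:

```python
def species_remaining_fraction(area_fraction_left, z=0.25):
    # From S = c * A**z: S_new / S_old = (A_new / A_old)**z
    return area_fraction_left ** z

for lost in (0.1, 0.5, 0.9):
    frac = species_remaining_fraction(1.0 - lost)
    print(f"{lost:.0%} habitat lost -> {1 - frac:.1%} of species committed to extinction")
```

Note that "committed to extinction" includes the extinction debt the abstract warns about: species counted here may persist for a long relaxation period before disappearing.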
Wilf-Miron, R; Lewenhoff, I; Benyamini, Z; Aviram, A
2003-02-01
The development of a medical risk management programme based on the aviation safety approach and its implementation in a large ambulatory healthcare organisation is described. The following key safety principles were applied: (1). errors inevitably occur and usually derive from faulty system design, not from negligence; (2). accident prevention should be an ongoing process based on open and full reporting; (3). major accidents are only the "tip of the iceberg" of processes that indicate possibilities for organisational learning. Reporting physicians were granted immunity, which encouraged open reporting of errors. A telephone "hotline" served the medical staff for direct reporting and receipt of emotional support and medical guidance. Any adverse event which had learning potential was debriefed, while focusing on the human cause of error within a systemic context. Specific recommendations were formulated to rectify processes conducive to error when failures were identified. During the first 5 years of implementation, the aviation safety concept and tools were successfully adapted to ambulatory care, fostering a culture of greater concern for patient safety through risk management while providing support to the medical staff.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bojechko, Casey; Phillps, Mark; Kalet, Alan
Purpose: Complex treatments in radiation therapy require robust verification in order to prevent errors that can adversely affect the patient. For this purpose, the authors estimate the effectiveness of detecting errors with a "defense in depth" system composed of electronic portal imaging device (EPID) based dosimetry and a software-based system composed of rules-based and Bayesian network verifications. Methods: The authors analyzed incidents with a high potential severity score, scored as a 3 or 4 on a 4-point scale, recorded in an in-house voluntary incident reporting system, collected from February 2012 to August 2014. The incidents were categorized into different failure modes. The detectability, defined as the number of incidents that are detectable divided by the total number of incidents, was calculated for each failure mode. Results: In total, 343 incidents were used in this study. Of the incidents, 67% were related to photon external beam therapy (EBRT). The majority of the EBRT incidents were related to patient positioning and only a small number of these could be detected by EPID dosimetry when performed prior to treatment (6%). A large fraction could be detected by in vivo dosimetry performed during the first fraction (74%). Rules-based and Bayesian network verifications were found to be complementary to EPID dosimetry, able to detect errors related to patient prescriptions and documentation, and errors unrelated to photon EBRT. Combining all of the verification steps together, 91% of all EBRT incidents could be detected. Conclusions: This study shows that the defense in depth system is potentially able to detect a large majority of incidents. The most effective EPID-based dosimetry verification is in vivo measurement during the first fraction, complemented by rules-based and Bayesian network plan checking.
Overview of medical errors and adverse events
2012-01-01
Safety is a global concept that encompasses efficiency, security of care, reactivity of caregivers, and satisfaction of patients and relatives. Patient safety has emerged as a major target for healthcare improvement. Quality assurance is a complex task, and patients in the intensive care unit (ICU) are more likely than other hospitalized patients to experience medical errors, due to the complexity of their conditions, need for urgent interventions, and considerable workload fluctuation. Medication errors are the most common medical errors and can induce adverse events. Two approaches are available for evaluating and improving quality-of-care: the room-for-improvement model, in which problems are identified, plans are made to resolve them, and the results of the plans are measured; and the monitoring model, in which quality indicators are defined as relevant to potential problems and then monitored periodically. Indicators that reflect structures, processes, or outcomes have been developed by medical societies. Surveillance of these indicators is organized at the hospital or national level. Using a combination of methods improves the results. Errors are caused by combinations of human factors and system factors, and information must be obtained on how people make errors in the ICU environment. Preventive strategies are more likely to be effective if they rely on a system-based approach, in which organizational flaws are remedied, rather than a human-based approach of encouraging people not to make errors. The development of a safety culture in the ICU is crucial to effective prevention and should occur before the evaluation of safety programs, which are more likely to be effective when they involve bundles of measures. PMID:22339769
Undergraduate paramedic students cannot do drug calculations
Eastwood, Kathryn; Boyle, Malcolm J; Williams, Brett
2012-01-01
BACKGROUND: Previous investigation of the drug calculation skills of qualified paramedics has highlighted poor mathematical ability, and no published studies have been undertaken on undergraduate paramedics. There are three major error classifications. Conceptual errors involve an inability to formulate an equation from the information given, arithmetical errors involve an inability to operate a given equation, and computation errors are simple errors of addition, subtraction, division and multiplication. The objective of this study was to determine whether undergraduate paramedics at a large Australian university could accurately perform common drug calculations and the basic mathematical equations normally required in the workplace. METHODS: A cross-sectional study using a paper-based questionnaire was administered to undergraduate paramedic students to collect demographic data, student attitudes regarding their drug calculation performance, and answers to a series of basic mathematical and drug calculation questions. Ethics approval was granted. RESULTS: The mean score of correct answers was 39.5%, with one student scoring 100%, 3.3% of students (n=3) scoring greater than 90%, and 63% (n=58) scoring 50% or less, despite 62% (n=57) of the students stating they ‘did not have any drug calculation issues’. On average, those who had completed a minimum of year 12 Specialist Maths achieved scores over 50%. Conceptual errors made up 48.5% of all errors, arithmetical errors 31.1% and computational errors 17.4%. CONCLUSIONS: This study suggests undergraduate paramedics have deficiencies in performing accurate calculations, with conceptual errors indicating a fundamental lack of mathematical understanding. The results suggest an unacceptable level of mathematical competence to practise safely in the unpredictable prehospital environment. PMID:25215067
Tokuda, Yasuharu; Kishida, Naoki; Konishi, Ryota; Koizumi, Shunzo
2011-03-01
Cognitive errors in the course of clinical decision-making are prevalent in many cases of medical injury. We used information on the verdicts' judgments from closed claims files to determine the important cognitive factors associated with cases of medical injury. Data were collected from claims closed between 2001 and 2005 at district courts in Tokyo and Osaka, Japan. In each case, we recorded all the contributory cognitive, systemic, and patient-related factors judged in the verdicts to be causally related to the medical injury. We also analyzed the association between cognitive factors and cases involving paid compensation using a multivariable logistic regression model. Among 274 cases (mean age 49 years; 45% women), there were 122 (45%) deaths and 67 (24%) major injuries (incomplete recovery within a year). In 103 cases (38%), the verdicts ordered hospitals to pay compensation (median 8,000,000 Japanese yen). An error in judgment (199/274, 73%) and failure of vigilance (177/274, 65%) were the most prevalent causative cognitive factors, and error in judgment was also significantly associated with paid compensation (odds ratio, 1.9; 95% confidence interval [CI], 1.0-3.4). Systemic causative factors, including poor teamwork (11/274, 4%) and technology failure (5/274, 2%), were less common. This closed claims analysis based on the verdicts' judgments showed that cognitive errors were common in cases of medical injury, with an error in judgment being most prevalent and closely associated with compensation payment. Reduction of this type of error is required to produce safer healthcare. © 2010 Society of Hospital Medicine.
Pezzetta, Rachele; Nicolardi, Valentina; Tidoni, Emmanuele; Aglioti, Salvatore Maria
2018-06-06
Detecting errors in one's own actions, and in the actions of others, is a crucial ability for adaptable and flexible behavior. Studies show that specific EEG signatures underpin the monitoring of observed erroneous actions (error-related negativity, error-positivity, mid-frontal theta oscillations). However, the majority of studies on action observation used sequences of trials where erroneous actions were less frequent than correct actions. Therefore, it was not possible to disentangle whether the activation of the performance monitoring system was due to an error - as a violation of the intended goal - or to a surprise/novelty effect associated with a rare and unexpected event. Combining EEG and immersive virtual reality (IVR-CAVE system), we recorded the neural signal of 25 young adults who observed, from a first-person perspective, simple reach-to-grasp actions performed by an avatar aiming for a glass. Importantly, the proportion of erroneous actions was higher than that of correct actions. Results showed that the observation of erroneous actions elicits the typical electro-cortical signatures of error monitoring, and therefore the violation of the action goal is still perceived as a salient event. The observation of correct actions elicited stronger alpha suppression, confirming the role of the alpha frequency band in the general orienting response to novel and infrequent stimuli. Our data provide novel evidence that an observed goal error (the action slip) triggers the activity of the performance monitoring system even when erroneous actions occur more often than correct actions and thus are not salient by virtue of their rarity.
Evaluation of Preanalytical Quality Indicators by Six Sigma and Pareto's Principle.
Kulkarni, Sweta; Ramesh, R; Srinivasan, A R; Silvia, C R Wilma Delphine
2018-01-01
Preanalytical steps are the major source of error in the clinical laboratory. Analytical errors can be corrected by quality control procedures, but stringent quality checks are also needed in the preanalytical area because these processes take place outside the laboratory. The sigma value depicts the performance of a laboratory and its quality measures. Hence, in the present study, Six Sigma and the Pareto principle were applied to preanalytical quality indicators to evaluate clinical biochemistry laboratory performance. This observational study was carried out over a period of 1 year, from November 2015 to November 2016. A total of 144,208 samples and 54,265 test requisition forms were screened for preanalytical errors, such as missing patient information or sample collection details on the forms and hemolysed, lipemic, inappropriate or insufficient samples; the total number of errors was calculated and converted into defects per million and onto the sigma scale. A Pareto chart was drawn using the total number of errors and cumulative percentages. In 75% of test requisition forms the diagnosis was not mentioned, giving a sigma value of 0.9; for other errors such as sample receiving time, stat requests and sample type, the sigma values were 2.9, 2.6, and 2.8, respectively. For insufficient samples and an improper blood-to-anticoagulant ratio, the sigma value was 4.3. The Pareto chart showed that roughly 80% of the errors on requisition forms were contributed by about 20% of the error types, chiefly missing information such as the diagnosis. The development of quality indicators and the application of Six Sigma and the Pareto principle are quality measures by which not only the preanalytical phase but the total testing process can be improved.
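To make the sigma arithmetic described above concrete, here is a minimal Python sketch, with invented error counts rather than the study's data, that converts per-indicator error counts into defects per million opportunities (DPMO) and an approximate long-term sigma level (using the conventional 1.5-sigma shift), together with the cumulative percentages that form the y-axis of a Pareto chart.

```python
# Invented illustration of Six Sigma / Pareto arithmetic for preanalytical
# quality indicators; the counts below are hypothetical, not the study's data.
from scipy.stats import norm

def sigma_level(defects, opportunities):
    """Return DPMO and the long-term sigma level (1.5-sigma shift)."""
    dpmo = defects / opportunities * 1_000_000
    return dpmo, norm.ppf(1 - dpmo / 1_000_000) + 1.5

forms = 54_265                                # opportunities (requisition forms)
errors = {"diagnosis missing": 40_700, "stat misuse": 2_300,
          "wrong sample type": 1_500, "receiving time missing": 1_100}

cum, total = 0, sum(errors.values())
for name, n in sorted(errors.items(), key=lambda kv: -kv[1]):
    dpmo, sigma = sigma_level(n, forms)
    cum += n                                  # cumulative % = Pareto-chart y-axis
    print(f"{name:>22}: {dpmo:9.0f} DPMO  sigma={sigma:4.1f}  cum={100 * cum / total:5.1f}%")
```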
Roberts, Rachel M; Davis, Melissa C
2015-01-01
There is a need for an evidence-based approach to training professional psychologists in the administration and scoring of standardized tests such as the Wechsler Adult Intelligence Scale (WAIS), given substantial evidence that these tasks are associated with numerous errors that have the potential to significantly impact clients' lives. Twenty-three post-graduate psychology students underwent training in using the WAIS-IV according to a best-practice teaching model that involved didactic teaching, independent study of the test manual, and in-class practice with teacher supervision and feedback. Video recordings and test protocols from a role-played test administration were analyzed for errors according to a comprehensive checklist, with self, peer, and faculty member reviews. Overall, 91.3% of students were rated as having demonstrated competency in administration and scoring. All students were found to make errors, with substantially more errors detected by the faculty member than by self- or peer-review. Across all subtests, the most frequent errors related to failure to deliver standardized instructions verbatim from the manual. The failure of peer and self-reviews to detect the majority of the errors suggests that novice feedback (self or peers) may be ineffective in eliminating errors, and the use of more senior peers may be preferable. It is suggested that involving senior trainees, recent graduates and/or experienced practitioners in the training of post-graduate students may have benefits for both parties, promoting a peer-learning and continuous professional development approach to the development and maintenance of skills in psychological assessment.
Application of Statistics in Engineering Technology Programs
ERIC Educational Resources Information Center
Zhan, Wei; Fink, Rainer; Fang, Alex
2010-01-01
Statistics is a critical tool for robustness analysis, measurement system error analysis, test data analysis, probabilistic risk assessment, and many other fields in the engineering world. Traditionally, however, statistics is not extensively used in undergraduate engineering technology (ET) programs, resulting in a major disconnect from industry…
Patient identification in blood sampling.
Davidson, Anne; Bolton-Maggs, Paula
The majority of adverse reports relating to blood transfusions result from human error, including misidentification of patients and incorrect labelling of samples. This article outlines best practice in blood sampling for transfusion (but is recommended for all pathology samples) and the role of patient empowerment in improving safety.
Cophasing techniques for extremely large telescopes
NASA Astrophysics Data System (ADS)
Devaney, Nicholas; Schumacher, Achim
2004-07-01
The current designs of the majority of ELTs envisage that at least the primary mirror will be segmented. Phasing of the segments is therefore a major concern, and considerable work is underway to determine the most suitable techniques. The techniques which have been developed are either wave optics generalizations of classical geometric optics tests (e.g. Shack-Hartmann and curvature sensing) or direct interferometric measurements. We present a review of the main techniques proposed for phasing and outline their relative merits. We consider problems which are specific to ELTs, e.g. vignetting of large parts of the primary mirror by the secondary mirror spiders, and the need to disentangle phase errors arising in different segmented mirrors. We present improvements in the Shack-Hartmann and curvature sensing techniques which allow greater precision and range. Finally, we describe a piston plate which simulates segment phasing errors and show the results of laboratory experiments carried out to verify the precision of the Shack-Hartmann technique.
Error Budgeting and Tolerancing of Starshades for Exoplanet Detection
NASA Technical Reports Server (NTRS)
Shaklan, Stuart B.; Noecker, M. Charley; Glassman, Tiffany; Lo, Amy S.; Dumont, Philip J.; Kasdin, N. Jeremy; Cady, Eric J.; Vanderbei, Robert; Lawson, Peter R.
2010-01-01
A flower-like starshade positioned between a star and a space telescope is an attractive option for blocking the starlight to reveal the faint reflected light of an orbiting Earth-like planet. Planet light passes around the petals and directly enters the telescope where it is seen along with a background of scattered light due to starshade imperfections. We list the major perturbations that are expected to impact the performance of a starshade system and show that independent models at NGAS and JPL yield nearly identical optical sensitivities. We give the major sensitivities in the image plane for a design consisting of a 34-m diameter starshade, and a 2-m diameter telescope separated by 39,000 km, operating between 0.25 and 0.55 μm. These sensitivities include individual petal and global shape terms evaluated at the inner working angle. Following a discussion of the combination of individual perturbation terms, we then present an error budget that is consistent with detection of an Earth-like planet 26 magnitudes fainter than its host star.
Kupek, Emil
2002-01-01
Background Frequent use of self-reports for investigating recent and past behavior in medical research requires statistical techniques capable of analyzing complex sources of bias associated with this methodology. In particular, although decreasing accuracy in recalling more distant past events is commonplace, the bias due to the differential memory errors resulting from it has rarely been modeled statistically. Methods Covariance structure analysis was used to estimate the recall error of the self-reported number of sexual partners for past periods of varying duration and its implication for the bias. Results Results indicated increasing levels of inaccuracy in reports about the more distant past. Considerable positive bias was found for a small fraction of respondents who reported ten or more partners in the last year, last two years and last five years. This is consistent with the effect of heteroscedastic random error, where the majority of partners had been acquired in the more distant past and therefore were recalled less accurately than the partners acquired more recently to the time of interviewing. Conclusions Memory errors of this type depend on the salience of the events recalled and are likely to be present in many areas of health research based on self-reported behavior. PMID:12435276
Detecting Silent Data Corruption for Extreme-Scale Applications through Data Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bautista-Gomez, Leonardo; Cappello, Franck
Supercomputers allow scientists to study natural phenomena by means of computer simulations. Next-generation machines are expected to have more components and, at the same time, consume several times less energy per operation. These trends are pushing supercomputer construction to the limits of miniaturization and energy-saving strategies. Consequently, the number of soft errors is expected to increase dramatically in the coming years. While mechanisms are in place to correct or at least detect some soft errors, a significant percentage of those errors pass unnoticed by the hardware. Such silent errors are extremely damaging because they can make applications silently produce wrong results. In this work we propose a technique that leverages certain properties of high-performance computing applications in order to detect silent errors at the application level. Our technique detects corruption solely based on the behavior of the application datasets and is completely application-agnostic. We propose multiple corruption detectors, and we couple them to work together in a fashion transparent to the user. We demonstrate that this strategy can detect the majority of the corruptions, while incurring negligible overhead. We show that with the help of these detectors, applications can have up to 80% of coverage against data corruption.
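As a rough sketch of this idea, detecting corruption purely from the behavior of the application's datasets, the example below predicts each grid point by linear extrapolation from the two previous time steps and flags values that break the bound. The extrapolation-plus-threshold detector and all names are illustrative assumptions, not the authors' implementation.

```python
# Toy application-level silent-data-corruption detector: flag points that
# deviate from a linear extrapolation in time by much more than the typical
# step-to-step change. Illustrative only.
import numpy as np

def detect_corruption(prev2, prev, current, k=4.0):
    """Return indices where `current` breaks the smooth time evolution."""
    predicted = 2.0 * prev - prev2                     # linear extrapolation
    residual = np.abs(current - predicted)
    scale = np.median(np.abs(prev - prev2)) + 1e-12    # typical per-step change
    return np.flatnonzero(residual > k * scale)

# Smooth synthetic field plus one injected bit-flip-like spike.
t = np.linspace(0.0, 1.0, 1000)
field = lambda s: np.sin(2.0 * np.pi * (t + s))
step = field(0.02)
step[137] += 10.0                                      # injected silent error
print(detect_corruption(field(0.00), field(0.01), step))   # -> [137]
```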
Krueger, Joachim I; Funder, David C
2004-06-01
Mainstream social psychology focuses on how people characteristically violate norms of action through social misbehaviors such as conformity with false majority judgments, destructive obedience, and failures to help those in need. Likewise, they are seen to violate norms of reasoning through cognitive errors such as misuse of social information, self-enhancement, and an over-readiness to attribute dispositional characteristics. The causes of this negative research emphasis include the apparent informativeness of norm violation, the status of good behavior and judgment as unconfirmable null hypotheses, and the allure of counter-intuitive findings. The shortcomings of this orientation include frequently erroneous imputations of error, findings of mutually contradictory errors, incoherent interpretations of error, an inability to explain the sources of behavioral or cognitive achievement, and the inhibition of generalized theory. Possible remedies include increased attention to the complete range of behavior and judgmental accomplishment, analytic reforms emphasizing effect sizes and Bayesian inference, and a theoretical paradigm able to account for both the sources of accomplishment and of error. A more balanced social psychology would yield not only a more positive view of human nature, but also an improved understanding of the bases of good behavior and accurate judgment, coherent explanations of occasional lapses, and theoretically grounded suggestions for improvement.
An Interlaboratory Comparison of Dosimetry for a Multi-institutional Radiobiological
Seed, TM; Xiao, S; Manley, N; Nikolich-Zugich, J; Pugh, J; van den Brink, M; Hirabayashi, Y; Yasutomo, K; Iwama, A; Koyasu, S; Shterev, I; Sempowski, G; Macchiarini, F; Nakachi, K; Kunugi, KC; Hammer, CG; DeWerd, LA
2016-01-01
Purpose An interlaboratory comparison of radiation dosimetry was conducted to determine the accuracy of doses being used experimentally for animal exposures within a large multi-institutional research project. The background and approach to this effort are described and discussed in terms of basic findings, problems and solutions. Methods Dosimetry tests were carried out utilizing optically stimulated luminescence (OSL) dosimeters embedded midline into mouse carcasses and thermal luminescence dosimeters (TLD) embedded midline into acrylic phantoms. Results The effort demonstrated that the majority (4/7) of the laboratories were able to deliver sufficiently accurate exposures, having maximum dosing errors of ≤ 5%. Comparable rates of ‘dosimetric compliance’ were noted between OSL- and TLD-based tests. Data analysis showed a highly linear relationship between ‘measured’ and ‘target’ doses, with errors falling largely between 0 and 20%. Outliers were most notable for OSL-based tests, while multiple tests by ‘non-compliant’ laboratories using orthovoltage x-rays contributed heavily to the wide variation in dosing errors. Conclusions For the dosimetrically non-compliant laboratories, the relatively high rates of dosing errors were problematic, potentially compromising the quality of ongoing radiobiological research. This dosimetry effort proved to be instructive in establishing rigorous reviews of basic dosimetry protocols ensuring that dosing errors were minimized. PMID:26857121
Seed, Thomas M; Xiao, Shiyun; Manley, Nancy; Nikolich-Zugich, Janko; Pugh, Jason; Van den Brink, Marcel; Hirabayashi, Yoko; Yasutomo, Koji; Iwama, Atsushi; Koyasu, Shigeo; Shterev, Ivo; Sempowski, Gregory; Macchiarini, Francesca; Nakachi, Kei; Kunugi, Keith C; Hammer, Clifford G; Dewerd, Lawrence A
2016-01-01
An interlaboratory comparison of radiation dosimetry was conducted to determine the accuracy of doses being used experimentally for animal exposures within a large multi-institutional research project. The background and approach to this effort are described and discussed in terms of basic findings, problems and solutions. Dosimetry tests were carried out utilizing optically stimulated luminescence (OSL) dosimeters embedded midline into mouse carcasses and thermal luminescence dosimeters (TLD) embedded midline into acrylic phantoms. The effort demonstrated that the majority (4/7) of the laboratories were able to deliver sufficiently accurate exposures, having maximum dosing errors of ≤5%. Comparable rates of 'dosimetric compliance' were noted between OSL- and TLD-based tests. Data analysis showed a highly linear relationship between 'measured' and 'target' doses, with errors falling largely between 0 and 20%. Outliers were most notable for OSL-based tests, while multiple tests by 'non-compliant' laboratories using orthovoltage X-rays contributed heavily to the wide variation in dosing errors. For the dosimetrically non-compliant laboratories, the relatively high rates of dosing errors were problematic, potentially compromising the quality of ongoing radiobiological research. This dosimetry effort proved to be instructive in establishing rigorous reviews of basic dosimetry protocols ensuring that dosing errors were minimized.
Prevalence of refractive errors in the European adult population: the Gutenberg Health Study (GHS).
Wolfram, Christian; Höhn, René; Kottler, Ulrike; Wild, Philipp; Blettner, Maria; Bühren, Jens; Pfeiffer, Norbert; Mirshahi, Alireza
2014-07-01
To study the distribution of refractive errors among adults of European descent. Population-based eye study in Germany with 15,010 participants aged 35-74 years. The study participants underwent a detailed ophthalmic examination according to a standardised protocol. Refractive error was determined by an automatic refraction device (Humphrey HARK 599) without cycloplegia. Definitions for the analysis were myopia <-0.5 dioptres (D), hyperopia >+0.5 D, astigmatism >0.5 cylinder D and anisometropia >1.0 D difference in the spherical equivalent between the eyes. Exclusion criterion was previous cataract or refractive surgery. 13,959 subjects were eligible. Refractive errors ranged from -21.5 to +13.88 D. Myopia was present in 35.1% of this study sample, hyperopia in 31.8%, astigmatism in 32.3% and anisometropia in 13.5%. The prevalence of myopia decreased, while the prevalence of hyperopia, astigmatism and anisometropia increased with age. 3.5% of the study sample had no refractive correction for their ametropia. Refractive errors affect the majority of the population. The Gutenberg Health Study sample contains more myopes than other study cohorts in adult populations. Our findings do not support the hypothesis of a generally lower prevalence of myopia among adults in Europe as compared with East Asia.
Crowdsourcing for error detection in cortical surface delineations.
Ganz, Melanie; Kondermann, Daniel; Andrulis, Jonas; Knudsen, Gitte Moos; Maier-Hein, Lena
2017-01-01
With the recent trend toward big data analysis, neuroimaging datasets have grown substantially in the past years. While larger datasets potentially offer important insights for medical research, one major bottleneck is the requirement for resources of medical experts needed to validate automatic processing results. To address this issue, the goal of this paper was to assess whether anonymous nonexperts from an online community can perform quality control of MR-based cortical surface delineations derived by an automatic algorithm. So-called knowledge workers from an online crowdsourcing platform were asked to annotate errors in automatic cortical surface delineations on 100 central, coronal slices of MR images. On average, annotations for 100 images were obtained in less than an hour. When using expert annotations as reference, the crowd on average achieves a sensitivity of 82% and a precision of 42%. Merging multiple annotations per image significantly improves the sensitivity of the crowd (up to 95%), but leads to a decrease in precision (as low as 22%). Our experiments show that the detection of errors in automatic cortical surface delineations generated by anonymous untrained workers is feasible. Future work will focus on increasing the sensitivity of our method further, such that the error detection tasks can be handled exclusively by the crowd and expert resources can be focused on error correction.
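The sensitivity/precision trade-off reported above is easy to reproduce on toy data: merging workers' annotations by set union can only add detections, so sensitivity rises while precision tends to fall. The sketch below uses invented annotations, not the study's.

```python
# Toy scoring of crowd error annotations against an expert reference.
def score(predicted: set, reference: set):
    tp = len(predicted & reference)
    sensitivity = tp / len(reference) if reference else 1.0
    precision = tp / len(predicted) if predicted else 1.0
    return sensitivity, precision

expert = {"slice07", "slice23", "slice61"}                 # expert-marked errors
workers = [{"slice07"},
           {"slice23", "slice40"},
           {"slice07", "slice61", "slice99"}]

for i, w in enumerate(workers, 1):
    sens, prec = score(w, expert)
    print(f"worker {i}: sensitivity={sens:.2f} precision={prec:.2f}")

merged = set().union(*workers)                             # union of annotations
print("merged :", score(merged, expert))                   # sens up, precision down
```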
Medical Errors and Barriers to Reporting in Ten Hospitals in Southern Iran
Khammarnia, Mohammad; Ravangard, Ramin; Barfar, Eshagh; Setoodehzadeh, Fatemeh
2015-01-01
Background: International research shows that medical errors (MEs) are a major threat to patient safety. The present study aimed to describe MEs and the barriers to reporting them in Shiraz public hospitals, Iran. Methods: A cross-sectional, retrospective study was conducted in 10 Shiraz public hospitals in the south of Iran in 2013. Data were gathered using the standardised checklist of Shiraz University of Medical Sciences (referred to the Clinical Governance Department, with its recorded documentation) and the Uribe questionnaire. Results: A total of 4379 MEs were recorded in the 10 hospitals. The highest frequency (27.1%) was related to systematic errors. Most of the errors had occurred in the largest hospital (54.9%), in internal wards (36.3%), and on morning shifts (55.0%). The results revealed a significant association between MEs and wards and hospitals (p < 0.001). Moreover, individual and organisational factors were barriers to reporting MEs in the studied hospitals, and a significant correlation was observed between the ME reporting barriers and the participants' job experience (p < 0.001). Conclusion: Medical errors were highly frequent in the studied hospitals, especially in the larger hospitals, on morning shifts and in nursing practice. Individual and organisational factors were the main barriers to reporting MEs. PMID:28729811
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.
2015-04-01
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
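As a hedged illustration of why temporally correlated random error matters in budgets of this kind, the sketch below computes the 2σ uncertainty of a 10-year emission sum under an assumed AR(1) error structure; the σ and ρ values are invented, not taken from the study.

```python
# Effect of temporal correlation on the uncertainty of a decadal sum of
# annual emission errors. Parameter values are illustrative assumptions.
import numpy as np

def var_of_sum(sigma, n, rho):
    """Variance of the sum of n AR(1)-correlated errors (lag-1 correlation
    rho, common standard deviation sigma): sum over the full covariance."""
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return (sigma ** 2 * rho ** lags).sum()

sigma, n = 0.5, 10                      # Pg C/yr error SD, 10-year window
for rho in (0.0, 0.5, 0.95):
    two_sigma = 2.0 * np.sqrt(var_of_sum(sigma, n, rho))
    print(f"rho={rho:4.2f}: 2-sigma on decadal sum = {two_sigma:5.2f} Pg C")
```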
Siewert, Bettina; Brook, Olga R; Hochman, Mary; Eisenberg, Ronald L
2016-03-01
The purpose of this study is to analyze the impact of communication errors on patient care, customer satisfaction, and work-flow efficiency and to identify opportunities for quality improvement. We performed a search of our quality assurance database for communication errors submitted from August 1, 2004, through December 31, 2014. Cases were analyzed regarding the step in the imaging process at which the error occurred (i.e., ordering, scheduling, performance of examination, study interpretation, or result communication). The impact on patient care was graded on a 5-point scale from none (0) to catastrophic (4). The severity of impact between errors in result communication and those that occurred at all other steps was compared. Error evaluation was performed independently by two board-certified radiologists. Statistical analysis was performed using the chi-square test and kappa statistics. Three hundred eighty of 422 cases were included in the study. One hundred ninety-nine of the 380 communication errors (52.4%) occurred at steps other than result communication, including ordering (13.9%; n = 53), scheduling (4.7%; n = 18), performance of examination (30.0%; n = 114), and study interpretation (3.7%; n = 14). Result communication was the single most common step, accounting for 47.6% (181/380) of errors. There was no statistically significant difference in impact severity between errors that occurred during result communication and those that occurred at other times (p = 0.29). In 37.9% of cases (144/380), there was an impact on patient care, including 21 minor impacts (5.5%; result communication, n = 13; all other steps, n = 8), 34 moderate impacts (8.9%; result communication, n = 12; all other steps, n = 22), and 89 major impacts (23.4%; result communication, n = 45; all other steps, n = 44). In 62.1% (236/380) of cases, no impact was noted, but 52.6% (200/380) of cases had the potential for an impact. Among 380 communication errors in a radiology department, 37.9% had a direct impact on patient care, with an additional 52.6% having a potential impact. Most communication errors (52.4%) occurred at steps other than result communication, with similar severity of impact.
Bubalo, Joseph; Warden, Bruce A; Wiegel, Joshua J; Nishida, Tess; Handel, Evelyn; Svoboda, Leanne M; Nguyen, Lam; Edillo, P Neil
2014-12-01
Medical errors, in particular medication errors, continue to be a troublesome factor in the delivery of safe and effective patient care. Antineoplastic agents represent a group of medications highly susceptible to medication errors due to their complex regimens and narrow therapeutic indices. As the majority of these medication errors are frequently associated with breakdowns in poorly defined systems, developing technologies and evolving workflows seem to be a logical approach to provide added safeguards against medication errors. This article will review both the pros and cons of today's technologies and their ability to simplify the medication use process, reduce medication errors, improve documentation, improve healthcare costs and increase provider efficiency as relates to the use of antineoplastic therapy throughout the medication use process. Several technologies, mainly computerized provider order entry (CPOE), barcode medication administration (BCMA), smart pumps, electronic medication administration record (eMAR), and telepharmacy, have been well described and proven to reduce medication errors, improve adherence to quality metrics, and/or improve healthcare costs in a broad scope of patients. The utilization of these technologies during antineoplastic therapy is weak at best and lacking for most. Specific to the antineoplastic medication use system, the only technology with data to adequately support a claim of reduced medication errors is CPOE. In addition to the benefits these technologies can provide, it is also important to recognize their potential to induce new types of errors and inefficiencies which can negatively impact patient care. The utilization of technology reduces but does not eliminate the potential for error. The evidence base to support technology in preventing medication errors is limited in general but even more deficient in the realm of antineoplastic therapy. Though CPOE has the best evidence to support its use in the antineoplastic population, benefit from many other technologies may have to be inferred based on data from other patient populations. As health systems begin to widely adopt and implement new technologies it is important to critically assess their effectiveness in improving patient safety. © The Author(s) 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.
Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia
2017-06-01
Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates in the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error on the use of CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the total error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the total error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies showed that (1) the concentration and the analyte type had minimal effect on the OTV, and (2) the major factor influencing the OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to the quantitative analysis of methane gas spectra and methane/toluene gas mixture spectra measured using FT-IR spectrometry and analyzed with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In the methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In the methane/toluene mixture analysis, a modification of SWLS is presented to tackle the bias error from other components. SWLS without modification gives the lowest SEP in all cases, but not the lowest bias and RSS. The modification of SWLS reduced the bias and showed a lower RSS than CLS, especially for small components.
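The toy example below illustrates the selection logic on a synthetic three-wavenumber, two-component system: wavenumbers whose absorbance exceeds the threshold get inverse-noise (WLS) weights, the rest unit (CLS) weights. The matrices, noise levels, and threshold are invented, and this is one plausible reading of the scheme rather than the authors' code.

```python
# Synthetic CLS / WLS / SWLS comparison for a Beer-Lambert system a = K c.
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[0.02, 1.5],        # pure-component absorptivities
              [0.80, 0.1],        # (3 wavenumbers x 2 components)
              [0.40, 0.6]])
c_true = np.array([0.3, 0.7])
noise_sd = np.array([0.060, 0.002, 0.010])           # noise grows with absorbance
a = K @ c_true + rng.normal(0.0, noise_sd) + 0.002   # + small baseline bias

def fit(K, a, w):
    """Least squares with row weights w (w = 1/sd -> WLS, w = 1 -> CLS)."""
    return np.linalg.lstsq(K * w[:, None], a * w, rcond=None)[0]

high = a > 0.5                                       # absorbance threshold (OTV analogue)
print("CLS :", fit(K, a, np.ones_like(a)))
print("WLS :", fit(K, a, 1.0 / noise_sd))
print("SWLS:", fit(K, a, np.where(high, 1.0 / noise_sd, 1.0)), "true:", c_true)
```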
Chromosomal instability drives metastasis through a cytosolic DNA response.
Bakhoum, Samuel F; Ngo, Bryan; Laughney, Ashley M; Cavallo, Julie-Ann; Murphy, Charles J; Ly, Peter; Shah, Pragya; Sriram, Roshan K; Watkins, Thomas B K; Taunk, Neil K; Duran, Mercedes; Pauli, Chantal; Shaw, Christine; Chadalavada, Kalyani; Rajasekhar, Vinagolu K; Genovese, Giulio; Venkatesan, Subramanian; Birkbak, Nicolai J; McGranahan, Nicholas; Lundquist, Mark; LaPlant, Quincey; Healey, John H; Elemento, Olivier; Chung, Christine H; Lee, Nancy Y; Imielenski, Marcin; Nanjangud, Gouri; Pe'er, Dana; Cleveland, Don W; Powell, Simon N; Lammerding, Jan; Swanton, Charles; Cantley, Lewis C
2018-01-25
Chromosomal instability is a hallmark of cancer that results from ongoing errors in chromosome segregation during mitosis. Although chromosomal instability is a major driver of tumour evolution, its role in metastasis has not been established. Here we show that chromosomal instability promotes metastasis by sustaining a tumour cell-autonomous response to cytosolic DNA. Errors in chromosome segregation create a preponderance of micronuclei whose rupture spills genomic DNA into the cytosol. This leads to the activation of the cGAS-STING (cyclic GMP-AMP synthase-stimulator of interferon genes) cytosolic DNA-sensing pathway and downstream noncanonical NF-κB signalling. Genetic suppression of chromosomal instability markedly delays metastasis even in highly aneuploid tumour models, whereas continuous chromosome segregation errors promote cellular invasion and metastasis in a STING-dependent manner. By subverting lethal epithelial responses to cytosolic DNA, chromosomally unstable tumour cells co-opt chronic activation of innate immune pathways to spread to distant organs.
Computing in the presence of soft bit errors. [caused by single event upset on spacecraft
NASA Technical Reports Server (NTRS)
Rasmussen, R. D.
1984-01-01
It is shown that single-event upsets (SEUs) due to cosmic rays are a significant source of single-bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of Linear Energy Transfer (LET), is discussed with reference to the results of a study of the environmental effects on the computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications which may reduce the cosmic ray exposure of on-board hardware. These modifications include: shielding components operating in LEO; removing low-power Schottky parts; and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.
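One of the software techniques listed above, encoding state variables to detect single-bit changes, can be shown in a few lines; the sketch below (plain Python, obviously not flight code) stores each value together with its bitwise complement, so an upset in either word breaks the invariant on the next read.

```python
# Duplicate-with-complement encoding of a state variable: any single-bit
# flip in either stored word is detected at read time. Illustrative only.
class ProtectedState:
    MASK = 0xFFFFFFFF                      # 32-bit state word

    def __init__(self, value: int):
        self.value = value & self.MASK
        self.check = ~value & self.MASK    # redundant complement copy

    def read(self) -> int:
        if (self.value ^ self.check) != self.MASK:
            raise RuntimeError("bit upset detected in state variable")
        return self.value

s = ProtectedState(0b1011)
s.value ^= 1 << 7                          # simulate a cosmic-ray bit flip
try:
    s.read()
except RuntimeError as err:
    print(err)                             # -> bit upset detected ...
```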
Quantifying Ab Initio Equation of State Errors for Hydrogen-Helium Mixtures
NASA Astrophysics Data System (ADS)
Clay, Raymond; Morales, Miguel
2017-06-01
In order to produce predictive models of Jovian planets, an accurate equation of state for hydrogen-helium mixtures is needed over pressure and temperature ranges spanning multiple orders of magnitude. While extensive theoretical work has been done in this area, previous controversies regarding the equation of state of pure hydrogen have demonstrated exceptional sensitivity to approximations commonly employed in ab initio calculations. To this end, we present the results of our quantum Monte Carlo based benchmarking studies for several major classes of density functionals. Additionally, we expand upon our published results by considering how ionic finite-size effects and density functional errors translate into errors in the equation of state. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
A GPU accelerated and error-controlled solver for the unbounded Poisson equation in three dimensions
NASA Astrophysics Data System (ADS)
Exl, Lukas
2017-12-01
An efficient solver for the three dimensional free-space Poisson equation is presented. The underlying numerical method is based on finite Fourier series approximation. While the error of all involved approximations can be fully controlled, the overall computation error is driven by the convergence of the finite Fourier series of the density. For smooth and fast-decaying densities the proposed method will be spectrally accurate. The method scales with O(N log N) operations, where N is the total number of discretization points in the Cartesian grid. The majority of the computational costs come from fast Fourier transforms (FFT), which makes it ideal for GPU computation. Several numerical computations on CPU and GPU validate the method and show efficiency and convergence behavior. Tests are performed using the Vienna Scientific Cluster 3 (VSC3). A free MATLAB implementation for CPU and GPU is provided to the interested community.
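For orientation, a much simpler relative of this solver is the classic zero-padding (domain-doubling) FFT convolution with the free-space Green's function 1/(4πr); the NumPy sketch below shows that basic structure without the finite-Fourier-series error control or GPU path of the published method. The crude r = 0 regularization is an assumption made for illustration.

```python
# Free-space Poisson solve lap(phi) = -rho via zero-padded FFT convolution
# with G(r) = 1/(4*pi*r). Simplified CPU sketch, not the published solver.
import numpy as np

def poisson_free_space(rho, h):
    n = rho.shape[0]
    m = 2 * n                               # doubled domain removes wrap-around
    idx = np.arange(m, dtype=float)
    idx[n:] -= m                            # signed grid offsets
    X, Y, Z = np.meshgrid(idx, idx, idx, indexing="ij")
    dist = h * np.sqrt(X**2 + Y**2 + Z**2)
    dist[0, 0, 0] = h                       # crude regularization at r = 0
    G = 1.0 / (4.0 * np.pi * dist)
    rho_pad = np.zeros((m, m, m))
    rho_pad[:n, :n, :n] = rho
    phi = np.fft.ifftn(np.fft.fftn(G) * np.fft.fftn(rho_pad)).real * h**3
    return phi[:n, :n, :n]

# Unit point charge at the grid centre: phi ~ 1/(4*pi*r) away from it.
n, h = 32, 1.0 / 32
rho = np.zeros((n, n, n)); rho[n // 2, n // 2, n // 2] = 1.0 / h**3
phi = poisson_free_space(rho, h)
print(phi[n // 2 + 8, n // 2, n // 2], 1.0 / (4.0 * np.pi * 8 * h))
```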
Cause-and-effect mapping of critical events.
Graves, Krisanne; Simmons, Debora; Galley, Mark D
2010-06-01
Health care errors are routinely reported in the scientific and public press and have become a major concern for most Americans. In learning to identify and analyze errors, health care can develop some of the skills of a learning organization, including the concept of systems thinking. Experts in quality improvement have been working in other high-risk industries since the 1920s, making structured organizational changes through various frameworks for quality methods, including continuous quality improvement and total quality management. When using these tools, it is important to understand systems thinking and the concept of processes within an organization. Within these frameworks of improvement, several tools can be used in the analysis of errors. This article introduces a robust tool with a broad analytical view consistent with systems thinking, called CauseMapping (ThinkReliability, Houston, TX, USA), which can be used to systematically analyze the process and the problem at the same time. Copyright 2010 Elsevier Inc. All rights reserved.
Use of streamflow data to estimate base flow/ground-water recharge for Wisconsin
Gebert, W.A.; Radloff, M.J.; Considine, E.J.; Kennedy, J.L.
2007-01-01
The average annual base flow/recharge was determined for streamflow-gaging stations throughout Wisconsin by base-flow separation. A map of the State was prepared that shows the average annual base flow for the period 1970-99 for watersheds at 118 gaging stations. Trend analysis was performed on 22 of the 118 streamflow-gaging stations that had long-term records, unregulated flow, and provided aerial coverage of the State. The analysis found that a statistically significant increasing trend was occurring for watersheds where the primary land use was agriculture. Most gaging stations where the land cover was forest had no significant trend. A method to estimate the average annual base flow at ungaged sites was developed by multiple-regression analysis using basin characteristics. The equation with the lowest standard error of estimate, 9.5%, has drainage area, soil infiltration and base flow factor as independent variables. To determine the average annual base flow for smaller watersheds, estimates were made at low-flow partial-record stations in 3 of the 12 major river basins in Wisconsin. Regression equations were developed for each of the three major river basins using basin characteristics. Drainage area, soil infiltration, basin storage and base-flow factor were the independent variables in the regression equations with the lowest standard error of estimate. The standard error of estimate ranged from 17% to 52% for the three river basins. © 2007 American Water Resources Association.
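Schematically, regression equations of this kind can be fit by ordinary least squares on log-transformed basin characteristics, with the residual standard error converted back to a percent standard error of estimate. The sketch below uses invented data, coefficients, and units, not the study's values.

```python
# Log-log multiple regression of base flow on basin characteristics,
# reporting a percent standard error of estimate. Invented data.
import numpy as np

rng = np.random.default_rng(1)
n = 118
area = rng.uniform(10.0, 2000.0, n)            # drainage area (illustrative)
infil = rng.uniform(0.1, 2.0, n)               # soil infiltration
bff = rng.uniform(0.2, 0.9, n)                 # base-flow factor
log_q = (0.9 * np.log(area) + 0.5 * np.log(infil)
         + 0.8 * np.log(bff) + rng.normal(0.0, 0.09, n))

X = np.column_stack([np.ones(n), np.log(area), np.log(infil), np.log(bff)])
beta, rss, *_ = np.linalg.lstsq(X, log_q, rcond=None)
see_log = np.sqrt(rss[0] / (n - X.shape[1]))   # SE of estimate, log units
print(f"SE of estimate ~ {100.0 * (np.exp(see_log) - 1.0):.1f}%")
```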
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.
An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT based first principles methods have the potential to provide the accuracy and computational efficiency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional, and a quantification of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures at thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC based force estimators and use them to gain insights into how well the local liquid structure is captured by different density functionals. We find that TPSS, BLYP and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative differences exhibited by the major classes of functionals, and estimate the magnitudes of these effects when possible.
Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.; ...
2016-01-19
An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT based first principles methods have the potential to provide the accuracy and computational efficiency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional, and a quantification of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures at thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC based force estimators and use them to gain insights into how well the local liquid structure is captured by different density functionals. We find that TPSS, BLYP and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative differences exhibited by the major classes of functionals, and estimate the magnitudes of these effects when possible.
Wind tunnel seeding particles for laser velocimeter
NASA Technical Reports Server (NTRS)
Ghorieshi, Anthony
1992-01-01
The design of an optimal airfoil has been a major challenge for the aerospace industry. The main objective is to reduce the drag force while increasing the lift force in various environmental air conditions. Experimental verification of theoretical and computational results is a crucial part of the analysis because of errors buried in the solutions due to the assumptions made in theoretical work. Experimental studies are an integral part of a good design procedure; however, empirical data are not always error free, due to environmental obstacles, poor execution, etc. The reduction of errors in empirical data is a major challenge in wind tunnel testing. One recent advance of particular interest is the use of a non-intrusive measurement technique known as laser velocimetry (LV), which allows quantitative flow data to be obtained without introducing flow-disturbing probes. The laser velocimeter technique is based on measurement of the light scattered by the particles present in the flow, not on the velocity of the flow itself. Therefore, for accurate flow velocity measurement with laser velocimeters, two criteria are investigated: (1) how well the particles track the local flow field, and (2) the light-scattering efficiency required to obtain signals with the LV. In order to demonstrate the concept of predicting the flow velocity from velocity measurements of particle seeding, the theoretical velocity of the gas flow is computed and compared with the experimentally obtained velocity of the particle seeding.
Satellite-based emission constraint for nitrogen oxides: Capability and uncertainty
NASA Astrophysics Data System (ADS)
Lin, J.; McElroy, M. B.; Boersma, F.; Nielsen, C.; Zhao, Y.; Lei, Y.; Liu, Y.; Zhang, Q.; Liu, Z.; Liu, H.; Mao, J.; Zhuang, G.; Roozendael, M.; Martin, R.; Wang, P.; Spurr, R. J.; Sneep, M.; Stammes, P.; Clemer, K.; Irie, H.
2013-12-01
Vertical column densities (VCDs) of tropospheric nitrogen dioxide (NO2) retrieved from satellite remote sensing have been employed widely to constrain emissions of nitrogen oxides (NOx). A major strength of satellite-based emission constraint is analysis of emission trends and variability, while a crucial limitation is errors both in satellite NO2 data and in model simulations relating NOx emissions to NO2 columns. Through a series of studies, we have explored these aspects over China. We separate anthropogenic from natural sources of NOx by exploiting their different seasonality. We infer trends of NOx emissions in recent years and effects of a variety of socioeconomic events at different spatiotemporal scales including the general economic growth, global financial crisis, Chinese New Year, and Beijing Olympics. We further investigate the impact of growing NOx emissions on particulate matter (PM) pollution in China. As part of recent developments, we identify and correct errors in both satellite NO2 retrieval and model simulation that ultimately affect NOx emission constraint. We improve the treatments of aerosol optical effects, clouds and surface reflectance in the NO2 retrieval process, using as reference ground-based MAX-DOAS measurements to evaluate the improved retrieval results. We analyze the sensitivity of simulated NO2 to errors in the model representation of major meteorological and chemical processes with a subsequent correction of model bias. Future studies will implement these improvements to re-constrain NOx emissions.
Bolton-Maggs, P H B
2016-12-01
The Annual SHOT Report for incidents reported in 2015 was published on 7 July at the SHOT symposium. Once again, the majority of reports (77·7%) were associated with mistakes ('human factors'). Pressures and stress in the hospital environment contributed to several error reports. There were 26 deaths where transfusion played a part, one due to haemolysis from anti-Wr(a) (units issued electronically). The incidence of haemolysis due to this antibody has increased in recent years. Transfusion-associated circulatory overload is the most common contributor to death and major morbidity. Reports of delays to transfusion have increased, some caused by the failure of correct patient identification. There were seven ABO-incompatible red cell transfusions (one death) with an additional six to allogeneic stem cell transplant recipients. Near-miss reporting and analysis is useful and demonstrated nearly 300 instances of wrong blood in tube, which could have resulted in ABO-incompatible transfusion had the error not been detected. Errors with anti-D immunoglobulin continue, and preliminary data from the new survey of new anti-D found in pregnancy has shown that sensitisation occurs in some women even with apparently 'ideal' care. For the first time, the SHOT report now incorporates a chapter on donor events. © 2016 British Blood Transfusion Society.
In vitro susceptibility of Pseudomonas species to carbenicillin and trimethoprim-sulfamethoxazole.
Hill, S F; Haldane, D J; Ngui-Yen, J H; Smith, J A
1985-01-01
We compared susceptibility test results for 47 Pseudomonas aeruginosa isolates and 40 isolates of other Pseudomonas species against carbenicillin and trimethoprim-sulfamethoxazole obtained by the MS-2 and Sceptor systems and by agar dilution. The major and very major errors encountered in these tests with the MS-2 and Sceptor systems raise doubts about the accuracy of these methods for testing P. aeruginosa and confirm that they should not be used for testing the susceptibility of Pseudomonas species to the two drugs tested. PMID:3930567
ENDF/B-IV fission-product files: summary of major nuclide data
DOE Office of Scientific and Technical Information (OSTI.GOV)
England, T.R.; Schenter, R.E.
1975-09-01
The major fission-product parameters [sigma/sub th/, RI, tau/sub 1/2/, E- bar/sub $beta$/, E-bar/sub $gamma$/, E-bar/sub $alpha$/, decay and (n,$gamma$) branching, Q, and AWR] abstracted from ENDF/B-IV files for 824 nuclides are summarized. These data are most often requested by users concerned with reactor design, reactor safety, dose, and other sundry studies. The few known file errors are corrected to date. Tabular data are listed by increasing mass number. (auth)
Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna
2013-05-01
Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. Existing literature offers limited insight about the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery to identify gaps in medication-related information exchange which lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of data primarily focused on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors namely: (1) design of medication charts which complicates order processing and record keeping (2) lack of coordination mechanisms between participants which results in misalignment of local practices (3) reliance on restricted communication bandwidth channels mainly telephone and fax which complicates the information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange. Understanding the dynamics of the cognitive process can inform the design of interventions to manage errors and improve residents' safety. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Evidence of Selection against Complex Mitotic-Origin Aneuploidy during Preimplantation Development
McCoy, Rajiv C.; Demko, Zachary P.; Ryan, Allison; Banjevic, Milena; Hill, Matthew; Sigurjonsson, Styrmir; Rabinowitz, Matthew; Petrov, Dmitri A.
2015-01-01
Whole-chromosome imbalances affect over half of early human embryos and are the leading cause of pregnancy loss. While these errors frequently arise in oocyte meiosis, many such whole-chromosome abnormalities affecting cleavage-stage embryos are the result of chromosome missegregation occurring during the initial mitotic cell divisions. The first wave of zygotic genome activation at the 4–8 cell stage results in the arrest of a large proportion of embryos, the vast majority of which contain whole-chromosome abnormalities. Thus, the full spectrum of meiotic and mitotic errors can only be detected by sampling after the initial cell divisions, but prior to this selective filter. Here, we apply 24-chromosome preimplantation genetic screening (PGS) to 28,052 single-cell day-3 blastomere biopsies and 18,387 multi-cell day-5 trophectoderm biopsies from 6,366 in vitro fertilization (IVF) cycles. We precisely characterize the rates and patterns of whole-chromosome abnormalities at each developmental stage and distinguish errors of meiotic and mitotic origin without embryo disaggregation, based on informative chromosomal signatures. We show that mitotic errors frequently involve multiple chromosome losses that are not biased toward maternal or paternal homologs. This outcome is characteristic of spindle abnormalities and chaotic cell division detected in previous studies. In contrast to meiotic errors, our data also show that mitotic errors are not significantly associated with maternal age. PGS patients referred due to previous IVF failure had elevated rates of mitotic error, while patients referred due to recurrent pregnancy loss had elevated rates of meiotic error, controlling for maternal age. These results support the conclusion that mitotic error is the predominant mechanism contributing to pregnancy losses occurring prior to blastocyst formation. This high-resolution view of the full spectrum of whole-chromosome abnormalities affecting early embryos provides insight into the cytogenetic mechanisms underlying their formation and the consequences for human fertility. PMID:26491874
Climbing fibers predict movement kinematics and performance errors.
Streng, Martha L; Popa, Laurentiu S; Ebner, Timothy J
2017-09-01
Requisite for understanding cerebellar function is a complete characterization of the signals provided by complex spike (CS) discharge of Purkinje cells, the output neurons of the cerebellar cortex. Numerous studies have provided insights into CS function, with the most predominant view being that they are evoked by error events. However, several reports suggest that CSs encode other aspects of movements and do not always respond to errors or unexpected perturbations. Here, we evaluated CS firing during a pseudo-random manual tracking task in the monkey (Macaca mulatta). This task provides extensive coverage of the work space and relative independence of movement parameters, delivering a robust data set to assess the signals that activate climbing fibers. Using reverse correlation, we determined feedforward and feedback CS firing probability maps with position, velocity, and acceleration, as well as position error, a measure of tracking performance. The direction and magnitude of the CS modulation were quantified using linear regression analysis. The major findings are that CSs significantly encode all three kinematic parameters and position error, with acceleration modulation particularly common. The modulation is not related to "events," either for position error or kinematics. Instead, CSs are spatially tuned and provide a linear representation of each parameter evaluated. The CS modulation is largely predictive. Similar analyses show that the simple spike firing is modulated by the same parameters as the CSs. Therefore, CSs carry a broader array of signals than previously described and argue for climbing fiber input having a prominent role in online motor control. NEW & NOTEWORTHY This article demonstrates that complex spike (CS) discharge of cerebellar Purkinje cells encodes multiple parameters of movement, including motor errors and kinematics. The CS firing is not driven by error or kinematic events; instead it provides a linear representation of each parameter. In contrast with the view that CSs carry feedback signals, the CSs are predominantly predictive of upcoming position errors and kinematics. Therefore, climbing fibers carry multiple and predictive signals for online motor control. Copyright © 2017 the American Physiological Society.
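The reverse-correlation step described above reduces to a short computation: a CS firing-probability map over a parameter is the CS-triggered distribution of that parameter at a fixed lead or lag, normalized by the overall occupancy of the workspace. A minimal Python sketch on synthetic data; the smoothing window, baseline firing rate, modulation depth, and 100 ms lead are invented, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 0.005                                        # 200 Hz sampling
raw = rng.normal(0.0, 1.0, 60_000)
pos_err = np.convolve(raw, np.ones(200) / 200, "same") * 15  # slow error trace
cs = rng.random(pos_err.size) < (0.005 + 0.004 * np.tanh(pos_err))  # CS train

lag = int(0.1 / dt)                               # CS leads the error by 100 ms
bins = np.linspace(-3, 3, 13)
occupancy, _ = np.histogram(pos_err[lag:], bins)              # workspace occupancy
triggered, _ = np.histogram(pos_err[lag:][cs[:-lag]], bins)   # CS-triggered values
firing_map = triggered / np.maximum(occupancy, 1)  # P(CS | upcoming error bin)
print(np.round(firing_map, 4))
```

Scanning `lag` over negative and positive values would give the feedback and feedforward maps, respectively.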
How to minimize perceptual error and maximize expertise in medical imaging
NASA Astrophysics Data System (ADS)
Kundel, Harold L.
2007-03-01
Visual perception is such an intimate part of human experience that we assume that it is entirely accurate. Yet, perception accounts for about half of the errors made by radiologists using adequate imaging technology. The true incidence of errors that directly affect patient well being is not known, but it is probably at the lower end of the reported values of 3 to 25%. Errors in screening for lung and breast cancer are somewhat better characterized than errors in routine diagnosis. About 25% of cancers actually recorded on the images are missed, and cancer is falsely reported in about 5% of normal people. Radiologists must strive to decrease error not only because of the potential impact on patient care but also because substantial variation among observers undermines confidence in the reliability of imaging diagnosis. Observer variation also has a major impact on technology evaluation because the variation between observers is frequently greater than the difference in the technologies being evaluated. This has become particularly important in the evaluation of computer aided diagnosis (CAD). Understanding the basic principles that govern the perception of medical images can provide a rational basis for making recommendations for minimizing perceptual error. It is convenient to organize thinking about perceptual error into five steps: 1) the initial acquisition of the image by the eye-brain (contrast and detail perception); 2) the organization of the retinal image into logical components to produce a literal perception (bottom-up, global, holistic); 3) conversion of the literal perception into a preferred perception by resolving ambiguities in the literal perception (top-down, simulation, synthesis); 4) selective visual scanning to acquire details that update the preferred perception; and 5) application of decision criteria to the preferred perception. The five steps are illustrated with examples from radiology, with suggestions for minimizing error. The role of perceptual learning in the development of expertise is also considered.
Abdullah, Ayesha S; Jadoon, Muhammad Zahid; Akram, Mohammad; Awan, Zahid Hussain; Azam, Mohammad; Safdar, Mohammad; Nigar, Mohammad
2015-01-01
Uncorrected refractive errors are a leading cause of visual disability globally. This population-based study was done to estimate the prevalence of uncorrected refractive errors in adults aged 30 years and above of village Pawakah, Khyber Pakhtunkhwa (KPK), Pakistan. It was a cross-sectional survey in which 1000 individuals were included randomly. All the individuals were screened for uncorrected refractive errors, and those whose visual acuity (VA) was found to be less than 6/6 were refracted. In those in whom refraction was found to be unsatisfactory (i.e., a best corrected visual acuity of <6/6), further examination was done to establish the cause of the subnormal vision. A total of 917 subjects participated in the survey (response rate 92%). The prevalence of uncorrected refractive errors was found to be 23.97% among males and 20% among females. The prevalence of visually disabling refractive errors was 6.89% in males and 5.71% in females. The prevalence was seen to increase with age, with maximum prevalence in the 51-60 years age group. Hypermetropia (10.14%) was found to be the commonest refractive error, followed by myopia (6.00%) and astigmatism (5.6%). The prevalence of presbyopia was 57.5% (60.45% in males and 55.23% in females). Poor affordability was the commonest barrier to the use of spectacles, followed by unawareness. Cataract was the commonest reason for impaired vision after refractive correction. The prevalence of blindness was 1.96% (1.53% in males and 2.28% in females) in this community, with cataract as the commonest cause. Despite being the most easily avoidable cause of subnormal vision, uncorrected refractive errors still account for a major proportion of the burden of decreased vision in this area. Effective measures for the screening and affordable correction of uncorrected refractive errors need to be incorporated into the health care delivery system.
Using voluntary reports from physicians to learn from diagnostic errors in emergency medicine.
Okafor, Nnaemeka; Payne, Velma L; Chathampally, Yashwant; Miller, Sara; Doshi, Pratik; Singh, Hardeep
2016-04-01
Diagnostic errors are common in the emergency department (ED), but few studies have comprehensively evaluated their types and origins. We analysed incidents reported by ED physicians to determine disease conditions, contributory factors and patient harm associated with ED-related diagnostic errors. Between 1 March 2009 and 31 December 2013, ED physicians reported 509 incidents using a department-specific voluntary incident-reporting system that we implemented at two large academic hospital-affiliated EDs. For this study, we analysed 209 incidents related to diagnosis. A quality assurance team led by an ED physician champion reviewed each incident and interviewed physicians when necessary to confirm the presence/absence of diagnostic error and to determine the contributory factors. We generated descriptive statistics quantifying disease conditions involved, contributory factors and patient harm from errors. Among the 209 incidents, we identified 214 diagnostic errors associated with 65 unique diseases/conditions, including sepsis (9.6%), acute coronary syndrome (9.1%), fractures (8.6%) and vascular injuries (8.6%). Contributory factors included cognitive (n=317), system-related (n=192) and non-remediable (n=106). Cognitive factors included faulty information verification (41.3%) and faulty information processing (30.6%), whereas system factors included high workload (34.4%) and inefficient ED processes (40.1%). Non-remediable factors included atypical presentation (31.3%) and the patients' inability to provide a history (31.3%). Most errors (75%) involved multiple factors. Major harm was associated with 34/209 (16.3%) of reported incidents. Most diagnostic errors in the ED appeared to relate to common disease conditions. While sustaining diagnostic error reporting programmes might be challenging, our analysis reveals the potential value of such systems in identifying targets for improving patient safety in the ED. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Evaluation of dynamic electromagnetic tracking deviation
NASA Astrophysics Data System (ADS)
Hummel, Johann; Figl, Michael; Bax, Michael; Shahidi, Ramin; Bergmann, Helmar; Birkfellner, Wolfgang
2009-02-01
Electromagnetic tracking systems (EMTSs) are widely used in clinical applications. Many reports have evaluated their static behavior, and errors caused by metallic objects have been examined. Although some publications concern the dynamic behavior of EMTSs, the measurement protocols are either difficult to reproduce with respect to the movement path or can be accomplished only at high technical effort. Because dynamic behavior is of major interest with respect to clinical applications, we established a simple but effective measurement method that is easy to repeat at other laboratories. We built a simple pendulum on which the sensor of our EMTS (Aurora, NDI, CA) could be mounted. The pendulum was mounted on a special bearing to guarantee that the pendulum path was planar; this assumption was tested before starting the measurements. All relevant parameters defining the pendulum motion, such as rotation center and length, were determined by static measurement with satisfactory accuracy. Position and orientation data were then gathered over a period of 8 seconds, and timestamps were recorded. Data analysis provided a positioning error and an overall error combining both position and orientation, all calculated by means of the well-known equations describing pendulum motion. Additionally, latency - the elapsed time from input motion until the immediate consequences of that input are available - was calculated using the same pendulum equations for different velocities. We repeated the measurements with different metal objects (rods made of stainless steel types 303 and 416) placed between the field generator and the pendulum. We found a root mean square error (eRMS) of 1.02 mm with respect to the distance of the sensor positions from the fit plane (maximum error emax = 2.31 mm, minimum error emin = -2.36 mm). The eRMS for the positional error amounted to 1.32 mm, while the overall error was 3.24 mm. The latency at a pendulum angle of 0° (vertical) was 7.8 ms.
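The planarity check and the plane-distance errors reported above can be reproduced with a least-squares plane fit. A minimal Python sketch, with synthetic pendulum-like data standing in for the Aurora recordings (the noise level and sample count are illustrative):

```python
import numpy as np

def plane_fit_errors(points):
    """points: (N, 3) array of tracked positions in mm."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    d = (points - centroid) @ normal          # signed distances to the plane
    return np.sqrt(np.mean(d**2)), d.max(), d.min()

# Synthetic pendulum-like arc with ~1 mm out-of-plane noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 8.0, 800)                # 8 s of samples
pts = np.column_stack([100*np.sin(t), 100*np.cos(t), rng.normal(0, 1, t.size)])
e_rms, e_max, e_min = plane_fit_errors(pts)
print(f"eRMS={e_rms:.2f} mm, emax={e_max:.2f} mm, emin={e_min:.2f} mm")
```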
Global modeling of land water and energy balances. Part I: The land dynamics (LaD) model
Milly, P.C.D.; Shmakin, A.B.
2002-01-01
A simple model of large-scale land (continental) water and energy balances is presented. The model is an extension of an earlier scheme with a record of successful application in climate modeling. The most important changes from the original model include 1) introduction of non-water-stressed stomatal control of transpiration, in order to correct a tendency toward excessive evaporation; 2) conversion from globally constant parameters (with the exception of vegetation-dependent snow-free surface albedo) to more complete vegetation and soil dependence of all parameters, in order to provide more realistic representation of geographic variations in water and energy balances and to enable model-based investigations of land-cover change; 3) introduction of soil sensible heat storage and transport, in order to move toward realistic diurnal-cycle modeling; 4) a groundwater (saturated-zone) storage reservoir, in order to provide more realistic temporal variability of runoff; and 5) a rudimentary runoff-routing scheme for delivery of runoff to the ocean, in order to provide realistic freshwater forcing of the ocean general circulation model component of a global climate model. The new model is tested with forcing from the International Satellite Land Surface Climatology Project Initiative I global dataset and a recently produced observation-based water-balance dataset for major river basins of the world. Model performance is evaluated by comparing computed and observed runoff ratios from many major river basins of the world. Special attention is given to distinguishing between two components of the apparent runoff ratio error: the part due to intrinsic model error and the part due to errors in the assumed precipitation forcing. The pattern of discrepancies between modeled and observed runoff ratios is consistent with results from a companion study of precipitation estimation errors. The new model is tuned by adjustment of a globally constant scale factor for non-water-stressed stomatal resistance. After tuning, significant overestimation of runoff is found in environments where an overall arid climate includes a brief but intense wet season. It is shown that this error may be explained by the neglect of upward soil water diffusion from below the root zone during the dry season. With the exception of such basins, and in the absence of precipitation errors, it is estimated that annual runoff ratios simulated by the model would have a root-mean-square error of about 0.05. The new model matches observations better than its predecessor, which has a negative runoff bias and greater scatter.
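The evaluation metric above is simple to state in code: the runoff ratio is basin runoff divided by basin precipitation, and model skill is the root-mean-square error of modeled minus observed ratios across basins. A minimal sketch with invented basin values, not the LaD evaluation data:

```python
import numpy as np

precip   = np.array([900.0, 1500.0, 400.0, 2200.0])   # mm/yr per basin (invented)
runoff_m = np.array([270.0,  700.0, 120.0, 1350.0])   # modeled runoff, mm/yr
runoff_o = np.array([250.0,  650.0,  80.0, 1400.0])   # observed runoff, mm/yr

ratio_err = runoff_m / precip - runoff_o / precip     # runoff-ratio errors
rmse = np.sqrt(np.mean(ratio_err**2))
print(f"runoff-ratio RMSE = {rmse:.3f}")              # the paper estimates ~0.05
```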
Joyanes, Providencia; del Carmen Conejo, María; Martínez-Martínez, Luis; Perea, Evelio J.
2001-01-01
VITEK 2 is a new automatic system for the identification and susceptibility testing of the most clinically important bacteria. In the present study 198 clinical isolates, including Pseudomonas aeruginosa (n = 146), Acinetobacter baumannii (n = 25), and Stenotrophomonas maltophilia (n = 27) were evaluated. Reference susceptibility testing of cefepime, cefotaxime, ceftazidime, ciprofloxacin, gentamicin, imipenem, meropenem, piperacillin, tobramycin, levofloxacin (only for P. aeruginosa), co-trimoxazole (only for S. maltophilia), and ampicillin-sulbactam and tetracycline (only for A. baumannii) was performed by microdilution (NCCLS guidelines). The VITEK 2 system correctly identified 91.6, 100, and 76% of P. aeruginosa, S. maltophilia, and A. baumannii isolates, respectively, within 3 h. The respective percentages of essential agreement (to within 1 twofold dilution) for P. aeruginosa and A. baumannii were 89.0 and 88.0% (cefepime), 91.1 and 100% (cefotaxime), 95.2 and 96.0% (ceftazidime), 98.6 and 100% (ciprofloxacin), 88.4 and 100% (gentamicin), 87.0 and 92.0% (imipenem), 85.0 and 88.0% (meropenem), 84.2 and 96.0% (piperacillin), and 97.3 and 80% (tobramycin). The essential agreement for levofloxacin against P. aeruginosa was 86.3%. The percentages of essential agreement for ampicillin-sulbactam and tetracycline against A. baumannii were 88.0 and 100%, respectively. Very major errors for P. aeruginosa (resistant by the reference method, susceptible with the VITEK 2 system [resistant to susceptible]) were noted for cefepime (0.7%), cefotaxime (0.7%), gentamicin (0.7%), imipenem (1.4%), levofloxacin (2.7%), and piperacillin (2.7%) and, for one strain of A. baumannii, for imipenem. Major errors (susceptible to resistant) were noted only for P. aeruginosa and cefepime (2.0%), ceftazidime (0.7%), and piperacillin (3.4%). Minor errors ranged from 0.0% for piperacillin to 22.6% for cefotaxime against P. aeruginosa and from 0.0% for piperacillin and ciprofloxacin to 20.0% for cefepime against A. baumannii. The VITEK 2 system provided co-trimoxazole MICs only for S. maltophilia; no very major or major errors were obtained for co-trimoxazole against this species. It is concluded that the VITEK 2 system allows the rapid identification of S. maltophilia and most P. aeruginosa and A. baumannii isolates. The VITEK 2 system can perform reliable susceptibility testing of many of the antimicrobial agents used against P. aeruginosa and A. baumannii. It would be desirable if new versions of the VITEK 2 software were able to determine MICs and the corresponding clinical categories of agents active against S. maltophilia. PMID:11526158
NASA Astrophysics Data System (ADS)
Schlegel, N.-J.; Larour, E.; Seroussi, H.; Morlighem, M.; Box, J. E.
2013-06-01
The behavior of the Greenland Ice Sheet, which is considered a major contributor to sea level changes, is best understood on century and longer time scales. However, on decadal time scales, its response is less predictable due to the difficulty of modeling surface climate, as well as incomplete understanding of the dynamic processes responsible for ice flow. Therefore, it is imperative to understand how modeling advancements, such as increased spatial resolution or more comprehensive ice flow equations, might improve projections of ice sheet response to climatic trends. Here we examine how a finely resolved climate forcing influences a high-resolution ice stream model that considers longitudinal stresses. We simulate ice flow using a two-dimensional Shelfy-Stream Approximation implemented within the Ice Sheet System Model (ISSM) and use uncertainty quantification tools embedded within the model to calculate the sensitivity of ice flow within the Northeast Greenland Ice Stream to errors in surface mass balance (SMB) forcing. Our results suggest that the model tends to smooth ice velocities even when forced with extreme errors in SMB. Indeed, errors propagate linearly through the model, resulting in discharge uncertainty of 16% or 1.9 Gt/yr. We find that mass flux is most sensitive to local errors but is also affected by errors hundreds of kilometers away; thus, an accurate SMB map of the entire basin is critical for realistic simulation. Furthermore, sensitivity analyses indicate that SMB forcing needs to be provided at a resolution of at least 40 km.
A precision analogue integrator system for heavy current measurement in MFDC resistance spot welding
NASA Astrophysics Data System (ADS)
Xia, Yu-Jun; Zhang, Zhong-Dian; Xia, Zhen-Xin; Zhu, Shi-Liang; Zhang, Rui
2016-02-01
In order to control and monitor the quality of middle frequency direct current (MFDC) resistance spot welding (RSW), precision measurement of the welding current up to 100 kA is required, for which Rogowski coils are at present the only viable current transducers. A highly accurate analogue integrator is therefore the key to restoring the converted signals collected from the Rogowski coils. Previous studies emphasised that integration drift is a major factor influencing the performance of analogue integrators, but capacitive leakage error also has a significant impact on the result, especially in long-time pulse integration. In this article, new methods of measuring and compensating capacitive leakage error are proposed in order to fabricate a precision analogue integrator system for MFDC RSW. A voltage holding test is carried out to measure the integration error caused by capacitive leakage, and an original integrator with a feedback adder is designed to compensate capacitive leakage error in real time. The experimental results and statistical analysis show that the new analogue integrator system constrains both drift and capacitive leakage error, an effect that is robust across different voltage levels of the output signals. The total integration error is limited to within ±0.09 mV s⁻¹, or 0.005% s⁻¹ of full scale, at a 95% confidence level, which makes it possible to achieve precision measurement of the MFDC RSW welding current with Rogowski coils of the 0.1% accuracy class.
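The role of the integrator, and why leakage matters over a long pulse, can be sketched digitally. A Rogowski coil outputs v(t) = M·di/dt, so the current is recovered as i(t) = (1/M)∫v dt; a leakage time constant τ turns the ideal integrator into a leaky one, and a known τ can be compensated to first order by adding back the leaked charge, which loosely mirrors the feedback-adder idea. The coil sensitivity, time constant, and pulse shape below are assumptions, not the paper's values:

```python
import numpy as np

M = 1e-7        # coil sensitivity, V·s/A (assumed)
dt = 1e-5       # sample interval, s
tau = 5.0       # capacitive leakage time constant, s (assumed)

t = np.arange(0.0, 0.5, dt)
i_true = 50e3 * np.sin(2 * np.pi * 1000 * t) ** 2    # welding-scale pulse train
v = M * np.gradient(i_true, dt)                      # Rogowski coil voltage

y = 0.0
i_leaky = np.empty_like(t)
for k, vk in enumerate(v):
    y += (vk - y / tau) * dt                         # leaky integration, dy/dt = v - y/tau
    i_leaky[k] = y / M

# First-order compensation: reconstruct the leaked charge from the output itself.
i_comp = i_leaky + np.cumsum(i_leaky) * dt / tau
print("peak error before/after compensation:",
      np.max(np.abs(i_leaky - i_true)), np.max(np.abs(i_comp - i_true)))
```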
Which non-technical skills do junior doctors require to prescribe safely? A systematic review.
Dearden, Effie; Mellanby, Edward; Cameron, Helen; Harden, Jeni
2015-12-01
Prescribing errors are a major source of avoidable morbidity and mortality. Junior doctors write most in-hospital prescriptions and are the least experienced members of the healthcare team. This puts them at high risk of error and makes them attractive targets for interventions to improve prescription safety. Error analysis has shown a background of complex environments with multiple contributory conditions. Similar conditions in other high risk industries, such as aviation, have led to an increased understanding of so-called human factors and the use of non-technical skills (NTS) training to try to reduce error. To date no research has examined the NTS required for safe prescribing. The aim of this review was to develop a prototype NTS taxonomy for safe prescribing, by junior doctors, in hospital settings. A systematic search identified 14 studies analyzing prescribing behaviours and errors by junior doctors. Framework analysis was used to extract data from the studies and identify behaviours related to categories of NTS that might be relevant to safe and effective prescribing performance by junior doctors. Categories were derived from existing literature and inductively from the data. A prototype taxonomy of relevant categories (situational awareness, decision making, communication and team working, and task management) and elements was constructed. This prototype will form the basis of future work to create a tool that can be used for training and assessment of medical students and junior doctors to reduce prescribing error in the future. © 2015 The British Pharmacological Society.
Errorless Learning in Cognitive Rehabilitation: A Critical Review
Middleton, Erica L.; Schwartz, Myrna F.
2012-01-01
Cognitive rehabilitation research is increasingly exploring errorless learning interventions, which prioritize the avoidance of errors during treatment. The errorless learning approach was originally developed for patients with severe anterograde amnesia, who were deemed to be at particular risk for error learning. Errorless learning has since been investigated in other memory-impaired populations (e.g., Alzheimer's disease) and acquired aphasia. In typical errorless training, target information is presented to the participant for study or immediate reproduction, a method that prevents participants from attempting to retrieve target information from long-term memory (i.e., retrieval practice). However, assuring error elimination by preventing difficult (and error-permitting) retrieval practice is a potential major drawback of the errorless approach. This review begins with discussion of research in the psychology of learning and memory that demonstrates the importance of difficult (and potentially errorful) retrieval practice for robust learning and prolonged performance gains. We then review treatment research comparing errorless and errorful methods in amnesia and aphasia, where only the latter provides (difficult) retrieval practice opportunities. In each clinical domain we find the advantage of the errorless approach is limited and may be offset by the therapeutic potential of retrieval practice. Gaps in current knowledge are identified that preclude strong conclusions regarding a preference for errorless treatments over methods that prioritize difficult retrieval practice. We offer recommendations for future research aimed at a strong test of errorless learning treatments, which involves direct comparison with methods where retrieval practice effects are maximized for long-term gains. PMID:22247957
Schoenberg, Mike R; Rum, Ruba S
2017-11-01
Rapid, clear and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search from 1 Jan 2007 to 1 Aug 2016 was conducted to identify guidelines or consensus statements for the description and reporting of qualitative terms used to communicate neuropsychological test scores. The review found the use of confusing and overlapping terms to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system for communicating neuropsychological results for neurosurgical planning. The Q-Simple qualitative descriptor system is aimed at improving and standardizing communication of standardized neuropsychological test scores. Further research is needed to evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes the risk of communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.
Applying EVM to Satellite on Ground and In-Orbit Testing - Better Data in Less Time
NASA Technical Reports Server (NTRS)
Peters, Robert; Lebbink, Elizabeth-Klein; Lee, Victor; Model, Josh; Wezalis, Robert; Taylor, John
2008-01-01
Using Error Vector Magnitude (EVM) in satellite integration and test allows rapid verification of the Bit Error Rate (BER) performance of a satellite link and is particularly well suited to the measurement of low-bit-rate satellite links, where it can yield a major reduction in test time (about 3 weeks per satellite for the Geosynchronous Operational Environmental Satellite [GOES] satellites during ground test) and can provide diagnostic information. Empirical techniques developed to predict BER performance from EVM measurements, and lessons learned about applying these techniques during GOES N, O, and P integration testing and post-launch testing, are discussed.
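For readers unfamiliar with the metric, EVM is the rms distance between received and ideal constellation points, normalized to the reference-symbol power; for QPSK a common rule of thumb maps it to BER via the Gaussian Q-function. The Python sketch below illustrates that generic mapping, not the empirical GOES calibration:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
ideal = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / sqrt(2)   # unit-power QPSK
symbols = ideal[rng.integers(0, 4, 10_000)]
received = symbols + (rng.normal(0, 0.05, symbols.shape)
                      + 1j * rng.normal(0, 0.05, symbols.shape))

# rms EVM, normalized to the rms reference-symbol power
evm = sqrt(np.mean(np.abs(received - symbols)**2) / np.mean(np.abs(symbols)**2))
ber_pred = 0.5 * erfc(1.0 / (evm * sqrt(2)))   # QPSK rule of thumb: BER ~ Q(1/EVM)
print(f"EVM = {100*evm:.1f}%, predicted BER ~ {ber_pred:.2e}")
```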
Measurement of diffusion coefficients from solution rates of bubbles
NASA Technical Reports Server (NTRS)
Krieger, I. M.
1979-01-01
The rate of solution of a stationary bubble is limited by the diffusion of dissolved gas molecules away from the bubble surface. Diffusion coefficients computed from measured rates of solution give mean values higher than accepted literature values, with standard errors as high as 10% for a single observation. Better accuracy is achieved with sparingly soluble gases, small bubbles, and highly viscous liquids. Accuracy correlates with the Grashof number, indicating that free convection is the major source of error. Accuracy should, therefore, be greatly increased in a gravity-free environment. The fact that the bubble will need no support is an additional important advantage of Spacelab for this measurement.
'Systemic Failures' and 'Human Error' in Canadian TSB Aviation Reports Between 1996 and 2002
NASA Technical Reports Server (NTRS)
Holloway, C. M.; Johnson, C. W.
2004-01-01
This paper describes the results of an independent analysis of the primary and contributory causes of aviation accidents in Canada between 1996 and 2003. The purpose of the study was to assess the comparative frequency of a range of causal factors in the reporting of these adverse events. Our results suggest that the majority of these high consequence accidents were attributed to human error. A large number of reports also mentioned wider systemic issues, including the managerial and regulatory context of aviation operations. These issues are more likely to appear as contributory rather than primary causes in this set of accident reports.
A biomimetic algorithm for the improved detection of microarray features
NASA Astrophysics Data System (ADS)
Nicolau, Dan V., Jr.; Nicolau, Dan V.; Maini, Philip K.
2007-02-01
One of the major difficulties of microarray technology relates to the processing of large and - importantly - error-loaded images of the dots on the chip surface. Whatever the source of these errors, those introduced in the first stage of data acquisition - segmentation - are passed down to the subsequent processes, with deleterious results. As it has recently been demonstrated that biological systems have evolved algorithms that are mathematically efficient, this contribution tests an algorithm that mimics a bacterial-"patented" strategy for searching available space and nutrients in order to find, "zero in" on, and eventually delimit the features present on the microarray surface.
Detailed Uncertainty Analysis for Ares I Ascent Aerodynamics Wind Tunnel Database
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Hanke, Jeremy L.; Walker, Eric L.; Houlden, Heather P.
2008-01-01
A detailed uncertainty analysis for the Ares I ascent aero 6-DOF wind tunnel database is described. While the database itself is determined using only the test results for the latest configuration, the data used for the uncertainty analysis comes from four tests on two different configurations at the Boeing Polysonic Wind Tunnel in St. Louis and the Unitary Plan Wind Tunnel at NASA Langley Research Center. Four major error sources are considered: (1) systematic errors from the balance calibration curve fits and model + balance installation, (2) run-to-run repeatability, (3) boundary-layer transition fixing, and (4) tunnel-to-tunnel reproducibility.
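When the four sources are treated as independent, the usual practice for such budgets is to combine them in root-sum-square fashion into a single database uncertainty. A minimal Python sketch with placeholder magnitudes, not values from the Ares I analysis:

```python
import math

# Illustrative 1-sigma contributions, e.g. in normal-force-coefficient units
sources = {
    "balance calibration + installation": 0.010,  # systematic
    "run-to-run repeatability":           0.006,
    "boundary-layer transition fixing":   0.004,
    "tunnel-to-tunnel reproducibility":   0.008,
}
u_total = math.sqrt(sum(u**2 for u in sources.values()))  # root-sum-square
print(f"combined uncertainty = {u_total:.4f}")
```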
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.
2014-10-01
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s, leading to a ~20% reduction in the overall uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s. At the same time, land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use; thus efforts to reduce errors in fossil fuel emissions are necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the continued removal of approximately half of CO2 emissions from the atmosphere.
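The effect of temporally correlated random error, central to the framework above, can be made concrete: for AR(1)-correlated annual errors with autocorrelation ρ, the variance of an n-year mean is inflated relative to the independent case. A minimal sketch with illustrative numbers, not the paper's error model:

```python
import numpy as np

def sigma_of_mean(sigma, n, rho):
    """Std dev of the mean of n annual errors with AR(1) autocorrelation rho."""
    k = np.arange(1, n)
    var = (sigma**2 / n) * (1 + 2 * np.sum((1 - k / n) * rho**k))
    return np.sqrt(var)

sigma, n = 0.5, 10        # 1-sigma annual error (Pg C/yr), a decade of data
for rho in (0.0, 0.5, 0.95):
    print(f"rho={rho:4.2f}: sigma of decadal mean = "
          f"{sigma_of_mean(sigma, n, rho):.3f} Pg C/yr")
```

At ρ near 1 the decadal-mean error barely shrinks below the annual error, which is why correlated national reporting errors matter so much for fossil fuel emissions.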
Navigation Guidelines for Orbital Formation Flying Missions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
2003-01-01
Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver time, and hence are directly affected by errors in all the orbital elements, experience has shown that determination of orbit plane orientation and orbit shape to acceptable levels is less challenging than the determination of orbital period or semi-major axis. Furthermore, any differences among the members' semi-major axes are undesirable for a satellite formation, since they will lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or vice versa. Since the guidelines do not account for non-two-body perturbations, they may be viewed as useful preliminary design tools, rather than as the basis for mission navigation requirements, which should be based on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
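The guideline itself is compact: for near-circular two-body motion, an uncorrected semi-major-axis difference Δa produces a secular along-track drift of about 3πΔa per orbit, which converts a relative navigation accuracy into an expected drift rate or vice versa. A minimal Python sketch of that rule of thumb (the orbit and error values are illustrative, and the paper's full guidelines also cover eccentric orbits):

```python
import math

MU = 3.986004418e14          # Earth's GM, m^3/s^2

def along_track_drift(a_m, da_m):
    """Return (drift per orbit [m], drift per day [m]) for SMA error da_m."""
    n = math.sqrt(MU / a_m**3)              # mean motion, rad/s
    period = 2 * math.pi / n
    per_orbit = 3 * math.pi * da_m          # classic near-circular rule of thumb
    per_day = per_orbit * 86400.0 / period
    return per_orbit, per_day

per_orbit, per_day = along_track_drift(a_m=7.0e6, da_m=1.0)  # 1 m SMA error
print(f"~{per_orbit:.1f} m/orbit, ~{per_day:.0f} m/day")
```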
NASA Technical Reports Server (NTRS)
Evans, F. A.
1978-01-01
Space shuttle orbiter/IUS alignment transfer was evaluated. Although the orbiter alignment accuracy was originally believed to be the major contributor to the overall alignment transfer error, it was shown that orbiter alignment accuracy is not a factor affecting IUS alignment accuracy, if certain procedures are followed. Results are reported of alignment transfer accuracy analysis.
A General Chemistry Demonstration: Student Observations and Explanations.
ERIC Educational Resources Information Center
Silberman, Robert G.
1983-01-01
Out of 70 answers to questions concerning the chemistry involved in an "orange tornado" demonstration, only 10 were partially correct; the others were totally wrong or showed major errors in understanding, comprehension, and/or reasoning. The demonstration and the reactions involved, selected incorrect answers, and a substantially correct answer are discussed.…
Improved Techniques for Video Compression and Communication
ERIC Educational Resources Information Center
Chen, Haoming
2016-01-01
Video compression and communication has been an important field over the past decades and critical for many applications, e.g., video on demand, video-conferencing, and remote education. In many applications, providing low-delay and error-resilient video transmission and increasing the coding efficiency are two major challenges. Low-delay and…
40 CFR 1066.235 - Speed verification procedure.
Code of Federal Regulations, 2012 CFR
2012-07-01
... before testing, and after major maintenance. (c) Procedure. Use one of the following procedures to verify... dynamometer control circuits. Determine the speed error as follows: (i) Set the dynamometer to speed-control mode. Set the dynamometer speed to a value between 4.2 m/s and the maximum speed expected during...
Resolving Ethical Disputes Through Arbitration: An Alternative to Code Penalties.
ERIC Educational Resources Information Center
Barwis, Gail Lund
Arbitration cases involving journalism ethics can be grouped into three major categories: outside activities that lead to conflicts of interest, acceptance of gifts that compromise journalistic objectivity, and writing false or misleading information or failing to check facts or correct errors. In most instances, failure to adhere to ethical…
ERIC Educational Resources Information Center
Edwards, Gregory
2011-01-01
Security incidents resulting from human error or subversive actions have caused major financial losses, reduced business productivity or efficiency, and threatened national security. Some research suggests that information system security frameworks lack emphasis on human involvement as a significant cause for security problems in a rapidly…
Math Mistakes That Make the News
ERIC Educational Resources Information Center
Lewis, Heather A.
2015-01-01
Teachers often promote care in doing calculations, but for most students a single mistake rarely has major consequences. This article presents several real-life events in which relatively minor mathematical errors led to situations that ranged from public embarrassment to the loss of millions of dollars' worth of equipment. The stories here…
DIAGNOSTIC STUDY ON FINE PARTICULATE MATTER PREDICTIONS OF CMAQ IN THE SOUTHEASTERN U.S.
In this study, the authors use the process analysis tool embedded in CMAQ to examine major processes that govern the fate of key pollutants, identify the most influential processes that contribute to model errors, and guide the diagnostic and sensitivity studies aimed at improvin...
Infrared thermometry for deficit irrigation of peach trees
USDA-ARS?s Scientific Manuscript database
Water shortage has been a major concern for crop production in the western states of the USA and other arid regions in the world. Deficit irrigation can be used in some cropping systems as a potential water saving strategy to alleviate water shortage, however, the margin of error in irrigation manag...
Sleep Patterns and Its Relationship to Schooling and Family.
ERIC Educational Resources Information Center
Jones, Franklin Ross
Diagnostic classifications of sleep and arousal disorders fall into four major areas: disorders of initiating and maintaining sleep, disorders of excessive sleepiness, disorders of the sleep/wake pattern, and the parasomnias such as sleep walking, sleep talking, and night terrors. Another nomenclature classifies them into DIMS (disorders…
Training for Community Development Personnel in India.
ERIC Educational Resources Information Center
Makhija, H. R.
The book traces the development of training schemes in India for community development workers. It is divided into four parts which deal with: origin and growth of the Community Development Training Programme; problems encountered and the process of solutions through trial and error; major reorganization of the initial program and the research…
Applied statistics in ecology: common pitfalls and simple solutions
E. Ashley Steel; Maureen C. Kennedy; Patrick G. Cunningham; John S. Stanovick
2013-01-01
The most common statistical pitfalls in ecological research are those associated with data exploration, the logic of sampling and design, and the interpretation of statistical results. Although one can find published errors in calculations, the majority of statistical pitfalls result from incorrect logic or interpretation despite correct numerical calculations. There...
ERIC Educational Resources Information Center
Sylwester, Robert
1994-01-01
Studies show our emotional system is a complex, widely distributed, and error-prone system that defines our basic personality early in life and is quite resistant to change. This article describes our emotional system's major parts (the peptides that carry emotional information and the body and brain structures that activate and regulate emotions)…
Effect of satellite formations and imaging modes on global albedo estimation
NASA Astrophysics Data System (ADS)
Nag, Sreeja; Gatebe, Charles K.; Miller, David W.; de Weck, Olivier L.
2016-05-01
We confirm the applicability of using small satellite formation flight for multi-angular earth observation to retrieve global, narrow band, narrow field-of-view albedo. The value of formation flight is assessed using a coupled systems engineering and science evaluation model, driven by Model Based Systems Engineering and Observing System Simulation Experiments. Albedo errors are calculated against bi-directional reflectance data obtained from NASA airborne campaigns made by the Cloud Absorption Radiometer for the seven major surface types, binned using MODIS' land cover map - water, forest, cropland, grassland, snow, desert and cities. A full tradespace of architectures with three to eight satellites, maintainable orbits and imaging modes (collective payload pointing strategies) is assessed. For an arbitrary 4-sat formation, changing the reference, nadir-pointing satellite dynamically reduces the average albedo error to 0.003, from 0.006 found in the static reference case. Tracking pre-selected waypoints with all the satellites reduces the average error further to 0.001, allows better polar imaging and continued operations even with a broken formation. An albedo error of 0.001 translates to 1.36 W/m², or 0.4%, in Earth's outgoing radiation error. Estimation errors are found to be independent of the satellites' altitude and inclination, if the nadir-looking reference satellite is changed dynamically. The formation satellites are restricted to differ only in right ascension of planes and mean anomalies within slotted bounds. Three satellites in some specific formations show average albedo errors of less than 2% with respect to airborne ground-truth data, and seven satellites in any slotted formation outperform the monolithic error of 3.6%. In fact, the maximum possible albedo error, purely based on angular sampling, of 12% for monoliths is outperformed by a five-satellite formation in any slotted arrangement, and an eight-satellite formation can bring that error down fourfold to 3%. More than 70% ground spot overlap between the satellites is possible with 0.5° of pointing accuracy, 2 km of GPS accuracy and commands uplinked once a day. The formations can be maintained with less than 1 m/s of monthly ΔV per satellite.
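The quoted conversion from albedo error to outgoing-radiation error appears to be a scaling by a solar constant of about 1361 W/m² (0.001 × 1361 ≈ 1.36 W/m²). A back-of-envelope sketch of that bookkeeping, which may simplify the paper's exact radiative accounting:

```python
S0 = 1361.0                               # total solar irradiance, W/m^2 (assumed)

def radiation_error(albedo_error, s0=S0):
    """Outgoing-radiation error implied by an albedo estimation error."""
    return albedo_error * s0

for da in (0.001, 0.003, 0.006):
    print(f"albedo error {da:.3f} -> {radiation_error(da):.2f} W/m^2")
```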
Application of human reliability analysis to nursing errors in hospitals.
Inoue, Kayoko; Koizumi, Akio
2004-12-01
Adverse events in hospitals, such as in surgery, anesthesia, radiology, intensive care, internal medicine, and pharmacy, are of worldwide concern and it is important, therefore, to learn from such incidents. There are currently no appropriate tools based on state-of-the-art models available for the analysis of large bodies of medical incident reports. In this study, a new model was developed to facilitate medical error analysis in combination with quantitative risk assessment. This model enables detection of the organizational factors that underlie medical errors, and the expedition of decision making in terms of necessary action. Furthermore, it determines medical tasks as module practices and uses a unique coding system to describe incidents. This coding system has seven vectors for error classification: patient category, working shift, module practice, linkage chain (error type, direct threat, and indirect threat), medication, severity, and potential hazard. Such mathematical formulation permitted us to derive two parameters: error rates for module practices and weights for the aforementioned seven elements. The error rate of each module practice was calculated by dividing the annual number of incident reports of each module practice by the annual number of the corresponding module practice. The weight of a given element was calculated by the summation of incident report error rates for an element of interest. This model was applied specifically to nursing practices in six hospitals over a year; 5,339 incident reports with a total of 63,294,144 module practices conducted were analyzed. Quality assurance (QA) of our model was introduced by checking the records of quantities of practices and reproducibility of analysis of medical incident reports. For both items, QA guaranteed legitimacy of our model. Error rates for all module practices were approximately of the order of 10⁻⁴ in all hospitals. Three major organizational factors were found to underlie medical errors: "violation of rules" with a weight of 826 × 10⁻⁴, "failure of labor management" with a weight of 661 × 10⁻⁴, and "defects in the standardization of nursing practices" with a weight of 495 × 10⁻⁴.
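The two derived parameters are straightforward to compute: a module practice's error rate is its annual incident-report count divided by its annual practice count, and an element's weight is the sum of the error rates of the module practices named in the incident reports carrying that element. A minimal Python sketch with invented counts (the real study used 5,339 reports over some 63 million practices):

```python
from collections import defaultdict

practices = {"drug administration": 9_000_000, "transfusion": 150_000}  # annual counts
incidents = [  # (module practice, organizational factor), invented examples
    ("drug administration", "violation of rules"),
    ("drug administration", "failure of labor management"),
    ("transfusion", "violation of rules"),
]

annual_reports = defaultdict(int)
for practice, _ in incidents:
    annual_reports[practice] += 1

# Error rate per module practice: reports / practices
error_rate = {p: annual_reports[p] / n for p, n in practices.items()}

# Weight per factor: sum of error rates over the reports carrying that factor
weights = defaultdict(float)
for practice, factor in incidents:
    weights[factor] += error_rate[practice]

print({p: f"{r:.1e}" for p, r in error_rate.items()})
print({f: f"{w:.1e}" for f, w in weights.items()})
```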
Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems
Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia
2016-01-01
The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1, and time-varying fluctuations in the DOV excite Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from an off-line database and computing gravity vectors directly from the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and computing time delay. It is recommended for marine navigation that a gravity vector be calculated within 1 s and updated at intervals of no more than 100 s. To meet this demand, the time needed to calculate the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector have been removed and the Schuler oscillation has been attenuated effectively. In rugged terrain, the horizontal position error could be reduced by up to 48.85% of its regional maximum. The experimental results match the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of high-precision and long-term INSs. PMID:27999351
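Of the two methods, the off-line database route most directly meets the 1 s budget: deflection-of-the-vertical components are precomputed on a grid and interpolated at the vehicle position at run time. A minimal Python sketch of the bilinear-interpolation step, with a random coarse grid standing in for a real EGM2008-derived database (real DOV grids are much finer):

```python
import numpy as np

lat_grid = np.arange(-90.0, 90.5, 1.0)       # coarse stand-in grid, degrees
lon_grid = np.arange(0.0, 360.5, 1.0)
rng = np.random.default_rng(2)
xi_grid = rng.normal(0.0, 5e-5, (lat_grid.size, lon_grid.size))  # N-S DOV, rad

def bilinear(grid, lat, lon):
    """Bilinearly interpolate a gridded field at (lat, lon)."""
    i = np.searchsorted(lat_grid, lat) - 1
    j = np.searchsorted(lon_grid, lon) - 1
    fy = (lat - lat_grid[i]) / (lat_grid[i + 1] - lat_grid[i])
    fx = (lon - lon_grid[j]) / (lon_grid[j + 1] - lon_grid[j])
    return ((1 - fy) * (1 - fx) * grid[i, j] + (1 - fy) * fx * grid[i, j + 1]
            + fy * (1 - fx) * grid[i + 1, j] + fy * fx * grid[i + 1, j + 1])

xi = bilinear(xi_grid, 31.23, 121.47)        # deflection at the vehicle position
print(f"interpolated N-S deflection: {xi:.2e} rad")
```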
Souvestre, P A; Landrock, C K; Blaber, A P
2008-08-01
Human-factors-centered aviation accident analyses report that skill-based errors cause about 80% of all accidents, decision-making errors about 30%, and perceptual errors about 6%. In-flight decision-making error has long been recognized as a major avenue leading to incidents and accidents. Over the past three decades, tremendous and costly efforts have been made to clarify causation, roles and responsibility, and to elaborate various preventative and curative countermeasures blending state-of-the-art biomedical and technological advances with psychophysiological training strategies. In-flight statistics have not changed significantly, and a significant number of issues remain unresolved. The Fine Postural System and its corollary, Postural Deficiency Syndrome (PDS), both defined in the 1980s, are respectively neurophysiological and medical diagnostic models that reflect the regulatory status of central neural sensory-motor and cognitive controls. They have been used successfully in complex neurotraumatology and related rehabilitation for over two decades. Analysis of clinical data taken over a ten-year period from acute and chronic post-traumatic PDS patients shows a strong correlation between symptoms commonly exhibited before, alongside, or even after error, and sensory-motor or PDS-related symptoms. Examples are given of how PDS-related central sensory-motor control dysfunction can be correctly identified and monitored via a neurophysiological ocular-vestibular-postural monitoring system. The data presented provide strong evidence that a specific biomedical assessment methodology can lead to a better understanding of the in-flight adaptive neurophysiological, cognitive and perceptual dysfunctional status that could induce in-flight errors. How relevant human factors can be identified and leveraged to maintain optimal performance is also addressed.
Prevalence of refractive errors in children in India: a systematic review.
Sheeladevi, Sethu; Seelam, Bharani; Nukella, Phanindra B; Modi, Aditi; Ali, Rahul; Keay, Lisa
2018-04-22
Uncorrected refractive error is an avoidable cause of visual impairment which affects children in India. The objective of this review is to estimate the prevalence of refractive errors in children ≤ 15 years of age. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines were followed in this review. A detailed literature search was performed to include all population and school-based studies published from India between January 1990 and January 2017, using the Cochrane Library, Medline and Embase. The quality of the included studies was assessed based on a critical appraisal tool developed for systematic reviews of prevalence studies. Four population-based studies and eight school-based studies were included. The overall prevalence of refractive error per 100 children was 8.0 (CI: 7.4-8.1) and in schools it was 10.8 (CI: 10.5-11.2). The population-based prevalence of myopia, hyperopia (≥ +2.00 D) and astigmatism was 5.3 per cent, 4.0 per cent and 5.4 per cent, respectively. Combined refractive error and myopia alone were higher in urban areas compared to rural areas (odds ratio [OR]: 2.27 [CI: 2.09-2.45]) and (OR: 2.12 [CI: 1.79-2.50]), respectively. The prevalence of combined refractive errors and myopia alone in schools was higher among girls than boys (OR: 1.2 [CI: 1.1-1.3] and OR: 1.1 [CI: 1.1-1.2]), respectively. However, hyperopia was more prevalent among boys than girls in schools (OR: 2.1 [CI: 1.8-2.4]). Refractive error in children in India is a major public health problem and requires concerted efforts from various stakeholders including the health care workforce, education professionals and parents, to manage this issue. © 2018 Optometry Australia.
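The odds ratios quoted above come from 2×2 tables; with counts a, b, c, d the OR is ad/bc, and a Wald 95% CI follows from the standard error of the log-OR. A minimal Python sketch with invented counts, not the review's data:

```python
import math

a, b = 240, 760   # urban: myopic / non-myopic (invented counts)
c, d = 120, 880   # rural: myopic / non-myopic
odds_ratio = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```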
The prevalence of uncorrected refractive errors in underserved rural areas.
Hashemi, Hassan; Abbastabar, Hedayat; Yekta, Abbasali; Heydarian, Samira; Khabazkhoob, Mehdi
2017-12-01
To determine the prevalence of uncorrected refractive errors, need for spectacles, and the determinants of unmet need in underserved rural areas of Iran. In a cross-sectional study, multistage cluster sampling was done in 2 underserved rural areas of Iran. Then, all subjects underwent vision testing and ophthalmic examinations including the measurement of uncorrected visual acuity (UCVA), best corrected visual acuity, visual acuity with current spectacles, auto-refraction, retinoscopy, and subjective refraction. Need for spectacles was defined as UCVA worse than 20/40 in the better eye that could be corrected to better than 20/40 with suitable spectacles. Of the 3851 selected individuals, 3314 participated in the study. Among participants, 18.94% [95% confidence intervals (CI): 13.48-24.39] needed spectacles and 11.23% (95% CI: 7.57-14.89) had an unmet need. The prevalence of need for spectacles was 46.8% and 23.8% in myopic and hyperopic participants, respectively. The prevalence of unmet need was 27% in myopic, 15.8% in hyperopic, and 25.46% in astigmatic participants. Multiple logistic regression showed that education and type of refractive errors were associated with uncorrected refractive errors; the odds of uncorrected refractive errors were highest in illiterate participants, and the odds of unmet need were 12.13, 5.1, and 4.92 times higher in myopic, hyperopic and astigmatic participants as compared with emmetropic individuals. The prevalence of uncorrected refractive errors was rather high in our study. Since rural areas have less access to health care facilities, special attention to the correction of refractive errors in these areas, especially with inexpensive methods like spectacles, can prevent a major proportion of visual impairment.
Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE
NASA Astrophysics Data System (ADS)
Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.
2015-12-01
Groundwater change is difficult to monitor over large scales. One of the most successful approaches is the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state does, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time-variable errors vs. time-constant errors, across-scale errors vs. across-model errors, and error spectral content (across scales and across models). More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing fairer use in management applications and better integration of GRACE-based measurements with observations from other sources.
Error, stress, and teamwork in medicine and aviation: cross sectional surveys
Sexton, J Bryan; Thomas, Eric J; Helmreich, Robert L
2000-01-01
Objectives: To survey operating theatre and intensive care unit staff about attitudes concerning error, stress, and teamwork and to compare these attitudes with those of airline cockpit crew. Design: Cross sectional surveys. Setting: Urban teaching and non-teaching hospitals in the United States, Israel, Germany, Switzerland, and Italy. Major airlines around the world. Participants: 1033 doctors, nurses, fellows, and residents working in operating theatres and intensive care units and over 30 000 cockpit crew members (captains, first officers, and second officers). Main outcome measures: Perceptions of error, stress, and teamwork. Results: Pilots were least likely to deny the effects of fatigue on performance (26% v 70% of consultant surgeons and 47% of consultant anaesthetists). Most pilots (97%) and intensive care staff (94%) rejected steep hierarchies (in which senior team members are not open to input from junior members), but only 55% of consultant surgeons rejected such hierarchies. High levels of teamwork with consultant surgeons were reported by 73% of surgical residents, 64% of consultant surgeons, 39% of anaesthesia consultants, 28% of surgical nurses, 25% of anaesthetic nurses, and 10% of anaesthetic residents. Only a third of staff reported that errors are handled appropriately at their hospital. A third of intensive care staff did not acknowledge that they make errors. Over half of intensive care staff reported that they find it difficult to discuss mistakes. Conclusions: Medical staff reported that error is important but difficult to discuss and not handled well in their hospital. Barriers to discussing error are more important since medical staff seem to deny the effect of stress and fatigue on performance. Further problems include differing perceptions of teamwork among team members and reluctance of senior theatre staff to accept input from junior members. PMID:10720356