Sample records for error rate wer

  1. Reduction in the write error rate of voltage-induced dynamic magnetization switching using the reverse bias method

    NASA Astrophysics Data System (ADS)

    Ikeura, Takuro; Nozaki, Takayuki; Shiota, Yoichi; Yamamoto, Tatsuya; Imamura, Hiroshi; Kubota, Hitoshi; Fukushima, Akio; Suzuki, Yoshishige; Yuasa, Shinji

    2018-04-01

    Using macro-spin modeling, we studied the reduction in the write error rate (WER) of voltage-induced dynamic magnetization switching by enhancing the effective thermal stability of the free layer using a voltage-controlled magnetic anisotropy change. Marked reductions in WER can be achieved by introducing reverse bias voltage pulses both before and after the write pulse. This procedure suppresses the thermal fluctuations of magnetization in the initial and final states. The proposed reverse bias method can offer a new way of improving the writing stability of voltage-driven spintronic devices.
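
    As a purely illustrative aside (this is not the authors' macro-spin model), the write error rate in such studies is simply the fraction of write attempts that fail over many repeated trials. The toy Monte Carlo sketch below assumes a small-angle thermal distribution of the initial free-layer tilt whose width shrinks as the effective thermal stability factor delta grows, and counts a write as failed when the tilt exceeds an assumed critical angle; the delta values and the 20° threshold are arbitrary illustrative choices, not values from the paper.

```python
import math
import random

def estimate_wer(delta, theta_crit_deg=20.0, trials=100_000, seed=0):
    """Toy write-error-rate estimate: fraction of trials whose initial
    thermal tilt exceeds an assumed critical angle.

    delta          -- effective thermal stability factor (assumed value)
    theta_crit_deg -- illustrative critical tilt for a failed precessional write
    """
    rng = random.Random(seed)
    # Small-angle approximation: energy ~ delta * theta**2, so the thermal
    # tilt is roughly Gaussian with standard deviation 1/sqrt(2*delta) rad.
    sigma = 1.0 / math.sqrt(2.0 * delta)
    theta_crit = math.radians(theta_crit_deg)
    errors = sum(1 for _ in range(trials) if abs(rng.gauss(0.0, sigma)) > theta_crit)
    return errors / trials

# Raising the effective stability before and after the write pulse (the role the
# abstract assigns to the reverse bias) narrows the thermal spread and lowers WER.
print(estimate_wer(delta=40))   # baseline stability
print(estimate_wer(delta=60))   # enhanced stability -> lower WER
```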

  2. Combining multiple thresholding binarization values to improve OCR output

    NASA Astrophysics Data System (ADS)

    Lund, William B.; Kennard, Douglas J.; Ringger, Eric K.

    2013-01-01

    For noisy, historical documents, a high optical character recognition (OCR) word error rate (WER) can render the OCR text unusable. Since image binarization is often the method used to identify foreground pixels, a body of research seeks to improve image-wide binarization directly. Instead of relying on any one imperfect binarization technique, our method incorporates information from multiple simple thresholding binarizations of the same image to improve text output. Using a new corpus of 19th century newspaper grayscale images for which the text transcription is known, we observe WERs of 13.8% and higher using current binarization techniques and a state-of-the-art OCR engine. Our novel approach combines the OCR outputs from multiple thresholded images by aligning the text output and producing a lattice of word alternatives from which a lattice word error rate (LWER) is calculated. Our results show an LWER of 7.6% when aligning two threshold images and an LWER of 6.8% when aligning five. From the word lattice we commit to one hypothesis by applying the methods of Lund et al. (2011), achieving an improvement over the original OCR output and an 8.41% WER result on this data set.
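
    The WER figures quoted above (and elsewhere in these records) follow the standard definition: the word-level edit distance between hypothesis and reference, divided by the reference length. A minimal sketch of that computation (not the authors' lattice-combination code) is:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed by dynamic-programming edit distance over words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("the quick brown fox", "the quick brown box jumps"))  # 0.5
```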

  3. Correlation of anomalous write error rates and ferromagnetic resonance spectrum in spin-transfer-torque-magnetic-random-access-memory devices containing in-plane free layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evarts, Eric R.; Rippard, William H.; Pufall, Matthew R.

    In a small fraction of magnetic-tunnel-junction-based magnetic random-access memory devices with in-plane free layers, the write-error rates (WERs) are higher than expected on the basis of the macrospin or quasi-uniform magnetization reversal models. In devices with increased WERs, the product of effective resistance and area, tunneling magnetoresistance, and coercivity do not deviate from typical device properties. However, the field-swept, spin-torque, ferromagnetic resonance (FS-ST-FMR) spectra with an applied DC bias current deviate significantly for such devices. With a DC bias of 300 mV (producing 9.9 × 10⁶ A/cm²) or greater, these anomalous devices show an increase in the fraction of the power present in FS-ST-FMR modes corresponding to higher-order excitations of the free-layer magnetization. As much as 70% of the power is contained in higher-order modes compared to ≈20% in typical devices. Additionally, a shift in the uniform-mode resonant field that is correlated with the magnitude of the WER anomaly is detected at DC biases greater than 300 mV. These differences in the anomalous devices indicate a change in the micromagnetic resonant mode structure at high applied bias.

  4. Combining approaches to on-line handwriting information retrieval

    NASA Astrophysics Data System (ADS)

    Peña Saldarriaga, Sebastián; Viard-Gaudin, Christian; Morin, Emmanuel

    2010-01-01

    In this work, we propose to combine two quite different approaches for retrieving handwritten documents. Our hypothesis is that different retrieval algorithms should retrieve different sets of documents for the same query; therefore, significant improvements in retrieval performance can be expected. The first approach is based on information retrieval techniques carried out on the noisy texts obtained through handwriting recognition, while the second approach is recognition-free, using a word spotting algorithm. Results show that for texts having a word error rate (WER) lower than 23%, the performances obtained with the combined system are close to the performances obtained on clean digital texts. In addition, for poorly recognized texts (WER > 52%), an improvement of nearly 17% can be observed with respect to the best available baseline method.
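
    As a rough sketch of how two such retrieval runs can be combined (the exact fusion rule used by the authors is not given in the abstract, so a generic CombSUM-style fusion is assumed here), each system's scores are min-max normalised and summed per document:

```python
def normalize(scores):
    """Min-max normalise a {doc_id: score} dict to [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def combine_runs(run_a, run_b):
    """CombSUM-style fusion: sum of normalised scores; missing docs score 0."""
    a, b = normalize(run_a), normalize(run_b)
    docs = set(a) | set(b)
    fused = {doc: a.get(doc, 0.0) + b.get(doc, 0.0) for doc in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical scores from an OCR-text retrieval run and a word-spotting run.
ocr_run = {"doc1": 12.0, "doc2": 7.5, "doc3": 3.1}
spotting_run = {"doc2": 0.9, "doc3": 0.8, "doc4": 0.4}
print(combine_runs(ocr_run, spotting_run))
```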

  5. Retrospective Analysis of Clinical Performance of an Estonian Speech Recognition System for Radiology: Effects of Different Acoustic and Language Models.

    PubMed

    Paats, A; Alumäe, T; Meister, E; Fridolin, I

    2018-04-30

    The aim of this study was to retrospectively analyze the influence of different acoustic and language models in order to determine the most important effects on the clinical performance of an Estonian-language, non-commercial, radiology-oriented automatic speech recognition (ASR) system. An ASR system was developed for the Estonian language in the radiology domain by utilizing open-source software components (Kaldi toolkit, Thrax). The ASR system was trained with real radiology text reports and dictations collected during the development phases. The final version of the ASR system was tested by 11 radiologists who dictated 219 reports in total, in a spontaneous manner in a real clinical environment. The audio files collected in the final phase were used to measure the performance of different versions of the ASR system retrospectively. ASR system versions were evaluated by word error rate (WER) for each speaker and modality and by the WER difference between the first and last versions of the ASR system. The total average WER across all material improved from 18.4% for the first version (v1) to 5.8% for the last version (v8), which corresponds to a relative improvement of 68.5%. WER improvement was strongly related to modality and radiologist. In summary, the performance of the final ASR system version was close to optimal, delivering similar results across all modalities and being independent of the user, the complexity of the radiology reports, user experience, and speech characteristics.
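
    The 68.5% figure is the relative improvement implied by the absolute WERs reported for v1 and v8; a one-line check:

```python
wer_v1, wer_v8 = 18.4, 5.8                         # % WER from the abstract
print(f"{(wer_v1 - wer_v8) / wer_v1 * 100:.1f}%")  # -> 68.5%
```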

  6. Analysis of Factors Affecting System Performance in the ASpIRE Challenge

    DTIC Science & Technology

    2015-12-13

    performance in the ASpIRE (Automatic Speech recognition In Reverberant Environments) challenge. In particular, overall word error rate (WER) of the solver...systems is analyzed as a function of room, distance between talker and microphone, and microphone type. We also analyze speech activity detection...analysis will inform the design of future challenges and provide insight into the efficacy of current solutions addressing noisy reverberant speech

  7. Dynamic Monitoring of Blood-Brain Barrier Integrity using Water Exchange Index (WEI) During Mannitol and CO2 Challenges in Mouse Brain

    PubMed Central

    Huang, Shuning; Farrar, Christian T.; Dai, Guangping; Kwon, Seon Joo; Bogdanov, Alexei A.; Rosen, Bruce R.; Kim, Young R.

    2012-01-01

    The integrity of the blood-brain barrier (BBB) is critical to normal brain function. Traditional techniques for assessing BBB disruption rely heavily on the spatiotemporal analysis of extravasating contrast agents. But such methods based on the leakage of relatively large molecules are not suitable to detect subtle BBB impairment or to perform repeated measurements in a short time frame. Quantification of the water exchange rate constant (WER) across the BBB using strictly intravascular contrast agents could provide a much more sensitive method for quantifying the BBB integrity. For estimating the WER, we have recently devised a powerful new method using a water exchange index (WEI) biomarker and demonstrated BBB disruption in an acute stroke model. Here we confirm that the WEI is sensitive to even very subtle changes in the integrity of the BBB caused by (1) systemic hypercapnia and (2) low doses of a hyperosmolar solution. In addition, we have examined the sensitivity and accuracy of the WEI as a biomarker of the WER using computer simulation. In particular, the dependence of the WEI-WER relation on changes in vascular blood volume, T1 relaxation of cellular magnetization, and transcytolemmal water exchange was explored. The simulated WEI was found to vary linearly with the WER for typically encountered exchange rate constants (1–4 Hz) regardless of the blood volume. However, for very high WER (>5 Hz) the WEI became progressively more insensitive to increasing WER. The incorporation of transcytolemmal water exchange, using a three-compartment tissue model, helped to extend the linear WEI regime to slightly higher WER, but had no significant effect for most physiologically important water exchange rate constants (WER<4 Hz). Variation in the cellular T1 had no effect on the WEI. Using both theoretical and experimental approaches, our study validates the utility of the WEI biomarker for monitoring BBB integrity. PMID:23055278

  8. Orbiter multiplexer-demultiplexer (MDM)/Space Lab Bus Interface Unit (SL/BIU) serial data interface evaluation, volume 2

    NASA Technical Reports Server (NTRS)

    Tobey, G. L.

    1978-01-01

    Tests were performed to evaluate the operating characteristics of the interface between the Space Lab Bus Interface Unit (SL/BIU) and the Orbiter Multiplexer-Demultiplexer (MDM) serial data input-output (SIO) module. This volume contains the test equipment preparation procedures and a detailed description of the Nova/Input Output Processor Simulator (IOPS) software used during the data transfer tests to determine word error rates (WER).

  9. Dynamic monitoring of blood-brain barrier integrity using water exchange index (WEI) during mannitol and CO2 challenges in mouse brain.

    PubMed

    Huang, Shuning; Farrar, Christian T; Dai, Guangping; Kwon, Seon Joo; Bogdanov, Alexei A; Rosen, Bruce R; Kim, Young R

    2013-04-01

    The integrity of the blood-brain barrier (BBB) is critical to normal brain function. Traditional techniques for the assessment of BBB disruption rely heavily on the spatiotemporal analysis of extravasating contrast agents. However, such methods based on the leakage of relatively large molecules are not suitable for the detection of subtle BBB impairment or for the performance of repeated measurements in a short time frame. Quantification of the water exchange rate constant (WER) across the BBB using strictly intravascular contrast agents could provide a much more sensitive method for the quantification of the BBB integrity. To estimate WER, we have recently devised a powerful new method using a water exchange index (WEI) biomarker and demonstrated BBB disruption in an acute stroke model. Here, we confirm that WEI is sensitive to even very subtle changes in the integrity of the BBB caused by: (i) systemic hypercapnia and (ii) low doses of a hyperosmolar solution. In addition, we have examined the sensitivity and accuracy of WEI as a biomarker of WER using computer simulation. In particular, the dependence of the WEI-WER relation on changes in vascular blood volume, T1 relaxation of cellular magnetization and transcytolemmal water exchange was explored. Simulated WEI was found to vary linearly with WER for typically encountered exchange rate constants (1-4 Hz), regardless of the blood volume. However, for very high WER (>5 Hz), WEI became progressively more insensitive to increasing WER. The incorporation of transcytolemmal water exchange, using a three-compartment tissue model, helped to extend the linear WEI regime to slightly higher WER, but had no significant effect for most physiologically important WERs (WER < 4 Hz). Variation in cellular T1 had no effect on WEI. Using both theoretical and experimental approaches, our study validates the utility of the WEI biomarker for the monitoring of BBB integrity. Copyright © 2012 John Wiley & Sons, Ltd.

  10. Correlation of film density and wet etch rate in hydrofluoric acid of plasma enhanced atomic layer deposited silicon nitride

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Provine, J., E-mail: jprovine@stanford.edu; Schindler, Peter; Kim, Yongmin

    2016-06-15

    The continued scaling of transistors and memory elements has necessitated the development of atomic layer deposition (ALD) of silicon nitride (SiNx), particularly for use as a low-k dielectric spacer. One of the key material properties needed for SiNx films is a low wet etch rate (WER) in hydrofluoric (HF) acid. In this work, we report on the evaluation of multiple precursors for plasma-enhanced atomic layer deposition (PEALD) of SiNx and evaluate the films' WER in 100:1 dilutions of HF in H2O. The remote plasma capability available in PEALD enabled control of the density of the SiNx film: prolonged plasma exposure made films denser, which corresponded systematically to lower WER. We determined that there is a strong correlation between WER and the density of the film that extends across multiple precursors, PEALD reactors, and a variety of process conditions. Limiting all steps in the deposition to a maximum temperature of 350 °C, it was shown to be possible to achieve a WER in PEALD SiNx of 6.1 Å/min, which is similar to the WER of SiNx from LPCVD reactions at 850 °C.

  11. Free-Inertial and Damped-Inertial Navigation Mechanization and Error Equations

    DTIC Science & Technology

    1975-04-18

    AD-A014 356. Free-Inertial and Damped-Inertial Navigation Mechanization and Error Equations. Warren G. Heller, The Analytic Sciences Corporation, report TR-312-1-1, April 18, 1975. Technical report; period covered 8/20/73 - 8/20/74.

  12. The Work Endurance Recovery Method for Quantifying Training Loads in Judo.

    PubMed

    Morales, Jose; Franchini, Emerson; Garcia-Massó, Xavier; Solana-Tramunt, Mónica; Buscà, Bernat; González, Luis-Millán

    2016-10-01

    The aim of this study was to adapt the work endurance recovery (WER) method, based on randori maximal time to exhaustion (RMTE), for combat situations in judo. Eleven international-standard judo athletes (7 men and 4 women; mean age 20.73 ± 2.49 y, height 1.72 ± 0.11 m, body mass 67.36 ± 10.67 kg) were recruited to take part in the study. All participants performed a maximal incremental test (MIT), a Wingate test (WIN), a Special Judo Fitness Test (SJFT), and 2 RMTE tests. They then took part in a session at an international training camp in Barcelona, Spain, in which 4 methods of load quantification were implemented: the WER method, the Stagno method, the Lucia method, and the session rating of perceived exertion (RPEsession). RMTE demonstrated a very high test-retest reliability (intraclass correlation coefficient = .91), and correlations with the performance tests ranged from moderate to high: RMTE and MIT (r = .66), RMTE and WIN variables (r = .38-.53), RMTE and SJFT variables (r = .74-.77). The correlation between the WER method, which considers time to exhaustion, and the other systems for quantifying training load was high: WER and RPEsession (r = .87), WER and Stagno (r = .77), WER and Lucia (r = .73). A comparative repeated-measures analysis of variance of the normalized values of the quantification did not yield statistically significant differences. The WER method using RMTE is highly adaptable for quantifying randori judo sessions and enables one to plan individualized training loads a priori.

  13. Severity-Based Adaptation with Limited Data for ASR to Aid Dysarthric Speakers

    PubMed Central

    Mustafa, Mumtaz Begum; Salim, Siti Salwah; Mohamed, Noraini; Al-Qatab, Bassam; Siong, Chng Eng

    2014-01-01

    Automatic speech recognition (ASR) is currently used in many assistive technologies, such as helping individuals with speech impairment in their communication ability. One challenge in ASR for speech-impaired individuals is the difficulty in obtaining a good speech database of impaired speakers for building an effective speech acoustic model. Because there are very few existing databases of impaired speech, which are also limited in size, the obvious solution to build a speech acoustic model of impaired speech is by employing adaptation techniques. However, issues that have not been addressed in existing studies in the area of adaptation for speech impairment are as follows: (1) identifying the most effective adaptation technique for impaired speech; and (2) the use of suitable source models to build an effective impaired-speech acoustic model. This research investigates the above-mentioned two issues for dysarthria, a type of speech impairment affecting millions of people. We applied both unimpaired and impaired speech as the source model with well-known adaptation techniques such as maximum likelihood linear regression (MLLR) and constrained MLLR (C-MLLR). The recognition accuracy of each impaired-speech acoustic model is measured in terms of word error rate (WER), with further assessments including phoneme insertion, substitution and deletion rates. Unimpaired speech, when combined with limited high-quality impaired-speech data, improves the performance of ASR systems in recognising severely impaired dysarthric speech. The C-MLLR adaptation technique was also found to be better than MLLR in recognising mildly and moderately impaired speech, based on statistical analysis of the WER. Phoneme substitution was found to be the biggest contributor to WER in dysarthric speech at all levels of severity. The results show that speech acoustic models derived from suitable adaptation techniques improve the performance of ASR systems in recognising impaired speech with limited adaptation data. PMID:24466004
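
    The insertion, substitution, and deletion rates mentioned above fall out of the same dynamic-programming alignment used for WER, by backtracing the edit-distance table. The sketch below is a generic illustration (not the study's tooling) applied to space-separated tokens, which could be words or phonemes:

```python
def error_counts(ref_tokens, hyp_tokens):
    """Count substitutions, deletions and insertions from the optimal
    edit-distance alignment between reference and hypothesis tokens."""
    R, H = len(ref_tokens), len(hyp_tokens)
    d = [[0] * (H + 1) for _ in range(R + 1)]
    for i in range(R + 1):
        d[i][0] = i
    for j in range(H + 1):
        d[0][j] = j
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            sub = 0 if ref_tokens[i - 1] == hyp_tokens[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + sub)
    # Backtrace to attribute each edit to a substitution, deletion or insertion.
    i, j, subs, dels, ins = R, H, 0, 0, 0
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] and ref_tokens[i - 1] == hyp_tokens[j - 1]:
            i, j = i - 1, j - 1                      # match
        elif i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + 1:
            subs, i, j = subs + 1, i - 1, j - 1      # substitution
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            dels, i = dels + 1, i - 1                # deletion
        else:
            ins, j = ins + 1, j - 1                  # insertion
    return subs, dels, ins

print(error_counts("k ae t".split(), "k ah t s".split()))  # (1, 0, 1): one substitution, one insertion
```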

  14. Functional Analysis of the Epidermal-Specific MYB Genes CAPRICE and WEREWOLF in Arabidopsis

    PubMed Central

    Tominaga, Rumi; Iwata, Mineko; Okada, Kiyotaka; Wada, Takuji

    2007-01-01

    Epidermis cell differentiation in Arabidopsis thaliana is a model system for understanding the developmental end state of plant cells. Two types of MYB transcription factors, R2R3-MYB and R3-MYB, are involved in cell fate determination. To examine the molecular basis of this process, we analyzed the functional relationship of the R2R3-type MYB gene WEREWOLF (WER) and the R3-type MYB gene CAPRICE (CPC). Chimeric constructs made from the R3 MYB regions of WER and CPC used in reciprocal complementation experiments showed that the CPC R3 region cannot functionally substitute for the WER R3 region in the differentiation of hairless cells. However, WER R3 can substantially substitute for CPC R3. There are no differences in yeast interaction assays of WER or WER chimera proteins with GLABRA3 (GL3) or ENHANCER OF GLABRA3 (EGL3). CPC and CPC chimera proteins also have similar activity in preventing GL3 WER and EGL3 WER interactions. Furthermore, we showed by gel mobility shift assays that WER chimera proteins do not bind to the GL2 promoter region. However, a CPC chimera protein, which harbors the WER R3 motif, still binds to the GL2 promoter region. PMID:17644729

  15. Functional analysis of the epidermal-specific MYB genes CAPRICE and WEREWOLF in Arabidopsis.

    PubMed

    Tominaga, Rumi; Iwata, Mineko; Okada, Kiyotaka; Wada, Takuji

    2007-07-01

    Epidermis cell differentiation in Arabidopsis thaliana is a model system for understanding the developmental end state of plant cells. Two types of MYB transcription factors, R2R3-MYB and R3-MYB, are involved in cell fate determination. To examine the molecular basis of this process, we analyzed the functional relationship of the R2R3-type MYB gene WEREWOLF (WER) and the R3-type MYB gene CAPRICE (CPC). Chimeric constructs made from the R3 MYB regions of WER and CPC used in reciprocal complementation experiments showed that the CPC R3 region cannot functionally substitute for the WER R3 region in the differentiation of hairless cells. However, WER R3 can substantially substitute for CPC R3. There are no differences in yeast interaction assays of WER or WER chimera proteins with GLABRA3 (GL3) or ENHANCER OF GLABRA3 (EGL3). CPC and CPC chimera proteins also have similar activity in preventing GL3 WER and EGL3 WER interactions. Furthermore, we showed by gel mobility shift assays that WER chimera proteins do not bind to the GL2 promoter region. However, a CPC chimera protein, which harbors the WER R3 motif, still binds to the GL2 promoter region.

  16. The WEREWOLF MYB protein directly regulates CAPRICE transcription during cell fate specification in the Arabidopsis root epidermis.

    PubMed

    Ryu, Kook Hui; Kang, Yeon Hee; Park, Young-hwan; Hwang, Ildoo; Schiefelbein, John; Lee, Myeong Min

    2005-11-01

    The Arabidopsis root epidermis is composed of two types of cells, hair cells and non-hair cells, and their fate is determined in a position-dependent manner. WEREWOLF (WER), a R2R3 MYB protein, has been shown genetically to function as a master regulator to control both of the epidermal cell fates. To directly test the proposed role of WER in this system, we examined its subcellular localization and defined its transcriptional activation properties. We show that a WER-GFP fusion protein is functional and accumulates in the nucleus of the N-position cells in the Arabidopsis root epidermis, as expected for a transcriptional regulator. We also find that a modified WER protein with a strong activation domain (WER-VP16) promotes the formation of both epidermal cell types, supporting the view that WER specifies both cell fates. In addition, we used the glucocorticoid receptor (GR) inducible system to show that CPC transcription is regulated directly by WER. Using EMSA, we found two WER-binding sites (WBSs; WBSI and WBSII) in the CPC promoter. WER-WBSI binding was confirmed in vivo using the yeast one-hybrid assay. Binding between the WER protein and both WBSs (WBSI and WBSII), and the importance of the two WBSs in CPC promoter activity were confirmed in Arabidopsis. These results provide experimental support for the proposed role of WER as an activator of gene transcription during the specification of both epidermal cell fates.

  17. A speech pronunciation practice system for speech-impaired children: A study to measure its success.

    PubMed

    Salim, Siti Salwah; Mustafa, Mumtaz Begum Binti Peer; Asemi, Adeleh; Ahmad, Azila; Mohamed, Noraini; Ghazali, Kamila Binti

    2016-09-01

    The speech pronunciation practice (SPP) system enables children with speech impairments to practise and improve their speech pronunciation. However, little is known about the surrogate measures of the SPP system. This research aims to measure the success and effectiveness of the SPP system using three surrogate measures: usage (frequency of use), performance (recognition accuracy) and satisfaction (children's subjective reactions), and to determine how these measures are aligned with the success of the SPP system, as well as with each other. We measured the absolute change in the word error rate (WER) between pre- and post-training using an ANOVA test. Correlation coefficient (CC) analysis was conducted to test the relation between the surrogate measures, while a Structural Equation Model (SEM) was used to investigate the causal relations between the measures. The CC test results indicate a positive correlation between the surrogate measures. The SEM supports all the proposed hypotheses. The ANOVA results indicate that SPP is effective in reducing the WER of impaired speech. The SPP system is an effective assistive tool, especially for high levels of severity. We found that performance is a mediator of the relation between "usage" and "satisfaction". Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. CHRONIC ZINC SCREENING WATER EFFECT RATIO FOR THE H-12 OUTFALL, SAVANNAH RIVER SITE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coughlin, D.; Looney, B.; Millings, M.

    2009-01-13

    In response to proposed Zn limits for the NPDES outfall H-12, a Zn screening Water Effects Ratio (WER) study was conducted to determine if a full site-specific WER is warranted. Using standard assumptions for relating the lab results to the stream, the screening WER data were consistent with the proposed Zn limit and suggest that a full WER would result in a similar limit. Addition of a humate amendment to the outfall water reduced Zn toxicity, but the toxicity reduction was relatively small and unlikely to impact proposed Zn limits. The screening WER data indicated that the time and expense required to perform a full WER for Zn is not warranted.

  19. Cell fate in the Arabidopsis root epidermis is determined by competition between WEREWOLF and CAPRICE.

    PubMed

    Song, Sang-Kee; Ryu, Kook Hui; Kang, Yeon Hee; Song, Jae Hyo; Cho, Young-Hee; Yoo, Sang-Dong; Schiefelbein, John; Lee, Myeong Min

    2011-11-01

    The root hair and nonhair cells in the Arabidopsis (Arabidopsis thaliana) root epidermis are specified by a suite of transcriptional regulators. Two of these are WEREWOLF (WER) and CAPRICE (CPC), which encode MYB transcription factors that are required for promoting the nonhair cell fate and the hair cell fate, respectively. However, the precise function and relationship between these transcriptional regulators have not been fully defined experimentally. Here, we examine these issues by misexpressing the WER gene using the GAL4-upstream activation sequence transactivation system. We find that WER overexpression in the Arabidopsis root tip is sufficient to cause epidermal cells to adopt the nonhair cell fate through direct induction of GLABRA2 (GL2) gene expression. We also show that GLABRA3 (GL3) and ENHANCER OF GLABRA3 (EGL3), two closely related bHLH proteins, are required for the action of the overexpressed WER and that WER interacts with these bHLHs in plant cells. Furthermore, we find that CPC suppresses the WER overexpression phenotype quantitatively. These results show that WER acts together with GL3/EGL3 to induce GL2 expression and that WER and CPC compete with one another to define cell fates in the Arabidopsis root epidermis.

  20. Cell Fate in the Arabidopsis Root Epidermis Is Determined by Competition between WEREWOLF and CAPRICE

    PubMed Central

    Song, Sang-Kee; Ryu, Kook Hui; Kang, Yeon Hee; Song, Jae Hyo; Cho, Young-Hee; Yoo, Sang-Dong; Schiefelbein, John; Lee, Myeong Min

    2011-01-01

    The root hair and nonhair cells in the Arabidopsis (Arabidopsis thaliana) root epidermis are specified by a suite of transcriptional regulators. Two of these are WEREWOLF (WER) and CAPRICE (CPC), which encode MYB transcription factors that are required for promoting the nonhair cell fate and the hair cell fate, respectively. However, the precise function and relationship between these transcriptional regulators have not been fully defined experimentally. Here, we examine these issues by misexpressing the WER gene using the GAL4-upstream activation sequence transactivation system. We find that WER overexpression in the Arabidopsis root tip is sufficient to cause epidermal cells to adopt the nonhair cell fate through direct induction of GLABRA2 (GL2) gene expression. We also show that GLABRA3 (GL3) and ENHANCER OF GLABRA3 (EGL3), two closely related bHLH proteins, are required for the action of the overexpressed WER and that WER interacts with these bHLHs in plant cells. Furthermore, we find that CPC suppresses the WER overexpression phenotype quantitatively. These results show that WER acts together with GL3/EGL3 to induce GL2 expression and that WER and CPC compete with one another to define cell fates in the Arabidopsis root epidermis. PMID:21914815

  1. Prediction of Wind Energy Resources (PoWER) Users Guide

    DTIC Science & Technology

    2016-01-01

    ARL-TR-7573, January 2016. US Army Research Laboratory. Prediction of Wind Energy Resources (PoWER) User's Guide, by David P Sauter. Final report; dates covered 09/2015–11/2015.

  2. Functional divergence of MYB-related genes, WEREWOLF and AtMYB23 in Arabidopsis.

    PubMed

    Tominaga-Wada, Rumi; Nukumizu, Yuka; Sato, Shusei; Kato, Tomohiko; Tabata, Satoshi; Wada, Takuji

    2012-01-01

    Epidermal cell differentiation in Arabidopsis is studied as a model system to understand the mechanisms that determine the developmental end state of plant cells. MYB-related transcription factors are involved in cell fate determination. To examine the molecular basis of this process, we analyzed the functional relationship of two R2R3-type MYB genes, AtMYB23 (MYB23) and WEREWOLF (WER). MYB23 is involved in leaf trichome formation. WER represses root-hair formation. Swapping domains between MYB23 and WER, we found that a low homology region of MYB23 might be involved in ectopic trichome initiation on hypocotyls. MYB23 and all MYB23-WER (MW) chimeric transgenes rescued the increased root-hair phenotype of the wer-1 mutant. Although WER did not rescue the gl1-1 no-trichome phenotype, MYB23 and all MW chimeric transgenes rescued gl1-1. These results suggest that MYB23 acquired a specific function for trichome differentiation during evolution.

  3. Epidermal patterning genes are active during embryogenesis in Arabidopsis.

    PubMed

    Costa, Silvia; Dolan, Liam

    2003-07-01

    Epidermal cells in the root of Arabidopsis seedling differentiate either as hair or non-hair cells, while in the hypocotyl they become either stomatal or elongated cells. WEREWOLF (WER) and GLABRA2 (GL2) are positive regulators of non-hair and elongated cell development. CAPRICE (CPC) is a positive regulator of hair cell development in the root. We show that WER, GL2 and CPC are expressed and active during the stages of embryogenesis when the pattern of cells in the epidermis of the root-hypocotyl axis forms. GL2 is first expressed in the future epidermis in the heart stage embryo and its expression is progressively restricted to those cells that will acquire a non-hair identity in the transition between torpedo and mature stage. The expression of GL2 at the heart stage requires WER function. WER and CPC are transiently expressed throughout the root epidermal layer in the torpedo stage embryo when the cell-specific pattern of GL2 expression is being established in the epidermis. We also show that WER positively regulates CPC transcription and GL2 negatively regulates WER transcription in the mature embryo. We propose that the restriction of GL2 to the future non-hair cells in the root epidermis can be correlated with the activities of WER and CPC during torpedo stage. In the embryonic hypocotyl we show that WER controls GL2 expression. We also provide evidence indicating that CPC may also regulate GL2 expression in the hypocotyl.

  4. Amino acid substitution converts WEREWOLF function from an activator to a repressor of Arabidopsis non-hair cell development.

    PubMed

    Tominaga-Wada, Rumi; Nukumizu, Yuka; Wada, Takuji

    2012-02-01

    Root hair cell or non-hair cell fate determination in the Arabidopsis thaliana root epidermis is a model system for plant cell development. Two types of MYB transcription factors, the R2R3-type MYB WEREWOLF (WER) and the R3-type MYB CAPRICE (CPC), are involved in this cell fate determination process. To study the molecular basis of this process, we analyzed the functional relationship of WER and CPC. WER-CPC chimeric constructs were made from WER in which all or parts of the MYB R3 region were replaced with the corresponding regions from CPC R3, and the constructs were introduced into the cpc-2 mutant. Although the WER gene did not rescue the cpc-2 mutant 'small number of root hairs' phenotype, the WER-CPC chimera with a two-amino-acid substitution (WC6) completely rescued the cpc-2 mutant phenotype. Furthermore, the WER-CPC chimera with a 37-amino-acid substitution (WC5) over-rescued the cpc-2 mutant, inducing 2.5 times more root hairs than the wild type. Consistent with this phenotype, GL2 gene expression was strongly reduced in WC5 in a cpc-2 background. Our results suggest that swapping as few as two amino acids is sufficient to convert WER to CPC function. Therefore, these key residues may have strongly contributed to the selection of these important functions over evolution. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  5. Performance evaluation of a newly developed variable rate sprayer for nursery liner applications

    USDA-ARS?s Scientific Manuscript database

    An experimental variable-rate sprayer designed for liner applications was tested by comparing its spray deposit, coverage, and droplet density inside canopies of six nursery liner varieties with constant-rate applications. Spray samplers, including water sensitive papers (WSP) and nylon screens, wer...

  6. How well does multiple OCR error correction generalize?

    NASA Astrophysics Data System (ADS)

    Lund, William B.; Ringger, Eric K.; Walker, Daniel D.

    2013-12-01

    As the digitization of historical documents, such as newspapers, becomes more common, the need of the archive patron for accurate digital text from those documents increases. Building on our earlier work, the contributions of this paper are: 1. demonstrating the applicability of novel methods for correcting optical character recognition (OCR) on disparate data sets, including a new synthetic training set, 2. enhancing the correction algorithm with novel features, and 3. assessing the data requirements of the correction learning method. First, we correct errors using conditional random fields (CRF) trained on synthetic training data sets in order to demonstrate the applicability of the methodology to unrelated test sets. Second, we show the strength of lexical features from the training sets on two unrelated test sets, yielding a relative reduction in word error rate on the test sets of 6.52%. New features capture the recurrence of hypothesis tokens and yield an additional relative reduction in WER of 2.30%. Further, we show that only 2.0% of the full training corpus of over 500,000 feature cases is needed to achieve correction results comparable to those using the entire training corpus, effectively reducing both the complexity of the training process and the learned correction model.

  7. Differences among Job Positions Related to Communication Errors at Construction Sites

    NASA Astrophysics Data System (ADS)

    Takahashi, Akiko; Ishida, Toshiro

    In a previous study, we classified the communication errors at construction sites as faulty intention and message pattern, inadequate channel pattern, and faulty comprehension pattern. This study seeks to evaluate the degree of risk of communication errors and to investigate differences among people in various job positions in perception of communication error risk. Questionnaires based on the previous study were administered to construction workers (n=811; 149 administrators, 208 foremen and 454 workers). Administrators evaluated all patterns of communication error risk equally. However, foremen and workers evaluated communication error risk differently in each pattern. The common contributing factors to all patterns were inadequate arrangements before work and inadequate confirmation. Some factors were common among patterns but other factors were particular to a specific pattern. To help prevent future accidents at construction sites, administrators should understand how people in various job positions perceive communication errors and propose human factors measures to prevent such errors.

  8. WEREWOLF, a Regulator of Root Hair Pattern Formation, Controls Flowering Time through the Regulation of FT mRNA Stability

    PubMed Central

    Seo, Eunjoo; Yu, Jihyeon; Ryu, Kook Hui; Lee, Myeong Min; Lee, Ilha

    2011-01-01

    A key floral activator, FT, integrates stimuli from long-day, vernalization, and autonomous pathways and triggers flowering by directly regulating floral meristem identity genes in Arabidopsis (Arabidopsis thaliana). Since a small amount of FT transcript is sufficient for flowering, the FT level is strictly regulated by diverse genes. In this study, we show that WEREWOLF (WER), a MYB transcription factor regulating root hair pattern, is another regulator of FT. The mutant wer flowers late in long days but normally in short days and shows a weak sensitivity to vernalization, which indicates that WER controls flowering time through the photoperiod pathway. The expression and double mutant analyses showed that WER modulates FT transcript level independent of CONSTANS and FLOWERING LOCUS C. The histological analysis of WER shows that it is expressed in the epidermis of leaves, where FT is not expressed. Consistently, WER regulates not the transcription but the stability of FT mRNA. Our results reveal a novel regulatory mechanism of FT that is non-cell-autonomous. PMID:21653190

  9. WEREWOLF, a regulator of root hair pattern formation, controls flowering time through the regulation of FT mRNA stability.

    PubMed

    Seo, Eunjoo; Yu, Jihyeon; Ryu, Kook Hui; Lee, Myeong Min; Lee, Ilha

    2011-08-01

    A key floral activator, FT, integrates stimuli from long-day, vernalization, and autonomous pathways and triggers flowering by directly regulating floral meristem identity genes in Arabidopsis (Arabidopsis thaliana). Since a small amount of FT transcript is sufficient for flowering, the FT level is strictly regulated by diverse genes. In this study, we show that WEREWOLF (WER), a MYB transcription factor regulating root hair pattern, is another regulator of FT. The mutant wer flowers late in long days but normally in short days and shows a weak sensitivity to vernalization, which indicates that WER controls flowering time through the photoperiod pathway. The expression and double mutant analyses showed that WER modulates FT transcript level independent of CONSTANS and FLOWERING LOCUS C. The histological analysis of WER shows that it is expressed in the epidermis of leaves, where FT is not expressed. Consistently, WER regulates not the transcription but the stability of FT mRNA. Our results reveal a novel regulatory mechanism of FT that is non-cell-autonomous.

  10. Radiation studies of optical and electronic components used in astronomical satellite studies

    NASA Technical Reports Server (NTRS)

    Becher, J.; Kernell, R. L.

    1981-01-01

    The synchronous orbit of the IUE carries the satellite through the Earth's outer electron belt. A 40 mCi Sr90 source was used to simulate these electrons. A 5 mCi source of Co60 was used to simulate bremsstrahlung. A 10 MeV electron linac and a 1.7 MeV electron Van de Graaff were used to investigate the energy dependence of radiation effects and to perform irradiations at a high flux rate. A 100 MeV proton cyclotron was used to simulate cosmic rays. Results are presented for three instrument systems of the IUE, and measurements for specific components are reported. The three instrument systems were the ultraviolet converter, the fine error sensor (FES), and the SEC vidicon camera tube. The components were optical glasses, electronic components, silicon photodiodes, and UV window materials.

  11. Parametric study of helicopter aircraft systems costs and weights

    NASA Technical Reports Server (NTRS)

    Beltramo, M. N.

    1980-01-01

    Weight estimating relationships (WERs) and recurring production cost estimating relationships (CERs) were developed for helicopters at the system level. The WERs estimate system-level weight based on performance or design characteristics that are available during concept formulation or the preliminary design phase. The CER (or CERs in some cases) for each system utilizes weight (either actual or estimated using the appropriate WER) and production quantity as the key parameters.
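
    For illustration only, a CER of the general kind described here can be fitted as a power law in weight and production quantity via least squares in log space. The functional form and the numbers below are assumptions for the sketch, not values or data from the study:

```python
import numpy as np

def fit_power_law_cer(weights, quantities, costs):
    """Fit cost = a * weight**b * quantity**c by least squares in log space.
    Returns (a, b, c)."""
    X = np.column_stack([np.ones(len(costs)), np.log(weights), np.log(quantities)])
    coef, *_ = np.linalg.lstsq(X, np.log(costs), rcond=None)
    log_a, b, c = coef
    return np.exp(log_a), b, c

# Hypothetical illustrative data (system weight, units produced, cost) -- not from the study.
w = np.array([500.0, 800.0, 1200.0, 2000.0])
q = np.array([10.0, 50.0, 100.0, 300.0])
cost = np.array([120.0, 150.0, 190.0, 240.0])
a, b, c = fit_power_law_cer(w, q, cost)
print(f"cost ~ {a:.2f} * W^{b:.2f} * Q^{c:.2f}")
```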

  12. Allgöwer-Donati Versus Vertical Mattress Suture Technique Impact on Perfusion in Ankle Fracture Surgery: A Randomized Clinical Trial Using Intraoperative Angiography.

    PubMed

    Shannon, Steven F; Houdek, Matthew T; Wyles, Cody C; Yuan, Brandon J; Cross, William W; Cass, Joseph R; Sems, Stephen A

    2017-02-01

    The purpose of this study was to evaluate which primary wound closure technique for ankle fractures affords the most robust perfusion as measured by laser-assisted indocyanine green angiography: Allgöwer-Donati or vertical mattress. Prospective, randomized. Level 1 Academic Trauma Center. Thirty patients undergoing open reduction internal fixation for ankle fractures were prospectively randomized to Allgöwer-Donati (n = 15) or vertical mattress (n = 15) closure. Demographics were similar for both cohorts with respect to age, sex, body mass index, surgical timing, and OTA/AO fracture classification. Skin perfusion (mean incision perfusion and mean perfusion impairment) was quantified in fluorescence units with laser-assisted indocyanine green angiography along the lateral incision as well as anterior and posterior to the incision at 30 separate locations. Minimum follow-up was 3 months with a mean follow-up 4.7 months. Allgöwer-Donati enabled superior perfusion compared with the vertical mattress suture technique. Mean incision perfusion for Allgöwer-Donati was 51 (SD = 13) and for vertical mattress was 28 (SD = 10, P < 0.0001). Mean perfusion impairment was less in the Allgöwer-Donati cohort (12.8, SD = 9) compared with that in the vertical mattress cohort (23.4, SD = 14; P = 0.03). One patient in each cohort experienced a wound complication. The Allgöwer-Donati suture technique offers improved incision perfusion compared with vertical mattress closure after open reduction internal fixation of ankle fractures. Theoretically, this may enhance soft tissue healing and decrease the risk of wound complications. Surgeons may take this into consideration when deciding closure techniques for ankle fractures. Therapeutic Level I. See Instructions for Authors for a complete description of levels of evidence.

  13. Wind erosion risk in the southwest of Buenos Aires Province, Argentina, and its relationship to the productivity index

    NASA Astrophysics Data System (ADS)

    Silenzi, Juan C.; Echeverría, Nora E.; Vallejos, Adrián G.; Bouza, Mariana E.; De Lucia, Martín P.

    2012-01-01

    Wind erosion risk (WER) for soils of each municipality in the southwest (SW) of Buenos Aires Province (10,491,172 ha) was determined using the wind erosion equation (WEQ) model. WER results from multiplying the soil erodibility index (I) of the soil by the climatic factor (C). The WER (Mg ha⁻¹ year⁻¹) of each municipality was: Bahía Blanca: 22.4, Coronel Dorrego: 18.6, Coronel Pringles: 4.5, Coronel Rosales: 48.2, Coronel Suárez: 4.5, Guaminí: 3.0, Patagones: 104.6, Puan: 12.2, Saavedra: 3.0, Tornquist: 6.8, and Villarino: 31.7. The maximum weighted averages of I (Mg ha⁻¹ year⁻¹) corresponded to Coronel Rosales (87.6), Patagones (87.2), Villarino (85.7), Puan (67.9), Guaminí (59.6), Coronel Dorrego (53.1), and Bahía Blanca (39.3); the remaining municipalities ranged between 34.9 and 32.1 Mg ha⁻¹ year⁻¹. The highest C (%) corresponded to Patagones (120), Bahía Blanca (57), Coronel Rosales (55), Villarino (37), Coronel Dorrego (35), Tornquist (21), and Puan (18); for the remaining municipalities it was 14%. The productivity index (PI) is used to establish a numerical value for the productive capacity of lands. The relationship between WER and PI, as weighted averages over all the studied municipalities, was fitted by means of a linear model, WER (Mg ha⁻¹ year⁻¹) = 95.23 − 2.09 × PI (%) (R² = 66%), and a second-order polynomial model, WER (Mg ha⁻¹ year⁻¹) = 139.41 − 5.86 × PI (%) + 0.07 × PI² (%) (R² = 74%). No statistically significant relationship was found between WER and PI for each individual municipality.
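
    The two fitted models quoted in the abstract can be evaluated directly; the following short sketch compares the linear and second-order polynomial estimates of WER over a few productivity index values (coefficients taken verbatim from the abstract, PI values chosen arbitrarily):

```python
def wer_linear(pi):
    """Linear fit from the abstract: WER (Mg ha^-1 yr^-1) as a function of PI (%)."""
    return 95.23 - 2.09 * pi

def wer_quadratic(pi):
    """Second-order polynomial fit from the abstract."""
    return 139.41 - 5.86 * pi + 0.07 * pi ** 2

for pi in (10, 20, 30, 40):
    print(pi, round(wer_linear(pi), 1), round(wer_quadratic(pi), 1))
```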

  14. Developmentally distinct MYB genes encode functionally equivalent proteins in Arabidopsis.

    PubMed

    Lee, M M; Schiefelbein, J

    2001-05-01

    The duplication and divergence of developmental control genes is thought to have driven morphological diversification during the evolution of multicellular organisms. To examine the molecular basis of this process, we analyzed the functional relationship between two paralogous MYB transcription factor genes, WEREWOLF (WER) and GLABROUS1 (GL1), in Arabidopsis. The WER and GL1 genes specify distinct cell types and exhibit non-overlapping expression patterns during Arabidopsis development. Nevertheless, reciprocal complementation experiments with a series of gene fusions showed that WER and GL1 encode functionally equivalent proteins, and their unique roles in plant development are entirely due to differences in their cis-regulatory sequences. Similar experiments with a distantly related MYB gene (MYB2) showed that its product cannot functionally substitute for WER or GL1. Furthermore, an analysis of the WER and GL1 proteins shows that conserved sequences correspond to specific functional domains. These results provide new insights into the evolution of the MYB gene family in Arabidopsis, and, more generally, they demonstrate that novel developmental gene function may arise solely by the modification of cis-regulatory sequences.

  15. Nitrogen rate and application timing affect the yield and risk associated with stockpiling tall fescue for winter grazing

    USDA-ARS?s Scientific Manuscript database

    Stockpiled tall fescue can provide economical winter feed for grazing livestock in the mid-Atlantic of the United States. The objective of this study was to evaluate the effect of N rate and application timing on the yield of stockpiled tall fescue. Four N rates ranging from 0 to 120 lb N/acre wer...

  16. WEREWOLF and ENHANCER of GLABRA3 are interdependent regulators of the spatial expression pattern of GLABRA2 in Arabidopsis.

    PubMed

    Song, Sang-Kee; Kwak, Su-Hwan; Chang, Soo Chul; Schiefelbein, John; Lee, Myeong Min

    2015-11-06

    In multicellular organisms, cell fates are specified through differential regulation of transcription. Epidermal cell fates in the Arabidopsis thaliana root are precisely specified by several transcription factors, with the GLABRA2 (GL2) homeodomain protein acting at the farthest downstream in this process. To better understand the regulation of GL2 expression, we ectopically expressed WEREWOLF (WER) and ENHANCER OF GLABRA3 (EGL3) in various tissues and examined GL2 expression. Here we show that WER expressed ubiquitously in the root induced GL2 expression only in the root epidermis, whereas co-expression of WER and EGL3 induced GL2 expression in the corresponding tissues. We also found that GL3 accumulated in the nucleus at the early meristematic region and EGL3 accumulated later in the nucleus of epidermal cells. We further found that ectopic expression of WER and EGL3 in ground tissues inhibited GL2 expression in the epidermis. Our results suggest that the co-expression of WER and EGL3 is sufficient for driving GL2 and CPC expression. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Boost OCR accuracy using iVector based system combination approach

    NASA Astrophysics Data System (ADS)

    Peng, Xujun; Cao, Huaigu; Natarajan, Prem

    2015-01-01

    Optical character recognition (OCR) is a challenging task because most existing preprocessing approaches are sensitive to writing style, writing material, noises and image resolution. Thus, a single recognition system cannot address all factors of real document images. In this paper, we describe an approach to combine diverse recognition systems by using iVector based features, which is a newly developed method in the field of speaker verification. Prior to system combination, document images are preprocessed and text line images are extracted with different approaches for each system, where iVector is transformed from a high-dimensional supervector of each text line and is used to predict the accuracy of OCR. We merge hypotheses from multiple recognition systems according to the overlap ratio and the predicted OCR score of text line images. We present evaluation results on an Arabic document database where the proposed method is compared against the single best OCR system using word error rate (WER) metric.
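
    A rough sketch of the merging idea described above, keeping the highest-scoring hypothesis among overlapping text lines from different systems, is given below. The overlap measure, threshold, and example scores are illustrative assumptions rather than the paper's exact procedure:

```python
def overlap_ratio(a, b):
    """Vertical overlap ratio of two text-line boxes given as (top, bottom)."""
    top = max(a[0], b[0])
    bottom = min(a[1], b[1])
    inter = max(0, bottom - top)
    return inter / min(a[1] - a[0], b[1] - b[0])

def merge_hypotheses(lines_by_system, min_overlap=0.5):
    """Each input item is (box, text, predicted_score). Greedily keep the
    highest-scoring hypothesis and drop others that overlap it heavily."""
    pool = sorted((h for hyps in lines_by_system for h in hyps),
                  key=lambda h: h[2], reverse=True)
    kept = []
    for box, text, score in pool:
        if all(overlap_ratio(box, k[0]) < min_overlap for k in kept):
            kept.append((box, text, score))
    return sorted(kept, key=lambda h: h[0][0])   # top-to-bottom reading order

# Hypothetical hypotheses from two systems: ((top, bottom), text, predicted score).
sys_a = [((0, 20), "In the beginning", 0.92), ((22, 40), "was the ward", 0.55)]
sys_b = [((1, 21), "ln the beginning", 0.80), ((21, 41), "was the word", 0.83)]
print(merge_hypotheses([sys_a, sys_b]))
```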

  18. The Histone Chaperone NRP1 Interacts with WEREWOLF to Activate GLABRA2 in Arabidopsis Root Hair Development.

    PubMed

    Zhu, Yan; Rong, Liang; Luo, Qiang; Wang, Baihui; Zhou, Nana; Yang, Yue; Zhang, Chi; Feng, Haiyang; Zheng, Lina; Shen, Wen-Hui; Ma, Jinbiao; Dong, Aiwu

    2017-02-01

    NUCLEOSOME ASSEMBLY PROTEIN1 (NAP1) defines an evolutionarily conserved family of histone chaperones, and loss of function of the Arabidopsis thaliana NAP1 family genes NAP1-RELATED PROTEIN1 (NRP1) and NRP2 causes abnormal root hair formation. Yet, the underlying molecular mechanisms remain unclear. Here, we show that NRP1 interacts with the transcription factor WEREWOLF (WER) in vitro and in vivo and is enriched at the GLABRA2 (GL2) promoter in a WER-dependent manner. Crystallographic analysis indicates that NRP1 forms a dimer via its N-terminal α-helix. Mutants of NRP1 that either disrupt the α-helix dimerization or remove the C-terminal acidic tail impair its binding to histones and WER and concomitantly fail to activate GL2 transcription and to rescue the nrp1-1 nrp2-1 mutant phenotype. Our results further demonstrate that WER-dependent enrichment of NRP1 at the GL2 promoter is involved in local histone eviction and nucleosome loss in vivo. Biochemical competition assays imply that the association between NRP1 and histones may counteract the inhibitory effect of histones on the WER-DNA interaction. Collectively, our study provides important insight into the molecular mechanisms by which histone chaperones are recruited to target chromatin via interaction with a gene-specific transcription factor to moderate chromatin structure for proper root hair development. © 2017 American Society of Plant Biologists. All rights reserved.

  19. The Histone Chaperone NRP1 Interacts with WEREWOLF to Activate GLABRA2 in Arabidopsis Root Hair Development

    PubMed Central

    Rong, Liang; Luo, Qiang; Wang, Baihui; Zhou, Nana; Zhang, Chi; Feng, Haiyang

    2017-01-01

    NUCLEOSOME ASSEMBLY PROTEIN1 (NAP1) defines an evolutionarily conserved family of histone chaperones, and loss of function of the Arabidopsis thaliana NAP1 family genes NAP1-RELATED PROTEIN1 (NRP1) and NRP2 causes abnormal root hair formation. Yet, the underlying molecular mechanisms remain unclear. Here, we show that NRP1 interacts with the transcription factor WEREWOLF (WER) in vitro and in vivo and is enriched at the GLABRA2 (GL2) promoter in a WER-dependent manner. Crystallographic analysis indicates that NRP1 forms a dimer via its N-terminal α-helix. Mutants of NRP1 that either disrupt the α-helix dimerization or remove the C-terminal acidic tail impair its binding to histones and WER and concomitantly fail to activate GL2 transcription and to rescue the nrp1-1 nrp2-1 mutant phenotype. Our results further demonstrate that WER-dependent enrichment of NRP1 at the GL2 promoter is involved in local histone eviction and nucleosome loss in vivo. Biochemical competition assays imply that the association between NRP1 and histones may counteract the inhibitory effect of histones on the WER-DNA interaction. Collectively, our study provides important insight into the molecular mechanisms by which histone chaperones are recruited to target chromatin via interaction with a gene-specific transcription factor to moderate chromatin structure for proper root hair development. PMID:28138017

  20. Determinants of embryo development and quality in beef cattle: Effect of pre-ovulatory follicle size, CL volume, and serum concentrations of progesterone

    USDA-ARS?s Scientific Manuscript database

    Previous research indicates cows ovulating a small dominant follicle (<_ 12 mm) had lower pregnancy rates than cows ovulating a large follicle (> 12 mm). We hypothesized cows ovulating a small follicle would have delayed embryo development and decreased embryo quality. Objectives of this study wer...

  1. Increased abundance of aromatase and follicle stimulating hormone receptor mRNA and decreased insulin-like growth factor-2 receptor mRNA in small ovarian follicles of cattle selected for twin births

    USDA-ARS?s Scientific Manuscript database

    Cattle genetically selected for twin ovulations and births (Twinner) exhibit increased ovarian follicular development, increased ovulation rate, and greater blood and follicular fluid IGF 1 concentrations compared with contemporary cattle not selected for twins (Control). Experimental objectives wer...

  2. Cell Pattern in the Arabidopsis Root Epidermis Determined by Lateral Inhibition with Feedback

    PubMed Central

    Lee, Myeong Min; Schiefelbein, John

    2002-01-01

    In the root epidermis of Arabidopsis, hair and nonhair cell types are specified in a distinct position-dependent pattern. Here, we show that transcriptional feedback loops between the WEREWOLF (WER), CAPRICE (CPC), and GLABRA2 (GL2) genes help to establish this pattern. Positional cues bias the expression of the WER MYB gene, leading to the induction of CPC and GL2 in cells located in a particular position (N) and adoption of the nonhair fate. The truncated MYB encoded by CPC mediates a lateral inhibition mechanism to negatively regulate WER, GL2, and its own gene in the alternative position (H) to induce the hair fate. These results provide a molecular genetic framework for understanding the determination of a cell-type pattern in plants. PMID:11910008

  3. Phosphatidic acid interacts with a MYB transcription factor and regulates its nuclear localization and function in Arabidopsis.

    PubMed

    Yao, Hongyan; Wang, Geliang; Guo, Liang; Wang, Xuemin

    2013-12-01

    Phosphatidic acid (PA) has emerged as a class of cellular mediators involved in various cellular and physiological processes, but little is known about its mechanism of action. Here we show that PA interacts with WEREWOLF (WER), an R2R3 MYB transcription factor involved in root hair formation. The PA-interacting region is confined to the end of the R2 subdomain. The ablation of the PA binding motif has no effect on WER binding to DNA, but abolishes its nuclear localization and its function in regulating epidermal cell fate. Inhibition of PA production by phospholipase Dζ also suppresses WER's nuclear localization, root hair formation, and elongation. These results suggest a role for PA in promoting protein nuclear localization.

  4. Cell pattern in the Arabidopsis root epidermis determined by lateral inhibition with feedback.

    PubMed

    Lee, Myeong Min; Schiefelbein, John

    2002-03-01

    In the root epidermis of Arabidopsis, hair and nonhair cell types are specified in a distinct position-dependent pattern. Here, we show that transcriptional feedback loops between the WEREWOLF (WER), CAPRICE (CPC), and GLABRA2 (GL2) genes help to establish this pattern. Positional cues bias the expression of the WER MYB gene, leading to the induction of CPC and GL2 in cells located in a particular position (N) and adoption of the nonhair fate. The truncated MYB encoded by CPC mediates a lateral inhibition mechanism to negatively regulate WER, GL2, and its own gene in the alternative position (H) to induce the hair fate. These results provide a molecular genetic framework for understanding the determination of a cell-type pattern in plants.

  5. WEREWOLF, a MYB-related protein in Arabidopsis, is a position-dependent regulator of epidermal cell patterning.

    PubMed

    Lee, M M; Schiefelbein, J

    1999-11-24

    The formation of the root epidermis of Arabidopsis provides a simple and elegant model for the analysis of cell patterning. A novel gene, WEREWOLF (WER), is described here that is required for position-dependent patterning of the epidermal cell types. The WER gene encodes a MYB-type protein and is preferentially expressed within cells destined to adopt the non-hair fate. Furthermore, WER is shown to regulate the position-dependent expression of the GLABRA2 homeobox gene, to interact with a bHLH protein, and to act in opposition to the CAPRICE MYB. These results suggest a simple model to explain the specification of the two root epidermal cell types, and they provide insight into the molecular mechanisms used to control cell patterning.

  6. Omnidirectional Internal Fixation by Double Approaches for Treating Rüedi-Allgöwer Type III Pilon Fractures.

    PubMed

    Dai, Chong-Hua; Sun, Jun; Chen, Kun-Quan; Zhang, Hui-Bo

    In the present study, we explored the effectiveness and complications of omnidirectional internal fixation using a double approach for treating Rüedi-Allgöwer type III pilon fractures. A retrospective analysis was performed of 19 cases of Rüedi-Allgöwer type III unilateral closed pilon fracture. With preoperative preparation and correct surgical timing, the reduction was performed using anteromedial and posterolateral approaches, and the fracture fragments were fixed by omnidirectional internal fixation. Imaging evaluation was performed using the Burwell-Charnley scoring system. The Johner-Wruhs scoring system was used to assess the functional status of the patients. A comprehensive evaluation of efficacy was performed using a 5-point Likert score. The complications were also recorded and analyzed. All patients were followed up for an average of 16.2 months. The operative incisions of 15 cases healed by primary intent and with delayed healing in 4. All patients had achieved bony union at an average of 16 weeks postoperatively. No deep infection, broken nail or withdrawn nail, exposed plate, or skin flap necrosis occurred. The Burwell-Charnley imaging evaluation showed that 14 patients had anatomic reduction of the articular surface and 5 had acceptable reduction. Using the Johner-Wruhs scoring system, the results were excellent for 8, good for 7, fair for 2, and poor for 2 patients; the combined rate of excellent and good results was 78.9%. The Likert score of efficacy self-reported by the patients was 3 to 4 points for 12 patients, 2 points for 4 patients, and 0 to 1 point for 3 patients. The Likert score of therapeutic efficacy reported by the physicians was 3 to 4 points for 10 patients, 2 points for 5 patients, and 0 to 1 point for 4 patients. Omnidirectional internal fixation using double approaches was an effective method to treat Rüedi-Allgöwer type III pilon fractures with satisfactory reduction and rigid fixation, good joint function recovery, and few complications. Copyright © 2017 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  7. Parametric study of transport aircraft systems cost and weight

    NASA Technical Reports Server (NTRS)

    Beltramo, M. N.; Trapp, D. L.; Kimoto, B. W.; Marsh, D. P.

    1977-01-01

    The results of a NASA study to develop production cost estimating relationships (CERs) and weight estimating relationships (WERs) for commercial and military transport aircraft at the system level are presented. The systems considered correspond to the standard weight groups defined in Military Standard 1374 and are listed. These systems make up a complete aircraft exclusive of engines. The CER for each system (or CERs in several cases) utilizes weight as the key parameter. Weights may be determined from detailed weight statements, if available, or by using the WERs developed, which are based on technical and performance characteristics generally available during preliminary design. The CERs that were developed provide a very useful tool for making preliminary estimates of the production cost of an aircraft. Likewise, the WERs provide a very useful tool for making preliminary estimates of the weight of an aircraft based on conceptual design information.
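
    The abstract does not give the functional form of the study's CERs or WERs, but a minimal sketch of how a WER output can feed a CER is shown below; the power-law form and every coefficient are illustrative assumptions, not values from the NASA study.

      # Hypothetical chain: weight estimating relationship (WER) -> cost estimating
      # relationship (CER). Functional form and coefficients are assumptions.

      def system_weight_lb(wing_area_sqft: float, a: float = 2.5, b: float = 1.1) -> float:
          """WER sketch: estimate a system weight from a preliminary-design characteristic."""
          return a * wing_area_sqft ** b

      def system_cost_usd(weight_lb: float, c: float = 1200.0, d: float = 0.9) -> float:
          """CER sketch: estimate production cost from the estimated system weight."""
          return c * weight_lb ** d

      if __name__ == "__main__":
          w = system_weight_lb(1500.0)  # conceptual-design input (hypothetical)
          print(f"estimated weight: {w:,.0f} lb; estimated cost: ${system_cost_usd(w):,.0f}")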

  8. West Europe Report Tables of Contents JPRS-WER-86-064, 2 July 1986 JPRS-WER-86-124, 31 Dec 1986.

    DTIC Science & Technology

    1987-03-30

    Exercise (Lars Porne ; SVENSKA DAGBLADET, 13 May 86) 59 Defense Research Institute To Study Officer Drain Causes (Richard Aschberg; SVENSKA DAGBLADET...DAGENS NYHETER, 8 Jun 86) 67 Entire Harsfjarden To Be Blocked With Nets To Halt Subs (Lars Porne ; SVENSKA DAGBLADET, 30 May 86) 72 Private Group...Fundamentalists Abroad Disagree on Ideology (HURRIYET, 20 Aug 86) 49 Fundamentalist Propaganda in Greeting Cards, Videos (HURRIYET, 16 Aug 86

  9. Assessment of Global Wind Energy Resource Utilization Potential

    NASA Astrophysics Data System (ADS)

    Ma, M.; He, B.; Guan, Y.; Zhang, H.; Song, S.

    2017-09-01

    Development of wind energy resources (WER) is key to addressing climate change and adjusting the energy structure. A crucial issue is to obtain the distribution and variability of WER and to identify suitable locations for exploiting it. In this paper, a multicriteria evaluation (MCE) model is constructed by integrating resource richness and stability, the utilization value and trend of the resource, and the natural environment, with weights. Global resource richness is assessed through wind power density (WPD) and multi-level wind speed. The utilizable value of the resource is assessed by the frequency of effective wind. Resource stability is assessed by the coefficient of variation of WPD and the frequency of the prevailing wind direction. The regression slope of the long-term WPD time series is used to assess the trend of WER. All of the resource evaluation indicators are derived from the atmospheric reanalysis data ERA-Interim with a spatial resolution of 0.125°. The natural environment factors mainly refer to slope and land-use suitability, which are derived from the multi-resolution terrain elevation data 2010 (GMTED 2010) and GlobalCover2009. A global WER utilization potential map is then produced, which shows that most high-potential regions are located in northern Africa. Additionally, 22.22% and 48.89% of operational wind farms fall on medium-high and high potential regions, respectively, so the result can provide a basis for the macroscopic siting of wind farms.
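
    The MCE model above aggregates weighted indicators into a single suitability score per grid cell; the sketch below illustrates that aggregation only, with normalized indicator values and weights that are assumptions rather than the paper's actual parameters.

      # Minimal sketch of a weighted multicriteria evaluation (MCE) score for one
      # grid cell. Indicator names follow the abstract; the normalization to [0, 1]
      # and the weights themselves are illustrative assumptions.

      indicators = {
          "wind_power_density": 0.82,        # resource richness, normalized
          "effective_wind_frequency": 0.75,  # utilizable value of the resource
          "wpd_stability": 0.60,             # derived from the coefficient of variation of WPD
          "wpd_trend": 0.55,                 # rescaled regression slope of WPD
          "terrain_suitability": 0.90,       # slope and land-use factor
      }
      weights = {
          "wind_power_density": 0.35,
          "effective_wind_frequency": 0.25,
          "wpd_stability": 0.15,
          "wpd_trend": 0.10,
          "terrain_suitability": 0.15,
      }

      score = sum(weights[k] * indicators[k] for k in indicators)
      print(f"utilization potential score: {score:.2f}")  # higher means more suitable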

  10. Searching for plant root traits to improve soil cohesion and resist soil erosion

    NASA Astrophysics Data System (ADS)

    De Baets, Sarah; Smyth, Kevin; Denbigh, Tom; Weldon, Laura; Higgins, Ben; Matyjaszkiewicz, Antoni; Meersmans, Jeroen; Chenchiah, Isaac; Liverpool, Tannie; Quine, Tim; Grierson, Claire

    2017-04-01

    Soil erosion poses a serious threat to future food and environmental security, so soil erosion protection measures are of great importance for soil conservation and food security. Plant roots have proven very effective in stabilizing soil and protecting it against erosion. However, no clear insights have yet been obtained into which root traits are responsible for root-soil cohesion, knowledge that is needed to select the best species for soil protection. Research using Arabidopsis mutants has made great progress towards explaining how root systems are generated by growth, branching, and responses to gravity, producing mutants that affect root traits. In this study, the performance of selected Arabidopsis mutants is analyzed in three root-soil cohesion assays. Measurements of detachment force, uprooting force, and soil detachment are combined here with microscopic analysis of root properties, in this case the presence, length, and density of root hairs. We found that Arabidopsis seedlings with root hairs (wild type, wer myb23, rsl4) were more difficult to detach from gel media than hairless (cpc try) or short-haired (rsl4, rhd2) roots. Hairy roots (wild type, wer myb23) on mature, non-reproductive rosettes were more difficult to uproot from compost or clay soil than hairless roots (cpc try). At high root densities, erosion rates from soils with hairless roots (cpc try) were as much as 10 times those from soils occupied by roots with hairs (wer myb23, wild type). We therefore find that root hairs play a significant role in root-soil cohesion and in minimizing erosion. This framework and its associated suite of experimental assays make it possible to measure the effect of any root phenotype on the effectiveness of plant roots in binding substrates and reducing erosion.

  11. Reduction of extinction and reinstatement of cocaine seeking by wheel running in female rats.

    PubMed

    Zlebnik, Natalie E; Anker, Justin J; Gliddon, Luke A; Carroll, Marilyn E

    2010-03-01

    Previous work has shown that wheel running reduced the maintenance of cocaine self-administration in rats. In the present study, the effect of wheel running on extinction and reinstatement of cocaine seeking was examined. Female rats were trained to run in a wheel during 6-h sessions, and they were then catheterized and placed in an operant conditioning chamber where they did not have access to the wheel but were allowed to self-administer iv cocaine. Subsequently, rats were divided into four groups and were tested on the extinction and reinstatement of cocaine seeking while they had varying access to a wheel in an adjoining compartment. The four groups were assigned to the following wheel access conditions: (1) wheel running during extinction and reinstatement (WER), (2) wheel running during extinction and a locked wheel during reinstatement (WE), (3) locked wheel during extinction and wheel running during reinstatement (WR), and (4) locked wheel during extinction and reinstatement (WL). WE and WR were retested later to examine the effect of one session of wheel access on cocaine-primed reinstatement. There were no group differences in wheel revolutions, in rate of acquisition of cocaine self-administration, or in responding during maintenance when there was no wheel access. However, during extinction, WE and WER responded less than WR and WL. WR and WER had lower cocaine-primed reinstatement than WE and WL. One session of wheel exposure in WE also suppressed cocaine-primed reinstatement. Wheel running immediately and effectively reduced cocaine-seeking behavior, but concurrent access to running was necessary. Thus, exercise is a useful and self-sustaining intervention to reduce cocaine-seeking behavior.

  12. Spoken Language Processing in the Clarissa Procedure Browser

    NASA Technical Reports Server (NTRS)

    Rayner, M.; Hockey, B. A.; Renders, J.-M.; Chatzichrisafis, N.; Farrell, K.

    2005-01-01

    Clarissa, an experimental voice-enabled procedure browser that has recently been deployed on the International Space Station, is as far as we know the first spoken dialog system in space. We describe the objectives of the Clarissa project and the system's architecture. In particular, we focus on three key problems: grammar-based speech recognition using the Regulus toolkit; methods for open-mic speech recognition; and robust side-effect free dialogue management for handling undos, corrections and confirmations. We first describe the grammar-based recogniser we have built using Regulus, and report experiments where we compare it against a class N-gram recogniser trained on the same 3,297-utterance dataset. We obtained a 15% relative improvement in WER and a 37% improvement in semantic error rate. The grammar-based recogniser moreover outperforms the class N-gram version for utterances of all lengths from 1 to 9 words inclusive. The central problem in building an open-mic speech recognition system is being able to distinguish between commands directed at the system and other material (cross-talk), which should be rejected. Most spoken dialogue systems make the accept/reject decision by applying a threshold to the recognition confidence score. We show how a simple and general method, based on standard approaches to document classification using Support Vector Machines, can give substantially better performance, and we report experiments showing a relative reduction in the task-level error rate of about 25% compared to the baseline confidence threshold method. Finally, we describe a general side-effect free dialogue management architecture that we have implemented in Clarissa, which extends the "update semantics" framework by including task as well as dialogue information in the information state. We show that this enables elegant treatments of several dialogue management problems, including corrections, confirmations, querying of the environment, and regression testing.
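
    The accept/reject comparison described above (a confidence threshold versus SVM-based text classification of recognised utterances) can be sketched as follows; the training utterances, threshold value, and feature choice are made-up illustrations, not Clarissa's actual data or configuration.

      # Confidence-threshold baseline vs. SVM command/cross-talk classifier sketch.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.svm import LinearSVC

      def accept_by_confidence(confidence: float, threshold: float = 0.6) -> bool:
          """Baseline: accept the hypothesis when recogniser confidence clears a threshold."""
          return confidence >= threshold

      # Toy training set: commands directed at the system (1) vs. cross-talk (0).
      commands = ["next step", "go to step three", "undo that", "set voice volume up"]
      cross_talk = ["can you hand me that wrench", "what time is the meeting", "I think it looks fine"]
      texts = commands + cross_talk
      labels = [1] * len(commands) + [0] * len(cross_talk)

      vectorizer = CountVectorizer(ngram_range=(1, 2))
      classifier = LinearSVC().fit(vectorizer.fit_transform(texts), labels)

      print(accept_by_confidence(0.72))  # True with the assumed threshold
      print(classifier.predict(vectorizer.transform(["please read step four"])))  # label from the toy model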

  13. Is Sordac’s Rapid Acquisition Process Best Prepared To Field Solutions For Future Technological Challenges

    DTIC Science & Technology

    2016-03-31

    approximately 700-800 capability submissions each year with about 25-30% resulting in further review.16 SORDAC has also partnered with the Doolittle ...Institute to run ‘SofWerX,’ a technology incubator in downtown Tampa. The Doolittle Institute is a Florida non-profit with a charter, “to create an... Doolittle Institute,” accessed 3 March 2016, http://doolittleinstitute.org/. 18 “SofWerX,” accessed 28 February 2016, www.sofwerx.org. (website

  14. The P-38 Lightning Aircraft: Lessons Learned for Future Weapon Systems Development

    DTIC Science & Technology

    2010-04-01

    PMBOK TEL \\ u.s. WER List of ~cronyms iv Brake Horse Power Design-Build Team District of Columbia Department of Defense Department of...record. Despite unresolved issues like the flap and brake system problems and limited test hours, on 11 February 1939, Lieutenant Kelsey flew the XP-38...engine, giving the P-3 8 engines a 1425 brake horse power (BHP)22 rating. However, limitations of the integral wing leading edge intercoolers23 could

  15. Regulation of CAPRICE transcription by MYB proteins for root epidermis differentiation in Arabidopsis.

    PubMed

    Koshino-Kimura, Yoshihiro; Wada, Takuji; Tachibana, Tatsuhiko; Tsugeki, Ryuji; Ishiguro, Sumie; Okada, Kiyotaka

    2005-06-01

    Epidermal cell differentiation in Arabidopsis root is studied as a model system for understanding cell fate specification. Two types of MYB-related transcription factors are involved in this cell differentiation. One of these, CAPRICE (CPC), encoding an R3-type MYB protein, is a positive regulator of hair cell differentiation and is preferentially transcribed in hairless cells. We analyzed the regulatory mechanism of CPC transcription. Deletion analyses of the CPC promoter revealed that hairless cell-specific transcription of the CPC gene required a 69 bp sequence, and a tandem repeat of this region was sufficient for its expression in epidermis. This region includes two MYB-binding sites, and the epidermis-specific transcription of CPC was abolished when base substitutions were introduced in these sites. We showed by gel mobility shift experiments and by yeast one-hybrid assay that WEREWOLF (WER), which is an R2R3-type MYB protein, directly binds to this region. We showed that WER also binds to the GL2 promoter region, indicating that WER directly regulates CPC and GL2 transcription by binding to their promoter regions.

  16. TORNADO1 regulates root epidermal patterning through the WEREWOLF pathway in Arabidopsis thaliana.

    PubMed

    Kwak, Su-Hwan; Song, Sang-Kee; Lee, Myeong Min; Schiefelbein, John

    2015-01-01

    Cell fate in the root epidermis of Arabidopsis thaliana is determined in a position-dependent manner. SCRAMBLED (SCM), an atypical leucine-rich repeat receptor-like kinase, mediates this positional regulation via its effect on WEREWOLF (WER) expression, and subsequently, its downstream transcription factor, GLABRA2 (GL2), which are required for nonhair cell development. Previously, TORNADO1 (TRN1), a plant-specific protein with a leucine-rich repeat ribonuclease inhibitor-like domain, was shown to be required for proper epidermal patterning in Arabidopsis roots. In this work, we analyzed the possible involvement of TRN1 in the known root epidermal gene network. We discovered that the trn1 mutant caused the ectopic expression of WER and the randomized expression of GL2 and EGL3. This suggests that TRN1 regulates the position-dependent cell fate determination by affecting WER expression in Arabidopsis root epidermis. Additionally, the distinct phenotypes of the aerial parts of the trn1-t and scm-2 mutant suggest that TRN1 and SCM might have different functions in the development of aerial parts.

  17. TORNADO1 regulates root epidermal patterning through the WEREWOLF pathway in Arabidopsis thaliana

    PubMed Central

    Kwak, Su-Hwan; Song, Sang-Kee; Lee, Myeong Min; Schiefelbein, John

    2015-01-01

    Cell fate in the root epidermis of Arabidopsis thaliana is determined in a position-dependent manner. SCRAMBLED (SCM), an atypical leucine-rich repeat receptor-like kinase, mediates this positional regulation via its effect on WEREWOLF (WER) expression, and subsequently, its downstream transcription factor, GLABRA2 (GL2), which are required for nonhair cell development. Previously, TORNADO1 (TRN1), a plant-specific protein with a leucine-rich repeat ribonuclease inhibitor-like domain, was shown to be required for proper epidermal patterning in Arabidopsis roots. In this work, we analyzed the possible involvement of TRN1 in the known root epidermal gene network. We discovered that the trn1 mutant caused the ectopic expression of WER and the randomized expression of GL2 and EGL3. This suggests that TRN1 regulates the position-dependent cell fate determination by affecting WER expression in Arabidopsis root epidermis. Additionally, the distinct phenotypes of the aerial parts of the trn1-t and scm-2 mutant suggest that TRN1 and SCM might have different functions in the development of aerial parts. PMID:26451798

  18. Primary Ankle Arthrodesis for Severely Comminuted Tibial Pilon Fractures.

    PubMed

    Al-Ashhab, Mohamed E

    2017-03-01

    Management of severely comminuted, complete articular tibial pilon fractures (Rüedi and Allgöwer type III) remains a challenge, with few treatment options providing good clinical outcomes. Twenty patients with severely comminuted tibial pilon fractures underwent primary ankle arthrodesis with a retrograde calcaneal nail and autogenous fibular bone graft. The fusion rate was 100% and the varus malunion rate was 10%. Fracture union occurred at a mean of 16 weeks (range, 13-18 weeks) postoperatively. Primary ankle arthrodesis is a successful method for treating highly comminuted tibial pilon fractures, having a low complication rate and a high satisfaction score. [Orthopedics. 2017; 40(2):e378-e381.]. Copyright 2016, SLACK Incorporated.

  19. Arabidopsis AIP1-2 restricted by WER-mediated patterning modulates planar polarity

    PubMed Central

    Kiefer, Christian S.; Claes, Andrea R.; Nzayisenga, Jean-Claude; Pietra, Stefano; Stanislas, Thomas; Hüser, Anke; Ikeda, Yoshihisa; Grebe, Markus

    2015-01-01

    The coordination of cell polarity within the plane of the tissue layer (planar polarity) is crucial for the development of diverse multicellular organisms. Small Rac/Rho-family GTPases and the actin cytoskeleton contribute to planar polarity formation at sites of polarity establishment in animals and plants. Yet, upstream pathways coordinating planar polarity differ strikingly between kingdoms. In the root of Arabidopsis thaliana, a concentration gradient of the phytohormone auxin coordinates polar recruitment of Rho-of-plant (ROP) to sites of polar epidermal hair initiation. However, little is known about cytoskeletal components and interactions that contribute to this planar polarity or about their relation to the patterning machinery. Here, we show that ACTIN7 (ACT7) represents a main actin isoform required for planar polarity of root hair positioning, interacting with the negative modulator ACTIN-INTERACTING PROTEIN1-2 (AIP1-2). ACT7, AIP1-2 and their genetic interaction are required for coordinated planar polarity of ROP downstream of ethylene signalling. Strikingly, AIP1-2 displays hair cell file-enriched expression, restricted by WEREWOLF (WER)-dependent patterning and modified by ethylene and auxin action. Hence, our findings reveal AIP1-2, expressed under control of the WER-dependent patterning machinery and the ethylene signalling pathway, as a modulator of actin-mediated planar polarity. PMID:25428588

  20. Study of Resource Recovery and Epidemiology in an Anaerobic Digester

    NASA Technical Reports Server (NTRS)

    Li, K. Y.; Cao, Song; Hunt, M. D.; Fu, Xuping

    1995-01-01

    Three 4-liter packed-bed anaerobic digesters were fabricated and operated at 35 degrees C, pH around 7, and hydraulic retention times (HRT) of 20, 10 and 5 days to study resource recovery and epidemiology in a controlled ecological life support system (CELSS). A simulated wastewater, consisting of shower water, clothes-wash water, dishwasher water, handwash water, and urine flush water, was used as the feeding solution. Under steady-state operation, chemical oxygen demand (COD), total organic carbon (TOC), pH, nitrogen, phosphorus, and potassium were monitored in the digester input and output solutions. The volume and the CH4/CO2 ratios in the biogas produced from the anaerobic digesters were measured. The results indicate that about 90 percent of TOC is converted while only 5-8 percent of N-P-K is consumed in the digester. A multi-drug resistant strain of Salmonella choleraesuis was used as the indicator bacterium in the epidemiology study. The levels of Salmonella choleraesuis in the influent and the effluent were determined and decimal decay rate constants, k(d), were estimated. The k(d) values were greater at higher initial doses than lower doses for the same HRT, and greater for batch digestion (7.89/d) than for continuous digestion (4.28, 3.82, and 3.82/d for 20, 10, and 5 d HRT, respectively).
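
    The decimal decay rate constant reported above can be illustrated with a simple first-order, log10-based estimator; the formula below, k_d = log10(N_in/N_out)/t, and the counts used are assumptions for illustration and may differ from the estimator actually used in the study.

      import math

      def decimal_decay_rate(n_in: float, n_out: float, days: float) -> float:
          """Assumed first-order estimator: k_d = log10(N_in / N_out) / t, in 1/day."""
          return math.log10(n_in / n_out) / days

      # Hypothetical influent/effluent Salmonella counts over a 20-day HRT.
      print(f"k_d = {decimal_decay_rate(1e6, 1e2, 20.0):.2f} /d")  # 0.20 /d for these made-up numbers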

  1. Arabidopsis AIP1-2 restricted by WER-mediated patterning modulates planar polarity.

    PubMed

    Kiefer, Christian S; Claes, Andrea R; Nzayisenga, Jean-Claude; Pietra, Stefano; Stanislas, Thomas; Hüser, Anke; Ikeda, Yoshihisa; Grebe, Markus

    2015-01-01

    The coordination of cell polarity within the plane of the tissue layer (planar polarity) is crucial for the development of diverse multicellular organisms. Small Rac/Rho-family GTPases and the actin cytoskeleton contribute to planar polarity formation at sites of polarity establishment in animals and plants. Yet, upstream pathways coordinating planar polarity differ strikingly between kingdoms. In the root of Arabidopsis thaliana, a concentration gradient of the phytohormone auxin coordinates polar recruitment of Rho-of-plant (ROP) to sites of polar epidermal hair initiation. However, little is known about cytoskeletal components and interactions that contribute to this planar polarity or about their relation to the patterning machinery. Here, we show that ACTIN7 (ACT7) represents a main actin isoform required for planar polarity of root hair positioning, interacting with the negative modulator ACTIN-INTERACTING PROTEIN1-2 (AIP1-2). ACT7, AIP1-2 and their genetic interaction are required for coordinated planar polarity of ROP downstream of ethylene signalling. Strikingly, AIP1-2 displays hair cell file-enriched expression, restricted by WEREWOLF (WER)-dependent patterning and modified by ethylene and auxin action. Hence, our findings reveal AIP1-2, expressed under control of the WER-dependent patterning machinery and the ethylene signalling pathway, as a modulator of actin-mediated planar polarity. © 2015. Published by The Company of Biologists Ltd.

  2. Establishment and evaluation of the Japanese edition of the Weekly Epidemiological Record (WER) website by the Faculty of Health Sciences of Kobe University School of Medicine.

    PubMed

    Sakaguchi, Hiroko; Usami, Makoto; Ando, Hiroshi; Ohata, Atsushi

    2007-11-01

    To report on the establishment of the Japanese-language website of the Weekly Epidemiological Record (WER) by the faculty and to evaluate its accessibility and educational outcome: all articles from the WER since 2000 have been translated into Japanese by graduate students under teachers' guidance, verified by the committee members, and delivered to the website. The server log files and retrieval keywords were analyzed using Analog 6.0. An on-line questionnaire survey of visitors to the website was performed. Opinion sheets reported by the students for translation were evaluated as the educational outcome. Over 6 years, there were 820,571 requests to the website, and the number of requests increased during disease outbreaks. According to domain analysis, most requests were made during daytime on weekdays, and the website was utilized by users in educational institutions and the Japanese government and by overseas visitors. Among respondents to the questionnaire, 47% were laypersons and 69% found the website easy to understand. SARS and HIV/AIDS were the terms most frequently used for retrieval. The students recognized the importance of the World Health Organization (WHO) and broadened their perspective on international health. The website is useful for Japanese readers, and the translation process was effective for international health education.

  3. Phosphatidic Acid Interacts with a MYB Transcription Factor and Regulates Its Nuclear Localization and Function in Arabidopsis

    PubMed Central

    Yao, Hongyan; Wang, Geliang; Guo, Liang; Wang, Xuemin

    2013-01-01

    Phosphatidic acid (PA) has emerged as a class of cellular mediators involved in various cellular and physiological processes, but little is known about its mechanism of action. Here we show that PA interacts with WEREWOLF (WER), a R2R3 MYB transcription factor involved in root hair formation. The PA-interacting region is confined to the end of the R2 subdomain. The ablation of the PA binding motif has no effect on WER binding to DNA, but abolishes its nuclear localization and its function in regulating epidermal cell fate. Inhibition of PA production by phospholipase Dζ also suppresses WER’s nuclear localization, root hair formation, and elongation. These results suggest a role for PA in promoting protein nuclear localization. PMID:24368785

  4. Modality dependency of familiarity ratings of Japanese words.

    PubMed

    Amano, S; Kondo, T; Kakehi, K

    1995-07-01

    Familiarity ratings for a large number of aurally and visually presented Japanese words were measured for 11 subjects in order to investigate the modality dependency of familiarity. The correlation coefficient between auditory and visual ratings was .808, which is lower than that observed for English words, suggesting that a substantial portion of the mental lexicon is modality dependent. It was shown that the modality dependency is greater for low-familiarity words than for medium- or high-familiarity words. This difference between the low- and the medium- or high-familiarity words is related to orthography. That is, the dependency is larger in words consisting only of kanji, which may have multiple pronunciations and usually represent meaning, than in words consisting only of hiragana or katakana, which have a single pronunciation and usually do not represent meaning. These results indicate that the idiosyncratic characteristics of Japanese orthography contribute to the modality dependency.
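
    The modality comparison above rests on a Pearson correlation between auditory and visual familiarity ratings for the same words; the snippet below shows that computation on a handful of hypothetical ratings (the values are illustrative, not data from the study).

      import numpy as np

      # Hypothetical 7-point familiarity ratings for the same words heard vs. read.
      auditory = np.array([6.2, 5.8, 3.1, 4.4, 6.9, 2.5, 5.0])
      visual = np.array([6.0, 5.5, 2.8, 4.9, 6.7, 3.4, 4.6])

      r = np.corrcoef(auditory, visual)[0, 1]  # Pearson correlation coefficient
      print(f"auditory-visual familiarity correlation: r = {r:.3f}")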

  5. Changes in potato phenylpropanoids during tuber development

    USDA-ARS's Scientific Manuscript database

    Phenylpropanoid metabolite and transcript expression during different developmental stages were examined in field grown potatoes. Carbohydrate and shikimic acid metabolism was assessed to determine how tuber primary metabolism influences phenylpropanoid metabolism. Phenylpropanoid concentrations wer...

  6. Highway noise study : final report.

    DOT National Transportation Integrated Search

    1974-05-01

    Three noise level measurement systems have been investigated and made operational. They are: : 1. Graphic Level Recording Method : 2. Periodic Sampling Method : 3. Environmental Noise Classifier Method : The relative accuracy of the three systems wer...

  7. The MYB23 Gene Provides a Positive Feedback Loop for Cell Fate Specification in the Arabidopsis Root Epidermis

    PubMed Central

    Kang, Yeon Hee; Kirik, Victor; Hulskamp, Martin; Nam, Kyoung Hee; Hagely, Katherine; Lee, Myeong Min; Schiefelbein, John

    2009-01-01

    The specification of cell fates during development requires precise regulatory mechanisms to ensure robust cell type patterns. Theoretical models of pattern formation suggest that a combination of negative and positive feedback mechanisms are necessary for efficient specification of distinct fates in a field of differentiating cells. Here, we examine the role of the R2R3-MYB transcription factor gene, AtMYB23 (MYB23), in the establishment of the root epidermal cell type pattern in Arabidopsis thaliana. MYB23 is closely related to, and is positively regulated by, the WEREWOLF (WER) MYB gene during root epidermis development. Furthermore, MYB23 is able to substitute for the function of WER and to induce its own expression when controlled by WER regulatory sequences. We also show that the MYB23 protein binds to its own promoter, suggesting a MYB23 positive feedback loop. The localization of MYB23 transcripts and MYB23-green fluorescent protein (GFP) fusion protein, as well as the effect of a chimeric MYB23-SRDX repressor construct, links MYB23 function to the developing non-hair cell type. Using mutational analyses, we find that MYB23 is necessary for precise establishment of the root epidermal pattern, particularly under conditions that compromise the cell specification process. These results suggest that MYB23 participates in a positive feedback loop to reinforce cell fate decisions and ensure robust establishment of the cell type pattern in the Arabidopsis root epidermis. PMID:19395683

  8. The MYB23 gene provides a positive feedback loop for cell fate specification in the Arabidopsis root epidermis.

    PubMed

    Kang, Yeon Hee; Kirik, Victor; Hulskamp, Martin; Nam, Kyoung Hee; Hagely, Katherine; Lee, Myeong Min; Schiefelbein, John

    2009-04-01

    The specification of cell fates during development requires precise regulatory mechanisms to ensure robust cell type patterns. Theoretical models of pattern formation suggest that a combination of negative and positive feedback mechanisms are necessary for efficient specification of distinct fates in a field of differentiating cells. Here, we examine the role of the R2R3-MYB transcription factor gene, AtMYB23 (MYB23), in the establishment of the root epidermal cell type pattern in Arabidopsis thaliana. MYB23 is closely related to, and is positively regulated by, the WEREWOLF (WER) MYB gene during root epidermis development. Furthermore, MYB23 is able to substitute for the function of WER and to induce its own expression when controlled by WER regulatory sequences. We also show that the MYB23 protein binds to its own promoter, suggesting a MYB23 positive feedback loop. The localization of MYB23 transcripts and MYB23-green fluorescent protein (GFP) fusion protein, as well as the effect of a chimeric MYB23-SRDX repressor construct, links MYB23 function to the developing non-hair cell type. Using mutational analyses, we find that MYB23 is necessary for precise establishment of the root epidermal pattern, particularly under conditions that compromise the cell specification process. These results suggest that MYB23 participates in a positive feedback loop to reinforce cell fate decisions and ensure robust establishment of the cell type pattern in the Arabidopsis root epidermis.

  9. SABRE is required for stabilization of root hair patterning in Arabidopsis thaliana.

    PubMed

    Pietra, Stefano; Lang, Patricia; Grebe, Markus

    2015-03-01

    Patterned differentiation of distinct cell types is essential for the development of multicellular organisms. The root epidermis of Arabidopsis thaliana is composed of alternating files of root hair and non-hair cells and represents a model system for studying the control of cell-fate acquisition. Epidermal cell fate is regulated by a network of genes that translate positional information from the underlying cortical cell layer into a specific pattern of differentiated cells. While much is known about the genes of this network, new players continue to be discovered. Here we show that the SABRE (SAB) gene, known to mediate microtubule organization, anisotropic cell growth and planar polarity, has an effect on root epidermal hair cell patterning. Loss of SAB function results in ectopic root hair formation and destabilizes the expression of cell fate and differentiation markers in the root epidermis, including expression of the WEREWOLF (WER) and GLABRA2 (GL2) genes. Double mutant analyses reveal that wer and caprice (cpc) mutants, defective in core components of the epidermal patterning pathway, genetically interact with sab. This suggests that SAB may act on epidermal patterning upstream of WER and CPC. Hence, we provide evidence for a role of SAB in root epidermal patterning by affecting cell-fate stabilization. Our work opens the door for future studies addressing SAB-dependent functions of the cytoskeleton during root epidermal patterning. © 2014 The Authors. Physiologia Plantarum published by John Wiley & Sons Ltd on behalf of Scandinavian Plant Physiology Society.

  10. A Hybrid Acoustic and Pronunciation Model Adaptation Approach for Non-native Speech Recognition

    NASA Astrophysics Data System (ADS)

    Oh, Yoo Rhee; Kim, Hong Kook

    In this paper, we propose a hybrid model adaptation approach in which pronunciation and acoustic models are adapted by incorporating the pronunciation and acoustic variabilities of non-native speech in order to improve the performance of non-native automatic speech recognition (ASR). Specifically, the proposed hybrid model adaptation can be performed at either the state-tying or triphone-modeling level, depending on the level at which acoustic model adaptation is performed. In both methods, we first analyze the pronunciation variant rules of non-native speakers and then classify each rule as either a pronunciation variant or an acoustic variant. The state-tying level hybrid method then adapts pronunciation models and acoustic models by accommodating the pronunciation variants in the pronunciation dictionary and by clustering the states of triphone acoustic models using the acoustic variants, respectively. On the other hand, the triphone-modeling level hybrid method initially adapts pronunciation models in the same way as the state-tying level hybrid method; however, for the acoustic model adaptation, the triphone acoustic models are re-estimated based on the adapted pronunciation models, and the states of the re-estimated triphone acoustic models are clustered using the acoustic variants. Korean-spoken English speech recognition experiments show that ASR systems employing the state-tying and triphone-modeling level adaptation methods reduce the average word error rates (WERs) for non-native speech by a relative 17.1% and 22.1%, respectively, compared to a baseline ASR system.
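
    The pronunciation-model side of the adaptation described above amounts to adding rule-derived variants to the recognition lexicon, while acoustic-variant rules are reserved for state clustering; the sketch below uses a made-up rule format, phone set, and toy lexicon to illustrate only the dictionary step.

      # Toy pronunciation-dictionary adaptation: apply non-native variant rules to a
      # baseline lexicon. Rule format, phones, and the pron/acoustic split are assumptions.

      baseline_lexicon = {
          "right": ["r ay t"],
          "load": ["l ow d"],
      }

      # (source phone, replacement phone, kind): 'pron' rules extend the dictionary;
      # 'acoustic' rules would instead drive triphone state clustering (not shown).
      variant_rules = [
          ("r", "l", "pron"),
          ("ow", "ao", "acoustic"),
      ]

      adapted_lexicon = {word: list(prons) for word, prons in baseline_lexicon.items()}
      for source, replacement, kind in variant_rules:
          if kind != "pron":
              continue
          for word, prons in baseline_lexicon.items():
              for pron in prons:
                  variant = pron.replace(source, replacement)
                  if variant != pron and variant not in adapted_lexicon[word]:
                      adapted_lexicon[word].append(variant)

      print(adapted_lexicon)  # {'right': ['r ay t', 'l ay t'], 'load': ['l ow d']}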

  11. Development of Temporary Rumble Strip Specifications

    DOT National Transportation Integrated Search

    2016-02-01

    The objective of this study was to develop specifications for portable reusable temporary rumble strips for their applications in different work zone settings in Kansas. A detailed literature review, a survey of practice, and a closed-course test wer...

  12. Automotive Fuel Economy and Emissions Experimental Data

    DOT National Transportation Integrated Search

    1979-02-01

    The purpose of this effort was to generate experimental data to support an assessment of the relationship between automobile fuel economy and emission control systems. Tests were made at both the engine and vehicle levels. Detailed investigations wer...

  13. Hydroplaning on multi lane facilities.

    DOT National Transportation Integrated Search

    2012-11-01

    The primary findings of this research can be highlighted as follows. Models that provide estimates of wet weather speed reduction, as well as analytical and empirical methods for the prediction of hydroplaning speeds of trailers and heavy trucks, wer...

  14. Development of Temporary Rumble Strip Specifications : [Technical Summary]

    DOT National Transportation Integrated Search

    2016-02-01

    The objective of this study was to develop specifications for portable reusable temporary rumble strips for their applications in different work zone settings in Kansas. A detailed literature review, a survey of practice, and a closed-course test wer...

  15. Field evaluation of highway safety hardware maintenance guidelines.

    DOT National Transportation Integrated Search

    1987-01-01

    The objective of this study was to evaluate with field tests, a procedure developed for the Federal Highway Administration for determining frequencies at which highway safety hardware needs to be inspected and repaired. The frequencies arrived at wer...

  16. MODELING OF MACROSCALE AGRICULTURAL ELEMENTS IN PESTICIDE EXPOSURE

    EPA Science Inventory

    Yuma County, Arizona, is the site of year-round agriculture. To understand the role of agricultural pesticide exposures experienced by children, urinary metabolite concentrations were compared with agricultural use of pesticides. The urinary metabolite and household data wer...

  17. BATS IN AMERICAN BRIDGES: RESOURCE PUBLICATION NO. 4

    DOT National Transportation Integrated Search

    1999-01-01

    Bridges and culverts were evaluated as bat roosting habitat in 25 states at elevations from sea level to 10,000 feet. Field surveys were conducted at 2,421 highway structures. Scientific literature was reviewed, and local biologists and engineers wer...

  18. LOW COST SOLIDIFICATION/STABILIZATION TREATMENT FOR SOILS CONTAMINATED WITH DIOXIN, PCP AND CREOSOTE

    EPA Science Inventory

    The USEPA's NRMRL conducted successful treatability tests of innovative solidification/stabilization (S/S) formulations to treat soils contaminated with dioxins, pentachlorophenol (PCP), and creosote from four wood preserving sites. Formulations developed during these studies wer...

  19. Sparganothis fruitworm phenology

    USDA-ARS's Scientific Manuscript database

    Sparganothis sulfureana, also known as the Sparganothis fruitworm, is one of the most significant, ubiquitous pests of cranberry in North America. To better predict its development in the field, basic information on its temperature-mediated growth is needed. Under a range of temperatures, larvae wer...

  20. METHOD DEVELOPMENT FOR THE DETERMINATION OF FORMALDEHYDE IN SAMPLES OF ENVIRONMENTAL ORIGIN

    EPA Science Inventory

    An analytical method was developed for the determination of formaldehyde in samples of environmental origin. After a review of the current literature, five candidate methods involving chemical derivatization were chosen for evaluation. The five derivatization reagents studied wer...

  1. Evaluating the Relationship between Equilibrium Passive Sampler Uptake and Aquatic Organism Bioaccumulation.

    EPA Science Inventory

    This review evaluates passive sampler uptake of hydrophobic organic contaminants (HOCs) in water column and interstitial water exposures as a surrogate for organism bioaccumulation. Fifty-four studies were found where both passive sampler uptake and organism bioaccumulation wer...

  2. Evaluating the Performance of Household Liquefied Petroleum Gas Cookstoves

    EPA Science Inventory

    Liquefied Petroleum Gas (LPG) cookstoves are considered to be an important solution for mitigating household air pollution; however, their performance has rarely been evaluated. To fill the data and knowledge gaps in this important area, laboratory tests (number of tests: 89) wer...

  3. Cost analysis and environmental impact of nonthermal technologies

    USDA-ARS's Scientific Manuscript database

    The cost of high pressure processing (HPP) orange juice and its environmental impact were estimated. In addition, the environmental impact of pulsed electric fields (PEF) and thermal pasteurization were assessed for comparison. The cost analysis was based on commercial processing conditions that wer...

  4. Discrimination among Panax species using spectral fingerprinting

    USDA-ARS's Scientific Manuscript database

    Spectral fingerprints of samples of three Panax species (P. quinquefolius L., P. ginseng, and P. notoginseng) were acquired using UV, NIR, and MS spectrometry. With principal components analysis (PCA), all three methods allowed visual discrimination between all three species. All three methods wer...

  5. NEKTON HABITAT QUALITY AT SHALLOW-WATER SITES IN TWO RHODE ISLAND COASTAL SYSTEMS

    EPA Science Inventory

    We evaluated nekton habitat quality at five shallow-water sites in two Rhode Island systems by comparing nekton densities and biomass, number of species, prey availability and feeding, and abundance of winter flounder Pseudopleuronectes americanus. Nekton density and biomass wer...

  6. ENANTIOMERIC COMPOSITION OF CHIRAL POLYCHLORINATED BIPHENYL ATROPISOMERS IN AQUATIC BED SEDIMENT

    EPA Science Inventory

    Enantiomeric ratios (ERs) for eight polychlorinated biphenyl (PCB) atropisomers were measured in aquatic sediment from selected sites throughout the United States by using chiral gas chromatography/mass spectrometry. Nonracemic ERs for PCBs 91, 95, 132, 136, 149, 174, and 176 wer...

  7. Environmental influences on growth and reproduction of invasive Commelina benghalensis

    USDA-ARS's Scientific Manuscript database

    Commelina benghalensis (Benghal dayflower) is a noxious weed that is invading agricultural systems in the southeastern United States. We investigated the influences of nutrition, light, and photoperiod on growth and reproductive output of C. benghalensis. In the first experimental series, plants wer...

  8. Rapid Semi-Quantitative Surface Mapping of Airborne-Dispersed Chemicals Using Mass Spectrometry

    EPA Science Inventory

    Chemicals can be dispersed accidentally, deliberately, or by weather-related events. Rapid mapping of contaminant distributions is necessary to assess exposure risks and to plan remediation, when needed. Ten pulverized aspirin or NoDoz™ tablets containing caffeine wer...

  9. PASSIVE SMOKING AND HEIGHT GROWTH OF PREADOLESCENT CHILDREN

    EPA Science Inventory

    The attained height and height growth of 9273 children participating in a longitudinal study of the health effects of air pollutants were analyzed to assess the association between passive exposure to cigarette smoke and physical growth between 6 and 11 years of age. Children wer...

  10. Best practices in emergency transportation operations preparedness and response : results of the FHWA workshop series, annotated.

    DOT National Transportation Integrated Search

    2006-12-18

    Between May 2002 and June 2005, the Federal Highway Administration (FHWA) and Booz Allen Hamilton conducted workshops on Transportation Operations Preparedness and Response in 30 regions across the United States. The objectives of these workshops wer...

  11. COST ANALYSIS OF PERMEABLE REACTIVE BARRIERS FOR REMEDIATION OF GROUND WATER

    EPA Science Inventory

    The U. S. Environmental Protection Agency's Office of Research and Development and its contractor have evaluated cost data from 22 sites where permeable reactive barriers (PRBs) have been utilized to remediate contaminated ground water resources. Most of the sites evaluated wer...

  12. Exposure of men to intermittent photic stimulation under simulated IFR conditions.

    DOT National Transportation Integrated Search

    1966-10-01

    Ten men were subjected to intermittent photic stimulation in an airplane cockpit in an environmental chamber by (1) a Grimes red rotating beacon (1.5 FPS), (2) an Air Guard strobe light (1.0 FPS) and (3) propeller flicker (10 FPS). IFR conditions wer...

  13. Genetic recombination in Venturia effusa, causal agent of pecan scab

    USDA-ARS's Scientific Manuscript database

    Venturia effusa causes pecan scab, the most prevalent disease of pecan in the southeastern USA. Mating type idiomorphs were recently characterized and the sexual stage was subsequently produced in vitro. To investigate sexual reproduction and recombination of traits in V. effusa, select isolates wer...

  14. Design and information requirements for travel and tourism needs on scenic byways.

    DOT National Transportation Integrated Search

    1994-01-01

    The purpose of this study was to develop a system design and information evaluation process that could be used to review proposed or designated scenic byways. The process was intended to ensure that the geometric and traffic design of these roads wer...

  15. Susceptibility of Bagrada hilaris (Hemiptera: Pentatomidae) to insecticides in laboratory and greenhouse bioassays

    USDA-ARS's Scientific Manuscript database

    Field-collected populations of Bagrada hilaris (Burmeister) from Coachella Valley, CA, Imperial Valley, CA, Riverside, CA and Yuma Valley, AZ, were evaluated for susceptibility to several active ingredients representing ten classes of insecticide chemistry. Both leaf-spray and leaf-dip bioassays wer...

  16. EFFECTS OF DOWICIDE (TRADE NAME) G-ST ON DEVELOPMENT OF EXPERIMENTAL ESTUARINE MACROBENTHIC COMMUNITIES

    EPA Science Inventory

    Aquaria containing clean sand received a continuous supply of flowing seawater from Santa Rosa Sound, Florida, mixed with known quantities of Dowicide G-ST(79% sodium pentachlorophenate) for thirteen weeks. The measured concentrations of pentachlorophenol (PCP) in the aquaria wer...

  17. Evaluation of Deer Mirrors for Reducing Deer-Vehicle Collisions

    DOT National Transportation Integrated Search

    1982-05-01

    Deer mirrors were placed in 12 random 0.5-mile test sections along 14.8 miles of I-95 between Topsham and Gardiner, Maine, to test the effectiveness of the mirrors in reducing deer-vehicle collisions. In nearly 4 years, 11 deer-vehicle collisions wer...

  18. Chemical modeling of boron adsorption by humic materials using the constant capacitance model

    USDA-ARS's Scientific Manuscript database

    The constant capacitance surface complexation model was used to describe B adsorption behavior on reference Aldrich humic acid, humic acids from various soil environments, and dissolved organic matter extracted from sewage effluents. The reactive surface functional groups on the humic materials wer...

  19. Development and assessment of transparent soil and particle image velocimetry in dynamic soil-structure interaction

    DOT National Transportation Integrated Search

    2007-02-01

    This research combines Particle Image Velocimetry (PIV) and transparent soil to investigate the dynamic rigid block and soil interaction. In order to get a low viscosity pore fluid for the transparent soil, 12 different types of chemical solvents wer...

  20. ENVIRONMENTAL MONITORING AND MODELING ASSOCIATED WITH NATIONAL EMERGENCIES - EXPERIENCES GAINED FROM THE WORLD TRADE CENTER DISASTER

    EPA Science Inventory

    A workshop was held in Research Triangle Park, NC on November 18-19, 2002 to discuss scientific issues associated with measuring, modeling, and assessing exposure and risk to air containing contaminants generated as a result of national emergencies and disasters. Participants wer...

  1. Spring wheat gliadins: Have they changed in 100 years?

    USDA-ARS's Scientific Manuscript database

    There have been many hard red spring (HRS) wheat cultivars released in North Dakota during the last 100 years. These cultivars have been improved for various characteristics such as adaptation to weather conditions, high yield, and good milling and baking quality. The objectives of this study wer...

  2. β-CARYOPHYLLINIC ACID: AN ATMOSPHERIC TRACER FOR β-CARYOPHYLLENE SECONDARY ORGANIC AEROSOL

    EPA Science Inventory

    The chemical compositions of ambient PM2.5 samples, collected in Research Triangle Park, North Carolina, USA, and a sample of secondary organic aerosol, formed by irradiating a mixture of the sesquiterpene, β-caryophyllene, and oxides of nitrogen in a smog chamber, wer...

  3. INFORMATION MANAGEMENT AND RELATED QUALITY ASSURANCE FOR A LARGE SCALE, MULTI-SITE RESEARCH PROJECT

    EPA Science Inventory

    During the summer of 2000, as part of a U.S. Environmental Protection Agency study designed to improve microbial water quality monitoring protocols at public beaches, over 11,000 water samples were collected at five selected beaches across the country. At each beach, samples wer...

  4. CARPET AS A SINK FOR CHLORPYRIFOS FOLLOWING THE USE OF TOTAL RELEASE AEROSOLS IN THE EPA TEST HOUSE

    EPA Science Inventory

    Pesticides may be found in homes from indoor applications to control pests or by their translocation from outdoor sources. Contaminants may persist adsorbed to surfaces and/or particles in "sinks" where over time they may dissociate as airborne vapors. Experiments wer...

  5. Active video games for youth: A systematic review

    USDA-ARS's Scientific Manuscript database

    Whether a population level increase in physical activity (PA) is critical to reduce obesity in youth. Video games are highly popular and active video games (AVGs) have the potential to play a role in promoting youth PA. Studies on AVG play energy expenditure (EE) and maintenance of play in youth wer...

  6. Relationships Between Watershed Emergy Flow and Coastal New England Salt Marsh Structure, Function, and Condition

    EPA Science Inventory

    This study evaluated the link between watershed activities and salt marsh structure, function, and condition using spatial emergy flow density (areal empower density) in the watershed and field data from 10 tidal salt marshes in Narragansett Bay, RI. The field-collected data wer...

  7. Spectral analysis of large-eddy advection in ET from eddy covariance towers and a large weighing lysimeter

    USDA-ARS's Scientific Manuscript database

    Evapotranspiration was continuously measured by an array of eddy covariance systems and a large weighing lysimeter in a cotton field in Bushland, Texas. The advective divergence from both horizontal and vertical directions was measured through profile measurements above the canopy. All storage terms wer...

  8. TRANSPORT AND TRANSFORMATION OF HEXAVALENT CHROMIUM THROUGH SOILS AND INTO GROUND WATER

    EPA Science Inventory

    A detailed characterization of the underlying and adjacent soils of a chrome-plating shop was performed to provide information on the extent of soil and aquifer contamination at the site and on the potential for off-site migration and environmental impact. Intact, moist cores wer...

  9. Using new techniques and applying them to sunflower's problems - what should we do next?

    USDA-ARS's Scientific Manuscript database

    This presentation was part of a larger panel discussion with a largely non-scientific audience that included commodity marketers and farmers of sunflower. About 200 people were in attendance. Marker-assisted technologies were explained in lay terms to the audience. Their weaknesses and strengths wer...

  10. Biosystematics and evolutionary relationships of perennial Triticeae species revealed by genomic analyses

    USDA-ARS's Scientific Manuscript database

    Literature published after 1984 was reviewed to address: (1) genome relationships among monogenomic diploid species, (2) progenitors of the unknown Y genome in Elymus polyploids, X in Thinopyrum intermedium, and Xm in Leymus, and (3) genome constitutions of some perennial Triticeae species that wer...

  11. Soil application of various biochars produced from both dry and wet pyrolysis

    USDA-ARS's Scientific Manuscript database

    The objectives of this study were to 1) compare physico-chemical and thermal characteristics of swine manure-based hydrochar and pyrochar, and 2) investigate greenhouse gas emission and groundwater pollution potentials of the swine hydrochar when used as a soil amendment. Dewatered swine solids wer...

  12. In vitro study on effect of germinated wheat on human breast cancer cells

    USDA-ARS's Scientific Manuscript database

    This research investigated the possible anti-cancer effects of germinated wheat flours (GWF) on cell growth and apoptosis of human breast cancer cells. In a series of in vitro experiments, estrogen receptor-positive (MCF-7) and negative (MDA-MB-231) cells were cultured and treated with GWF that wer...

  13. Conditioned food aversion for control of poisoning by Ipomoea carnea subsp. fistulosa

    USDA-ARS's Scientific Manuscript database

    Conditioned food aversion is a technique that can be used to train livestock to avoid ingestion of poisonous plants. This study tested the efficacy and durability of conditioned food aversion to eliminate goats' consumption of Ipomoea carnea subsp. fistulosa. We used 14 young Moxotó goats, which wer...

  14. IDENTIFICATION OF POLLUTANTS IN A MUNICIPAL WELL USING MASS PEAK PROFILING OF THE MOLECULAR ION AND FRAGMENT IONS

    EPA Science Inventory


    An elevated incidence of childhood cancer was observed near a contaminated site. Trace amounts of several isomeric compounds were detected by gas chromatography/mass spectrometry (GC/MS) in a concentrated extract of municipal well water. No matching library mass spectra wer...

  15. Determination of toxicity in rabbits and corresponding detection of monofluoroacetate in four Palicourea (Rubiaceae) species from the Amazonas state, Brazil

    USDA-ARS's Scientific Manuscript database

    Numerous monofluoroacetate (MFA)-containing plants in Brazil cause sudden death syndrome precipitated by exercise in livestock, which is characterized by loss of balance, ataxia, labored breathing, muscle tremors, and recumbence leading to death. Four species of Palicourea collected at six farms wer...

  16. TEMPORAL TRENDS IN ETHOXYRESORUFIN-O-DEETHYLASE ACTIVITY OF BROOK TROUT (SALVELINUS FONTINALIS) FED 2,3,7,8-TETRACHLORODIBENZO-P-DIOXIN

    EPA Science Inventory

    Changes in ethoxyresorufin-O-deethylase (EROD) activity were monitored through an extended 6-month dietary exposure to determine the relationship between EROD activity and uptake of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) in brook trout, Salvelinus fontinalis. Brook trout wer...

  17. Quantifying the effects of wheat residue on severity of Stagonospora nodorum blotch and yield in winter wheat

    USDA-ARS's Scientific Manuscript database

    Stagonospora nodorum blotch (SNB), caused by the ascomycete fungus Stagonospora nodorum, is a major disease of wheat. Wheat residue can be an important source of inoculum, but the effect of different densities of infected debris on disease severity has not been previously determined. Experiments wer...

  18. Effect of levels of wheat residue on the severity of stagonospora nodorum blotch in winter wheat

    USDA-ARS's Scientific Manuscript database

    Stagonospora nodorum blotch (SNB), caused by the ascomycete fungus Stagonospora nodorum, is a major disease of wheat. Wheat residue can be an important source of inoculum, but the effect of different densities of infected debris on disease severity has not been previously determined. Experiments wer...

  19. Seep and stream nitrogen dynamics in two adjacent mixed land use watersheds

    USDA-ARS's Scientific Manuscript database

    In many headwater catchments, stream flow originates from surface seeps and springs. The objective of this study was to determine the influence of seeps on nitrogen (N) dynamics within the stream and at the outlet of two adjacent mixed land use watersheds. Nitrogen concentrations in stream water wer...

  20. Kimberly sugar beet germplasm evaluated for rhizomania and storage rot resistance in Idaho, 2015

    USDA-ARS's Scientific Manuscript database

    Rhizomania caused by Beet necrotic yellow vein virus (BNYVV) and storage losses are serious sugar beet production problems. To identify sugar beet germplasm lines with resistance to BNYVV and storage rots, 11 germplasm lines from the USDA-ARS Kimberly sugar beet program were screened. The lines wer...

  1. Systemic Insecticides Reduce Feeding, Survival and Fecundity of Adult Black Vine Weevils (Coleoptera: Curculionidae) on a Variety of Ornamental Nursery Crops

    USDA-ARS?s Scientific Manuscript database

    A series of bioassays were conducted to test the systemic activity of clothianidin, chlorantraniliprole, dinotefuran, and thiamethoxam against adult black vine weevils (Otiorhynchus sulcatus F.) on Taxus, Heuchera, Astilbe, Sedum, Euonymus, and Rhododendron grown in containers. The insecticides wer...

  2. Analysis of the aflatoxin AFB1 from corn by direct analysis in real time - mass spectrometry (DART-MS)

    USDA-ARS?s Scientific Manuscript database

    Direct analysis in real time (DART) ionization coupled to a high resolution mass spectrometer (MS) was used for screening of aflatoxins from a variety of surfaces and the rapid quantitative analysis of aflatoxins extracted from corn. Sample preparation procedure and instrument parameter settings wer...

  3. 49 CFR Appendix B to Part 220 - Recommended Pronunciation of Numerals

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...”should be used preceding such numbers. Numbers should be pronounced as follows: Number Spoken 0 ZERO. 1 WUN. 2 TOO. 3 THUH-REE. 4 FO-WER. 5 FI-YIV. 6 SIX. 7 SEVEN. 8 ATE. 9 NINER. (The figure ZERO should... ATE NINER NINER. 20.3 TOO ZERO DECIMAL THUH-REE. ...

  4. 49 CFR Appendix B to Part 220 - Recommended Pronunciation of Numerals

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...”should be used preceding such numbers. Numbers should be pronounced as follows: Number Spoken 0 ZERO. 1 WUN. 2 TOO. 3 THUH-REE. 4 FO-WER. 5 FI-YIV. 6 SIX. 7 SEVEN. 8 ATE. 9 NINER. (The figure ZERO should... ATE NINER NINER. 20.3 TOO ZERO DECIMAL THUH-REE. ...

  5. 49 CFR Appendix B to Part 220 - Recommended Pronunciation of Numerals

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...”should be used preceding such numbers. Numbers should be pronounced as follows: Number Spoken 0 ZERO. 1 WUN. 2 TOO. 3 THUH-REE. 4 FO-WER. 5 FI-YIV. 6 SIX. 7 SEVEN. 8 ATE. 9 NINER. (The figure ZERO should... ATE NINER NINER. 20.3 TOO ZERO DECIMAL THUH-REE. ...

  6. Hot moments and hot spots of nutrient losses from a mixed land use watershed

    USDA-ARS?s Scientific Manuscript database

    Non-point nitrogen (N) and phosphorus (P) pollution from agriculture has received increasing public attention. However, when, where and how N and P export occurs from a watershed is not completely understood. In this study, nitrate-N, dissolved P and particulate P concentrations and loads wer...

  7. Testing theories of dietary behavior change in youth using the mediating variable model with intervention programs

    USDA-ARS?s Scientific Manuscript database

    Our purpose was to review and critique current experimentally based evidence of theoretical mechanisms of dietary behavior change in youth, and provide recommendations on ways to enhance theory evaluation. Interventions that examined mediators of dietary behavior change in youth (age 5-18 years) wer...

  8. Isolation and identification of bacteria causing blackleg and soft rot of potato

    USDA-ARS?s Scientific Manuscript database

    Both Dickeya and Pectobacterium spp. are important causal agents of blackleg and soft rot of potato. To understand the outbreak of blackleg in the Northeastern U.S. in 2015, samples were collected from symptomatic plants, dormant tubers, and surface water in 2016 and 2017. Diseased plant samples wer...

  9. Life cycle of Cystoisospora felis (Coccidia: Apicomplexa) in cats and mice

    USDA-ARS?s Scientific Manuscript database

    Cystoisospora felis is a ubiquitous apicomplexan protozoon of cats. The endogenous development of C. felis was studied in cats after feeding them infected mice. For this, 5 newborn cats were killed at 24, 48, 72, 96, and 120 h after having been fed mesenteric lymph nodes and spleens of mice that wer...

  10. PERSISTENT ABNORMALITIES IN THE RAT MAMMARY GLAND FOLLOWING GESTATIONAL AND LACTATIONAL EXPOSURE TO 2,3,7,8-TETRACHLORODIBENZO-P-DIOXIN (TCDD)

    EPA Science Inventory

    SUMMARY

    2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) exposure during gestation has revealed reproductive anomalies in rat offspring, including inconclusive reports of stunted mammary development in females (Brown et al., 1998, Lewis et al., 2001). The current studies wer...

  11. Enzyme resistant carbohydrate based micro-scale materials from sugar beet (Beta vulgaris L.) pulp for food and pharmaceutical applications

    USDA-ARS?s Scientific Manuscript database

    Bio-based micro scale materials are increasingly used in functional food and pharmaceutical applications. The present study produced carbohydrate-based micro scale tubular materials from sugar beet (Beta vulgaris L.) pulp (SBP), a by-product of sugar beet processing. The isolated carbohydrates wer...

  12. Optimization of tomato pomace separation using air aspirator system by response surface methodology

    USDA-ARS?s Scientific Manuscript database

    Tomato pomace contains seeds and peels, which are rich in protein and fat, and in dietary fiber and lycopene, respectively. It is important to develop a suitable method to separate the seeds and peel in tomato pomace to achieve value-added utilization of tomato pomace. The objectives of this research wer...

  13. Steroids are required for epidermal cell fate establishment in Arabidopsis roots.

    PubMed

    Kuppusamy, Kavitha T; Chen, Andrew Y; Nemhauser, Jennifer L

    2009-05-12

    The simple structure of Arabidopsis roots provides an excellent model system to study epidermal cell fate specification. Epidermal cells in contact with 2 underlying cortical cells differentiate into hair cells (H cells; trichoblasts), whereas cells that contact only a single cortical cell differentiate into mature hairless cells (N cells; atrichoblasts). This position-dependent patterning, in combination with the constrained orientation of cell divisions, results in hair and nonhair cell files running longitudinally along the root epidermis. Here, we present strong evidence that steroid hormones called brassinosteroids (BRs) are required to maintain position-dependent fate specification in roots. We show that BRs are required for normal expression levels and patterns of WEREWOLF (WER) and GLABRA2 (GL2), master regulators of epidermal patterning. Loss of BR signaling results in loss of hair cells in H positions, likely as a consequence of reduced expression of CAPRICE (CPC), a direct downstream target of WER. Our observations demonstrate that in addition to their well-known role in cell expansion, BRs play an essential role in directing cell fate.

  14. Steroids are required for epidermal cell fate establishment in Arabidopsis roots

    PubMed Central

    Kuppusamy, Kavitha T.; Chen, Andrew Y.; Nemhauser, Jennifer L.

    2009-01-01

    The simple structure of Arabidopsis roots provides an excellent model system to study epidermal cell fate specification. Epidermal cells in contact with 2 underlying cortical cells differentiate into hair cells (H cells; trichoblasts), whereas cells that contact only a single cortical cell differentiate into mature hairless cells (N cells; atrichoblasts). This position-dependent patterning, in combination with the constrained orientation of cell divisions, results in hair and nonhair cell files running longitudinally along the root epidermis. Here, we present strong evidence that steroid hormones called brassinosteroids (BRs) are required to maintain position-dependent fate specification in roots. We show that BRs are required for normal expression levels and patterns of WEREWOLF (WER) and GLABRA2 (GL2), master regulators of epidermal patterning. Loss of BR signaling results in loss of hair cells in H positions, likely as a consequence of reduced expression of CAPRICE (CPC), a direct downstream target of WER. Our observations demonstrate that in addition to their well-known role in cell expansion, BRs play an essential role in directing cell fate. PMID:19416891

  15. PCT Databank: A Tool for Planning, Implementation and Monitoring of Integrated Preventive Chemotherapy for Control of Neglected Tropical Diseases (NTD)

    PubMed Central

    Mikhailov, Alexei; Yajima, Aya; Mbabazi, PS; Gabrielli, Albis F.; Montresor, Antonio; Engels, Dirk

    2017-01-01

    The integration of vertical control programmes for neglected tropical diseases (NTDs) aims at containing operational costs, simplifying the application of control measures and extending intervention coverage. The Preventive Chemotherapy and Transmission Control (PCT) Databank was established by the World Health Organization to facilitate the sharing of data among the different partners involved in control activities; it collects and compiles historical and current information on the disease-specific epidemiological situation, the geographical overlapping of NTDs and the progress of control activities in all NTD-endemic countries. The summary of country-specific epidemiological maps and the progress of control activities is available online as the online PCT Databank and Country Profiles. The progress of preventive chemotherapy (PC) interventions targeting specific NTDs is also reported annually in the Weekly Epidemiological Record (WER). In this paper, we describe the methodology of data collection, compilation and mapping used to establish the PCT Databank and present the key features of the three associated online outputs, i.e. the online PCT Databank, the Country Profiles and the WER. PMID:22357399

  16. Validation of a biotic ligand model on site-specific copper toxicity to Daphnia magna in the Yeongsan River, Korea.

    PubMed

    Park, Jinhee; Ra, Jin-Sung; Rho, Hojung; Cho, Jaeweon; Kim, Sang Don

    2018-03-01

    The objective of this study was to determine whether the water effect ratio (WER) or the biotic ligand model (BLM) could be applied to efficiently develop water quality criteria (WQC) in Korea. Samples were collected from 12 sites along the Yeongsan River (YSR), Korea, including two sewage treatment plants (STPs) and one estuary lake. A copper toxicity test using Daphnia magna was performed to determine the WER and to compare it with the BLM prediction. The WER results from the YSR samples indicated significantly different copper toxicities across all sites. The model-based predictions showed that effluent and estuary waters had markedly different properties with respect to water characteristics and copper toxicity. Small changes in water characteristics such as pH, DOC, hardness and conductivity appear to influence copper toxicity, and these effects interacted with the overall water composition. In 38% of cases the prediction fell outside the validation range by more than a factor of two, indicating poor predictive ability, especially for STPs and streams adjacent to the estuary, whereas the measured toxicity was more stable. Samples in the pH range 7.3-7.7 generated stable predictions, while samples with lower or higher pH values led to more unstable predictions. The toxicity of Cu to D. magna in the sample waters was closely proportional to acidity, including the carboxylic and phenolic groups, as well as to the DOC concentration. Consequently, acceptable prediction of metal toxicity in varied water samples requires site-specific results that account for water characteristics such as pH and DOC, particularly in STPs and estuary regions. Copyright © 2017 Elsevier Inc. All rights reserved.
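
    As a rough illustration of how a water effect ratio is typically applied (not taken from this study), the sketch below assumes hypothetical Daphnia magna copper EC50 values in site water and in standard laboratory water; the WER is their ratio and scales a national criterion into a site-specific one. The function names and numbers are illustrative assumptions.

      # Hedged sketch: how a water effect ratio (WER) is commonly used to derive a
      # site-specific criterion from a national water quality criterion. The function
      # names and the example numbers below are illustrative assumptions, not values
      # from the study.

      def water_effect_ratio(site_water_ec50_ug_l: float, lab_water_ec50_ug_l: float) -> float:
          """WER = toxicity endpoint in site water / same endpoint in standard lab water."""
          return site_water_ec50_ug_l / lab_water_ec50_ug_l

      def site_specific_criterion(national_criterion_ug_l: float, wer: float) -> float:
          """Site-specific criterion is the national criterion scaled by the WER."""
          return national_criterion_ug_l * wer

      if __name__ == "__main__":
          # Hypothetical Daphnia magna copper EC50s (ug/L) in site water vs. lab water.
          wer = water_effect_ratio(site_water_ec50_ug_l=25.0, lab_water_ec50_ug_l=12.0)
          print(f"WER = {wer:.2f}")
          print(f"Site-specific Cu criterion = {site_specific_criterion(9.0, wer):.1f} ug/L")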

  17. Effects and modeling of phonetic and acoustic confusions in accented speech.

    PubMed

    Fung, Pascale; Liu, Yi

    2005-11-01

    Accented speech recognition is more challenging than standard speech recognition due to the effects of phonetic and acoustic confusions. Phonetic confusion in accented speech occurs when an expected phone is pronounced as a different one, which leads to erroneous recognition. Acoustic confusion occurs when the pronounced phone is found to lie acoustically between two baseform models and can be equally recognized as either one. We propose that it is necessary to analyze and model these confusions separately in order to improve accented speech recognition without degrading standard speech recognition. Since low phonetic confusion units in accented speech do not give rise to automatic speech recognition errors, we focus on analyzing and reducing phonetic and acoustic confusability under high phonetic confusion conditions. We propose using a likelihood ratio test to measure phonetic confusion, and an asymmetric acoustic distance to measure acoustic confusion. Only accent-specific phonetic units with low acoustic confusion are used in an augmented pronunciation dictionary, while phonetic units with high acoustic confusion are reconstructed using decision tree merging. Experimental results show that our approach is effective and superior to methods modeling phonetic confusion or acoustic confusion alone in accented speech, with a significant 5.7% absolute WER reduction, without degrading standard speech recognition.
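
    The augmented-dictionary step described above lends itself to a short sketch. The code below is a hypothetical illustration, not the authors' implementation: it keeps an accent-specific pronunciation variant only when its acoustic-confusion score falls below a threshold. The scores and the threshold are placeholders standing in for the paper's asymmetric acoustic distance.

      # Hedged sketch of the dictionary-augmentation idea: keep an accent-specific
      # pronunciation variant only if its acoustic confusion with the baseform is low.
      # The confusion scores and the threshold are placeholder assumptions.

      from typing import Dict, List, Tuple

      def augment_dictionary(
          base_dict: Dict[str, List[str]],
          accent_variants: List[Tuple[str, str, float]],  # (word, pronunciation, confusion score)
          confusion_threshold: float = 0.3,
      ) -> Dict[str, List[str]]:
          augmented = {w: list(prons) for w, prons in base_dict.items()}
          for word, pron, confusion in accent_variants:
              # Low acoustic confusion -> safe to add as an alternative pronunciation.
              if confusion < confusion_threshold and pron not in augmented.get(word, []):
                  augmented.setdefault(word, []).append(pron)
          return augmented

      base = {"data": ["D EY T AH"]}
      variants = [("data", "D AE T AH", 0.1), ("data", "D AH T AH", 0.7)]
      print(augment_dictionary(base, variants))  # only the low-confusion variant is added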

  18. Automated thinning increases uniformity of in-row spacing and plant size in romaine lettuce

    USDA-ARS?s Scientific Manuscript database

    Low availability and high cost of farm hand labor make automated thinners a faster and cheaper alternative to hand thinning in lettuce (Lactuca sativa L.). However, the effects of this new technology on uniformity of plant spacing and size as well as crop yield are not proven. Three experiments wer...

  19. SORPTION OF 2,3,7,8-TETRACHLORODIBENZO-P-DIOXIN FROM WATER BY SURFACE SOILS

    EPA Science Inventory

    The sorption of 14C-labeled 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) from water by two uncontaminated surface soils from the Times Beach, MO, area was evaluated by using batch shake testing. Sorption isotherm plots for the soil with the lower fraction of organic carbon (foc) wer...

  20. Use of SSR markers for DNA fingerprinting and diversity analysis of Pakistani sugarcane (Saccharum spp. hybrids) cultivars

    USDA-ARS?s Scientific Manuscript database

    In recent years SSR markers have been used widely for genetic analysis. The objective of this study was to use an SSR-based marker system to develop the molecular fingerprints and analyze the genetic relationship of sugarcane cultivars grown in Pakistan. Twenty-one highly polymorphic SSR markers wer...

  1. High-throughput Screening of ToxCast™ Phase I Chemicals in a Mouse Embryonic Stem Cell (mESC) Assay Reveals Disruption of Potential Toxicity Pathways

    EPA Science Inventory

    Little information is available regarding the potential for many commercial chemicals to induce developmental toxicity. The mESC Adherent Cell Differentiation and Cytotoxicity (ACDC) assay is a high-throughput screen used to close this data gap. Thus, ToxCast™ Phase I chemicals wer...

  2. Lessons learned from research and surveillance directed at highly pathogenic influenza A viruses in wild birds inhabiting North America

    USDA-ARS?s Scientific Manuscript database

    Following detections of highly pathogenic (HP) influenza A viruses (IAVs) in wild birds inhabiting East Asia after the turn of the millennium, the intensity of sampling of wild birds for IAVs increased throughout much of North America and the objectives for many research and surveillance efforts wer...

  3. EAST VERSUS WEST IN THE US: CHEMICAL CHARACTERISTICS OF PM 2.5 DURING THE WINTER OF 1999

    EPA Science Inventory

    The chemical composition of PM2.5 was investigated at four sites (Rubidoux, CA, Phoenix, AZ, Philadelphia, PA, and RTP, NC) in January and February of 1999. Three samplers were used to determine both the overall mass and the chemical composition of the aerosol. Teflon filters wer...

  4. NRC committee on assessment of technologies for improving fuel economy of light-duty vehicles: Meeting with DOT Volpe Center staff - February 27, 2013

    DOT National Transportation Integrated Search

    2013-02-27

    On February 27, 2013 National Research Council's Committee on Fuel Economy of Light-Duty Vehicles, Phase 2 held a meeting at the John A. Volpe National Transportation Systems Center on the Volpe Model and Other CAFE Issues. The meeting objectives wer...

  5. Behavioral responses of two dengue virus vectors, Aedes aegypti and Aedes albopictus (Diptera: Culicidae), to DUET TM and its components

    USDA-ARS?s Scientific Manuscript database

    Ultralow volume (ULV) droplets of DUET TM, prallethrin and sumithrin at a sublethal dose were applied to unfed (non bloodfed) and bloodfed female Aedes aegypti Linn. and Aedes albopictus (Skuse) in a wind tunnel. Control spray droplets only contained inactive ingredients. Individual mosquitoes wer...

  6. Effects of ruminal dosing of Holstein cows with Megasphaera elsdenii on milk fat production, ruminal chemistry, and bacterial strain persistence

    USDA-ARS?s Scientific Manuscript database

    Megasphaera elsdenii (Me) is a lactate-utilizing bacterium whose ruminal abundance has been shown to be greatly elevated during milk fat depression (MFD). To further examine this association, a total of 25 cannulated multiparous Holstein cows were examined in three studies in which strains of Me wer...

  7. Infection and transmission of live recombinant Newcastle disease virus vaccines in Rock Pigeons, European House Sparrows, and Japanese Quail

    USDA-ARS?s Scientific Manuscript database

    In China and Mexico, engineered recombinant Newcastle disease virus (rNDV) strains are used as live vaccines for the control of Newcastle disease and as vectors to express the avian influenza virus hemagglutinin (HA) gene to control avian influenza in poultry. In this study, non-target species wer...

  8. "The Rolling Store" An economical and environmental approach to the prevention of weight gain in African American women.

    USDA-ARS?s Scientific Manuscript database

    The objective was to test the feasibility of the "Rolling Store," an innovative food delivery medium to provide healthy food choices (fruits and vegetables) to prevent weight gain in African American women. A randomized trial design was used in the study. Eligible participants from the community wer...

  9. Maternal pre-gravid body mass index and adiposity influence umbilical cord gene expression at term in AGA infants

    USDA-ARS?s Scientific Manuscript database

    While maternal obesity is associated with unfavorable maternal and fetal outcomes, the influence of maternal obesity on fetal gene expression is less clear. Umbilical cords (UC) from 12 lean (pre-gravid BMI < 25) and 10 overweight/obese (OB, pre-gravid BMI ≥ 25) women without gestational diabetes wer...

  10. RELATIONSHIPS BETWEEN PEROXYACETYL NITRATE (PAN), O3, AND NOY AT THE RURAL SOUTHERN OXIDANTS STUDY SITE IN CENTRAL PIEDMONT, NORTH CAROLINA SITE SONIA

    EPA Science Inventory

    Ambient peroxyacetyl nitrate (PAN), ozone (O3), total oxides of nitrogen (NOy) and other pollutant measurements were made at a rural site near Candor, North Carolina during the June-July, 1992 period as part of the EPA sponsored Southeast Oxidants Study (SOS). PAN measurements wer...

  11. Screening of abscisic acid insensitive (ABI) and low phosphorus efficiency (LPE) mutants from some sequenced lines in the sorghum TILLING population

    USDA-ARS?s Scientific Manuscript database

    A sorghum population for Targeting Induced Local Lesions IN Genomes (TILLING) was generated from BTx623 in 2005 and made publicly available in 2009. After its release to the public, this population was intensively screened by morphological observation in the field, and a number of mutants with useful traits wer...

  12. The Role of the Sonic Hedgehog Pathway for Prostate Cancer Progression

    DTIC Science & Technology

    2005-02-01

    analyses. Sumin Chi contributed to wers GY, Qi YP, Gysin S, Fernandez-Del Castillo C, Yajnik V. AntoniuB, McMahon M, Warshaw AL Hebrok M: Hedgehog is an...role for p27kiP, gene dosage • 15. Romer JT, Kimura H, Magdaleno S et at: 391(6662), 90-92 (1998). in a mouse model of prostate carcinogenesis

  13. A Bid for Success in Operation Enduring Freedom: Applying Strategic Lessons from Past and Current Afghanistan Campaigns

    DTIC Science & Technology

    2011-02-16

    EXPRESSED HEREIN ARE THOSE OF THE INDIVIDUAL STUDENT AUTHOR AND DO NOT NECESSARILY REPRESENT THE VIEWS OF EITHER THE MARINE CORPS COMMAND AND STAFF COLLEGE OR...attempted to flee after a purported agreement with the Afghans and were slaughtered or captured, save one man, over the course of a week. 58 The

  14. LAND USE AS A MITIGATION STRATEGY FOR THE WATER QUALITY IMPACTS OF GLOBAL WARMING: A SCENARIO ANALYSIS ON TWO WATERSHEDS IN THE OHIO RIVER BASIN

    EPA Science Inventory

    This study uses an integrative approach to study the water quality impacts of future global climate and land use changes. Changing land use types was used as a mitigation strategy to reduce the adverse impacts of global climate change on water resources. The climate scenarios wer...

  15. Effects of marketing group and production focus on quality and variability of adipose tissue and bellies sourced from a commercial processing facility

    USDA-ARS?s Scientific Manuscript database

    Objectives were to determine the effects of marketing group on quality and variability of belly and adipose tissue quality traits of pigs sourced from differing production focuses (lean vs. quality). Pigs (N = 8,042) raised in 8 barns representing 2 seasons (cold and hot) were used. Three groups wer...

  16. Inactivation of Toxoplasma gondii on blueberries using low dose irradiation without affecting quality

    USDA-ARS?s Scientific Manuscript database

    Blueberries (10 g) inoculated with T. gondii (5 log oocysts/g) were exposed to an absorbed dose of 0 (control), 0.2, 0.4 or 0.6 kGy gamma radiation at 4°C. After treatment, oocysts were recovered from berries by washing, and excysted sporozoites were enumerated using a plaque assay. Vero cells wer...

  17. Arsenic and Antimony Removal from Drinking Water by Adsorptive Media - U.S. EPA Demonstration Project at South Truckee Meadows General Improvement District (STMGID), NV, Final Performance Evaluation Report

    EPA Science Inventory

    This report documents the activities performed during and the results obtained from the operation of an arsenic and antimony removal technology demonstrated at the South Truckee Meadows General Improvement District (STMGID) in Washoe County, NV. The objectives of the project wer...

  18. Transcription factor ZBED6 mediates IGF2 gene expression by regulating promoter activity and DNA methylation in myoblasts

    USDA-ARS?s Scientific Manuscript database

    Zinc finger, BED-type containing 6 (ZBED6) is an important transcription factor in placental mammals, affecting development, cell proliferation and growth. In this study, we found that the expression of ZBED6 and IGF2 was upregulated during C2C12 differentiation. The IGF2 expression levels wer...

  19. A scientific note on detection of honey bee viruses in the darkling beetle (Alphitobius diaperinus), an inhabitant in Apis cerana colonies

    USDA-ARS?s Scientific Manuscript database

    The darkling beetles, Alphitobius diaperinus (Panzer), are omnivorous arthropods and pose significant danger to the poultry industry by acting as reservoir and vector of poultry pathogens. Here, the A. diaperinus was first found in the Asian honey bee Apis cerana colonies, and 10 of the 29 hives wer...

  20. PHYSIOLOGICAL AND FOLIAR INJURY RESPONSES OF PRUNUS SEROTINA, FRAXINUS AMERICANA, AND ACER RUBRUM SEEDLINGS TO VARYING SOIL MOISTURE AND OZONE. (R825244)

    EPA Science Inventory

    Sixteen black cherry (Prunus serotina, Ehrh.), 10 white ash (Fraxinus americana, L.) and 10 red maple (Acer rubrum, L.) 1-year old seedlings were planted per plot in 1997 on a former nursery bed within 12 open-top chambers and six open plots. Seedlings wer...

  1. Measurement and Calculation of Developing Turbulent Flow in a U-Bend and Downstream Tangent of Square Cross-Section.

    DTIC Science & Technology

    1981-12-01

    fundamental experimental measurements have been made in a curved duct configura- tion of both industrial and academic significance. 3. Numerical...rne U r ad t -/tu ( u]u whee C hasute~l costt valu gie bseo The spatiaeraiono ielt is termined ur Oz a ru hihRynlsnubr i rpotoalt 32/,wer s h ae fdsspto

  2. Near East/South Asia Report.

    DTIC Science & Technology

    1987-01-20

    sheep pox vaccines, artificial insemination, soil testing and others. In the meantime, the Soviet scientists introduced the Soviet sunflower into...voltage power transmission line from the Soviet Union to northern regions of the DRA, the earth satellite link station, road-cum-rail...ISRO in making and supplying "vital and sensitive" electronic items required by ISRO for remote sensing satellites, augmented satellite

  3. Mapping of leaf rust resistance genes and molecular characterization of the 2NS/2AS translocation in the wheat cultivar Jagger

    USDA-ARS?s Scientific Manuscript database

    Winter wheat cultivar 'Jagger' was recently found to have an alien chromosomal segment 'VPM1' that should carry Lr37, a gene conferring resistance against leaf rust caused by Puccinia triticina, and this cultivar was also reported to have the wheat gene Lr17 against leaf rust. Both Lr17 and Lr37 wer...

  4. Nano-Mechanical Properties of Heat Inactivated Bacillus anthracis and Bacillus thuringiensis Spores

    DTIC Science & Technology

    2008-03-01

    Scanner Laser Mirror Cantilever Sample Probe Tip 16 cereus strain 569, and Bacillus globigii var. niger. Zolock determined that there wer...been used to measure the surface elasticities of a variety of microbial organisms including Pseudomonas putida, Bacillus subtilis, Aspergillus...66:307-311 (2005). Zhao, Liming, David Schaefer, and Mark R. Marten. "Assessment of Elasticity and Topography of Aspergillus nidulans Spores via

  5. 49 CFR Appendix B to Part 220 - Recommended Pronunciation of Numerals

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...”should be used preceding such numbers. Numbers should be pronounced as follows: Number Spoken 0 ZERO. 1 WUN. 2 TOO. 3 THUH-REE-. 4 FO-WER. 5 FI-YIV. 6 SIX. 7 SEVEN. 8 ATE. 9 NINER. (The figure ZERO should be written as “0” to distinguish it from the letter “O”. The figure ONE should be underlined to...

  6. Intake of honey mesquite (Prosopis glandulosa) leaves by lambs using different levels of activated charcoal

    USDA-ARS?s Scientific Manuscript database

    A 24-day feeding trial was conducted to assess the effect of feeding four levels of activated charcoal (0.0, 0.33, 0.67 and 1.00 g/kg of body weight) on intake of honey mesquite leaves (Prosopis glandulosa Torr.) by 20 wether lambs (36.6 ± 0.6 kg) that were randomly assigned to treatments. Lambs wer...

  7. Characterization of Atmospheric Turbulence Effects Over 149 km Propagation Path using Multi-Wavelength Laser Beacons

    DTIC Science & Technology

    2010-09-01

    received beams (Fig. 2). Narrow bandpass filters were used to dedicate each subaperture to a specific wave from a single beacon. In this paper we... (6), where Ī_n(r) = (1/M) Σ_{m=1}^{M} I_m(r) is the aperture-average intensity for the nth frame. The index S in Eq. (6) denotes averaging over

  8. Retention and readability of radio frequency identification transponders in beef cows over a five-year period

    USDA-ARS?s Scientific Manuscript database

    Objective of this study was to evaluate failure (loss or inability to read) of radio frequency identification (RFID) ear tags in beef cows over a 2 to 5 year period under ranching conditions. One of 5 types of RFID tags was applied in the ear of a total of 4316 cows on 4 separate ranches. Tags wer...

  9. Wer kann, darf und soll bleiben? (Who can, may and should stay?) Local re-configurations in the refugee migration and integration discourse

    NASA Astrophysics Data System (ADS)

    Engel, Susen; Deuter, Marie-Sophie

    2018-04-01

    The current discourse on limiting immigration is characterized by social and symbolic boundaries that allow statements about the (un)desirability of certain immigrants and asylum seekers. The article uses the example of the city of Altena to show how, based on strategic approaches to urban development, refugee migration and its impacts are (re)interpreted at the local level.

  10. JPRS Report, West Europe.

    DTIC Science & Technology

    1988-07-20

    of the rightist parties and a good number of the leftist parties. The effects of that earthquake were long-lasting: in the second round of the...Prepare for New Season of Submarine Intrusions 32 Effective ASW Weapon Lacking 32 Stockholm Archipelago Security Measures 35 Submarine Observer...Bundestag caucus to the effect that the FRG should assume greater political responsibility in crisis areas outside the area of the JPRS-WER-88-038

  11. Wer kann, darf und soll bleiben? (Who can, may and should stay?) - Local re-configurations in the refugee migration and integration discourse

    NASA Astrophysics Data System (ADS)

    Engel, Susen; Deuter, Marie-Sophie

    2018-03-01

    The current discourse on limiting immigration is characterized by social and symbolic boundaries that allow statements about the (un)desirability of certain immigrants and asylum seekers. The article uses the example of the city of Altena to show how, based on strategic approaches to urban development, refugee migration and its impacts are (re)interpreted at the local level.

  12. Flame blowout and pollutant emissions in vitiated combustion of conventional and bio-derived fuels

    NASA Astrophysics Data System (ADS)

    Singh, Bhupinder

    The widening gap between the demand and supply of fossil fuels has catalyzed the exploration of alternative sources of energy. Interest in the power, water extraction and refrigeration (PoWER) cycle, proposed by the University of Florida, as well as the desirability of using biofuels in distributed generation systems, has motivated the exploration of biofuel vitiated combustion. The PoWER cycle is a novel engine cycle concept that utilizes vitiation of the air stream with externally-cooled recirculated exhaust gases at an intermediate pressure in a semi-closed cycle (SCC) loop, lowering the overall temperature of combustion. It has several advantages including fuel flexibility, reduced air flow, lower flame temperature, compactness, high efficiency at full and part load, and low emissions. Since the core engine air stream is vitiated with the externally cooled exhaust gas recirculation (EGR) stream, there is an inherent reduction in the combustion stability for a PoWER engine. The effect of EGR flow and temperature on combustion blowout stability and emissions during vitiated biofuel combustion has been characterized. The vitiated combustion performance of biofuels methyl butanoate, dimethyl ether, and ethanol have been compared with n-heptane, and varying compositions of syngas with methane fuel. In addition, at high levels of EGR a sharp reduction in the flame luminosity has been observed in our experimental tests, indicating the onset of flameless combustion. This drop in luminosity may be a result of inhibition of processes leading to the formation of radiative soot particles. One of the objectives of this study is finding the effect of EGR on soot formation, with the ultimate objective of being able to predict the boundaries of flameless combustion. Detailed chemical kinetic simulations were performed using a constant-pressure continuously stirred tank reactor (CSTR) network model developed using the Cantera combustion code, implemented in C++. Results have been presented showing comparative trends in pollutant emissions generation, flame blowout stability, and combustion efficiency. (Full text of this dissertation may be available via the University of Florida Libraries web site. Please check http://www.uflib.ufl.edu/etd.html)
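
    The dissertation describes its kinetics work as a constant-pressure CSTR network built with the Cantera combustion code in C++. The fragment below is only a minimal Python Cantera sketch of a single constant-pressure reactor with an EGR-like diluted charge, to show the kind of calculation involved; the mechanism file, composition, temperature, and integration time are assumptions, not values from the study.

      # Minimal sketch (not the author's model): a single constant-pressure reactor in
      # Cantera with an EGR-diluted methane/air mixture. gri30.yaml, the composition,
      # and the end time are illustrative assumptions.
      import cantera as ct

      gas = ct.Solution("gri30.yaml")
      # Vitiated "air": fresh air blended with a cooled exhaust-like diluent (assumed numbers).
      gas.TPX = 1200.0, ct.one_atm, "CH4:1, O2:2, N2:7.52, CO2:0.5, H2O:0.5"

      reactor = ct.IdealGasConstPressureReactor(gas)
      sim = ct.ReactorNet([reactor])

      # March in time and watch the temperature; a vanishing temperature rise as the
      # diluent fraction grows is one crude indicator of approaching blowout.
      t = 0.0
      while t < 0.05:
          t = sim.step()
      print(f"final T = {reactor.T:.1f} K, X_CO2 = {reactor.thermo['CO2'].X[0]:.4f}")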

  13. Afferent Mechanisms of Microwave-Induced Biological Effects.

    DTIC Science & Technology

    1987-08-12

    an effect by treatment with low doses of narcotic antagonist (naloxone or naltrexone) was used as a criterion for the involvement of endogenous...block number) Effects of low-level microwave irradiation on neurological function were investigated in the rat. Results can be summarized in the...effects of microwave exposure and may have important implications in certain occupational situations in which repeated exposure to low-level microwaves is

  14. Archeological Investigations in Cochiti Reservoir, New Mexico. Volume 1. A Survey of Regional Variability.

    DTIC Science & Technology

    1977-06-01

    Abstract (Umit =0 worfs) A total of 325 archeological sites were documented during surveys of Cochiti Reservoir. Detailed summaries of environmental...ted sites within a regional context. during the course of this project. This and continuing re- search problems wer selected which focused upon search...for sites newly documented during survey. Survey of the permanent pool was conducted by Richard C.. Chapman, supervisory archeologist James

  15. Procedural Tests for Anti-G Protective Devices. Volume II. G-Sensitivity Tests

    DTIC Science & Technology

    1979-12-01

    of these valves was used in only one type of aircraft--the ALAR AGV in ...pattern. 3) Total included, inexplicitly in the total for this column along with Failures and OTH/MAL's are Type 6 HOW MALFUNCTION CODES--which...maintenance. Because Type 6 HOW MALFUNCTION CODES were not considered pertinent to this investigation, they were not included in the report. All figures of

  16. Effect of varying ratios of produced water and municipal water on soil characteristics, plant biomass, and secondary metabolites of Artemisia annua and Panicum virgatum

    USDA-ARS?s Scientific Manuscript database

    Coal-bed natural gas production in the U.S. in 2012 was 1,655 billion cubic feet (bcf). A by-product of this production is co-produced water, which is categorized as a waste product by the Environmental Protection Agency. The effects of varying concentrations of coal-bed methane (produced) water wer...

  17. Brassinosteroids control root epidermal cell fate via direct regulation of a MYB-bHLH-WD40 complex by GSK3-like kinases

    PubMed Central

    Cheng, Yinwei; Zhu, Wenjiao; Chen, Yuxiao; Ito, Shinsaku; Asami, Tadao; Wang, Xuelu

    2014-01-01

    In Arabidopsis, root hair and non-hair cell fates are determined by a MYB-bHLH-WD40 transcriptional complex and are regulated by many internal and environmental cues. Brassinosteroids play important roles in regulating root hair specification by unknown mechanisms. Here, we systematically examined root hair phenotypes in brassinosteroid-related mutants, and found that brassinosteroid signaling inhibits root hair formation through GSK3-like kinases or upstream components. We found that with enhanced brassinosteroid signaling, GL2, a cell fate marker for non-hair cells, is ectopically expressed in hair cells, while its expression in non-hair cells is suppressed when brassinosteroid signaling is reduced. Genetic analysis demonstrated that brassinosteroid-regulated root epidermal cell patterning is dependent on the WER-GL3/EGL3-TTG1 transcriptional complex. One of the GSK3-like kinases, BIN2, interacted with and phosphorylated EGL3, and EGL3s mutated at phosphorylation sites were retained in hair cell nuclei. BIN2 phosphorylated TTG1 to inhibit the activity of the WER-GL3/EGL3-TTG1 complex. Thus, our study provides insights into the mechanism of brassinosteroid regulation of root hair patterning. DOI: http://dx.doi.org/10.7554/eLife.02525.001 PMID:24771765

  18. Benchmarking of calculation schemes in APOLLO2 and COBAYA3 for VVER lattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheleva, N.; Ivanov, P.; Todorova, G.

    This paper presents solutions of the NURISP VVER lattice benchmark using APOLLO2, TRIPOLI4 and COBAYA3 pin-by-pin. The main objective is to validate MOC based calculation schemes for pin-by-pin cross-section generation with APOLLO2 against TRIPOLI4 reference results. A specific objective is to test the APOLLO2 generated cross-sections and interface discontinuity factors in COBAYA3 pin-by-pin calculations with unstructured mesh. The VVER-1000 core consists of large hexagonal assemblies with 2 mm inter-assembly water gaps which require the use of unstructured meshes in the pin-by-pin core simulators. The considered 2D benchmark problems include 19-pin clusters, fuel assemblies and 7-assembly clusters. APOLLO2 calculation schemes with the step characteristic method (MOC) and the higher-order Linear Surface MOC have been tested. The comparison of APOLLO2 vs. TRIPOLI4 results shows a very close agreement. The 3D lattice solver in COBAYA3 uses transport corrected multi-group diffusion approximation with interface discontinuity factors of Generalized Equivalence Theory (GET) or Black Box Homogenization (BBH) type. The COBAYA3 pin-by-pin results in 2, 4 and 8 energy groups are close to the reference solutions when using side-dependent interface discontinuity factors. (authors)

  19. Comparative Test of the Effectiveness of Large Bombs against Reinforced Concrete Structures (Anglo-American Bomb Tests-Project RUBY).

    DTIC Science & Technology

    1946-10-31

    be expected to perforate up to 15'-10" of reinforced concrete at this striking velocity. (3) The rocket-assisted 4500-lb. Disney bomb, with a striking...Aaedttted wer the ia&AlM srfase of th. bae" at tiehes- It~, is possibi,. thaltA We as OW Wae rleaketeI4" xibmaded ArM the avl.s )at Roe blsm autos

  20. JPRS Report, West Europe.

    DTIC Science & Technology

    1988-02-09

    Manufacture Missiles, Cannons [TIEMPO, 16 Nov 87] 81 Details on Chronic Trade Deficit With USSR [MERCADO, 13 Nov 87] 83 JPRS-WER-88-006 9 February... municipality of Vienna or the Federal Government do not care at all that jobs of Austrian construction workers are unjustly being taken away from...coming behind PASOK and that this had been proven to be incorrect. Specifically, an ICAP poll on the eve of the 1982 municipal elections had Mr Beis

  1. Researches on Preliminary Chemical Reactions in Spark-Ignition Engines

    DTIC Science & Technology

    1943-06-01

    correct ratio for air, m_th, was determined analytically from the proportion of alcohols, olefines, 16 NACA Technical Memorandum No. 1049 aromatics...umption is therefore permissible. The mixture ratio for a fuel will then depend on its aromatic and alcohol content as found by chemical analysis. The...composition, would behave differently. To obtain information on this point, tests were made with pure iso-octane, ethyl alcohol, and benzol, all

  2. JPRS Report West Europe

    DTIC Science & Technology

    1988-10-28

    analysis effort can start. This analysis work will be done in the spring of 1989, and the Defense Commission is to be ready with its report at the end...production processes, electronic equipment and weapons promptly. JPRS-WER-88-061 28 October 1988 21 MILITARY The project JF 90 will eclipse all of the...by the numerical superiority of a potential enemy, his attack-oriented military doc- trine and equipment , and his continually improving all-weather

  3. Resource Utilization in Ambulatory Primary Care at Darnall Army Community Hospital, Fort Hood, Texas

    DTIC Science & Technology

    1991-07-23

    population: gastroenteritis (GI), otitis media (OM), and upper respiratory infection (URI). These resources will be the dependent variables of the study...1991. The children ranged in age from 3 to 5 years old and were diagnosed with otitis media, upper respiratory tract infection, and gastroenteritis. A...x-rays were rarely ordered to confirm the diagnoses of otitis media, gastroenteritis, and upper respiratory tract infection. Only eight laboratory

  4. West Europe Report

    DTIC Science & Technology

    1987-03-12

    system with a 12cm high-pressure gun ought to be procured as quickly as possible. This need can be met in one of the following ways: 1. By...on the other hand, a new tank is not procured, a light self-propelled antitank gun with a 12cm high-pressure gun for the mechanized units should...126066 JPRS-WER-87-018 12 MARCH 1987 West Europe Report FBIS FOREIGN BROADCAST INFORMATION SERVICE REPRODUCED BY U.S. DEPARTMENT OF COMMERCE

  5. Chemical Reactions in Turbulent Mixing Flows.

    DTIC Science & Technology

    1986-04-10

    fluctuation of the " flame " length of such reactingjeatreent wer copoit sequencd shout tnhawate facility, documented previously’,’ 1 , using laser jets. A...motion film of such a chemically reacting turbulent jet visualized using this technique, is shown in figure 1. In each I. Flame length fluctuations of...acid-base reaction to determine length and time to allow a simultaneous view of mixing in the two scales for the flame length fluctuations of thin

  6. East Europe Report, Economic and Industrial Affairs, No. 2408.

    DTIC Science & Technology

    1983-06-08

    these power plants; —[the crisis resulting from] insufficient geological surveys (too few drillings, insufficient reserves' analysis), serious...which swallow the sardines of the local factories and plants. Regional banks pull their shutters down the moment a few ounces of gold raise or lower...accordance with the multilateral program of R&D cooperation entitled "Diagnostics of VVER Nuclear Power Plants' Operating State," a unique

  7. Effects of changing milk replacer feedings from twice to once daily on Holstein calf innate immune responses before and after weaning

    USDA-ARS?s Scientific Manuscript database

    The objectives of this study were to determine the effects of switching Holstein calves to once-a-day feeding during the 4th week of life (24 ± 2.3 d of age; once-fed n = 22; twice-fed n = 22) on innate immune responses, and also evaluate whether there were any carry-over effects when the calves wer...

  8. Lightweight Towed Howitzer Demonstrator. Phase 1 and Partial Phase 2. Volume D1. Part 2. Structural Analysis (Less Cradle and System).

    DTIC Science & Technology

    1987-04-01

    tolerance. The anisotropic laminate material was analyzed in a ply-by-ply fashion. Recommendations were made for design changes which could reduce weight, reduce strains and increase stiffness. 1. The above process was repeated with data modifications to reflect the desired design changes

  9. JPRS Report, West Europe.

    DTIC Science & Technology

    1988-01-20

    Details on 'Jupiter-87' Air Maneuvers Provided [Manuel Catarino; O DIA, 3 Nov 87] 25 Military Officer Laments Lack of Air Force Resources [DIARIO DE...del Melo said that "There is no desire" on the part of the Portuguese authorities to accept the fighter squadrons. JPRS-WER-88-002 20 January 1988...country. However, Eurico del Melo is known for his "realism" and his caution. Would he voice such an absolute judgment even before having received

  10. Development of Site-Specific Water Quality Criteria for the Arpa Harbor Wastewater Treatment Plant in Tipalao Bay, Guam

    DTIC Science & Technology

    2016-07-01

    multiplied by the WER, also expressed as DM, which is multiplied by a mixing zone; the product of these three values is then divided by the chemical...involves corrections, additions, and deletions to the national toxicity data set, rendering it more representative of species occurring at a specific...scientific evidence to indicate clear adverse linkages between aluminum and adverse effects to marine organisms. In addition, USEPA has

  11. Enhanced Peptide of Prostate Cancer Using Targeted Adenoviral Vectors

    DTIC Science & Technology

    2005-06-01

    receptor subtype 2 has been constructed and evaluated in human prostate cancer cells with regard to binding of 64Cu-octreotide. In vivo experiments...of 64Cu-octreotide. The mice were sacrificed 1 h after peptide injection for biodistribution analysis. In vivo biodistribution studies showed...similar uptake of 64Cu-octreotide in both DU-145 and PC-3 tumors after infection with AdSSTR2 (2.5 and 2.7% ID/g, respectively). This uptake was

  12. JPRS Report West Europe.

    DTIC Science & Technology

    1988-09-20

    accommodate shipwreck victims when necessary. The combat system of the new patrol vessels was built by Selenia Elsag and is based on a Pegasus "optronic...Selenia Elsag at the firm’s offices. It is also planned to take the crew (divided into groups of equal size) on board during the 6 months before the...Armament: two 30-mm Bredas; two 7.62 mm Elec- tronics: one Selenia Elsag Pegasus fire-control center; two GEM navigational radars JPRS-WER-88-052 20

  13. Director, Operational Test and Evaluation Report FY�

    DTIC Science & Technology

    1989-01-19

    FIRST QUARTER, FY89 ... VII-1 GLOSSARY OF ACRONYMS ... G-1 ... PART I DOT&E ACTIVITY SUMMARY AND...consisted of the approved air threat...the criteria were expanded to provide...COMMUNICATIONS TERMINAL AN/TRC-179(V)1 (FORCE-MOBILE) MANPACK P/O AN/GRC-215 GENERATOR SET PU-794/G (MOD) COMMUNICATIONS TERMINAL AN/GRC-215 AN

  14. Correcting for sequencing error in maximum likelihood phylogeny inference.

    PubMed

    Kuhner, Mary K; McGill, James

    2014-11-04

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
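
    The correction discussed above rests on folding a per-base error rate into the likelihood of each observed base. Below is a minimal sketch of that building block, assuming errors are spread uniformly over the three alternative bases; the authors' exact model may differ, and the prior used here is a placeholder.

      # Hedged sketch: probability of observing base `obs` when the true base is
      # `true`, given a per-base sequencing error rate. Errors are assumed uniform
      # over the three alternative bases; this is an illustration, not the exact model.

      BASES = "ACGT"

      def p_obs_given_true(obs: str, true: str, error_rate: float) -> float:
          if obs == true:
              return 1.0 - error_rate
          return error_rate / 3.0

      def site_likelihood(obs: str, true_base_probs: dict, error_rate: float) -> float:
          """Likelihood of an observed base, marginalizing over the unknown true base."""
          return sum(p * p_obs_given_true(obs, b, error_rate) for b, p in true_base_probs.items())

      # With a 1% error rate an observed 'A' is still most likely a true 'A', but the
      # small chance that it is a miscalled 'C', 'G', or 'T' is no longer ignored.
      prior = {b: 0.25 for b in BASES}
      print(site_likelihood("A", prior, error_rate=0.01))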

  15. Terminal Forecast Reference Notebook (TFRN) for George AFB, California.

    DTIC Science & Technology

    1980-09-04

    Sierrai Ne-vadal ranEgo to thle northi. Seventy mi les west-northwest Is Tehachapi Pass (ci cv. 3,800’ MSI.) * lie gnteWay Loii t lo I.wer San Joia ~qin... designed to provide aIgencieos soift icient time. to accomplish protect ive mteasures. Under Phasie 1, non-destructive weather condit ions (scich as...orge AFB. Action Taken to Resolve Problem: a. Trljectory htlletIn Information Is being plott.I for po;sible storm signatures as des - cribed on page 36

  16. A National Level Engagement Strategy: A Framework for Action

    DTIC Science & Technology

    2012-05-15

    engagement framework that integrates all instruments of national power to focus increasingly limited resources to meet the most significant national...and responsibilities to implement and execute the national engagement

  17. Sources & Transport Mechanisms of Sediments in the Oceans.

    DTIC Science & Technology

    1982-03-31

    estuaries 03si e m arh ket. The. peristaltic action, dugt is. pro ~gre- and oceans. The concentration of doe suspended sample was testig were obtained by...waters: Clays Clay Minerals, Seventh National Con - weer: Progress in water technology. v. 7, p. 207-216. ference, p. 1-79. Guess, R. J. (1977), Clay...the (10) ZAbawa, C. Science (Wahigton, D.C) 1978,202,49.4 dlyt Of the GMo wer 45 jum wit a meal portion (11) G b.I J.; Kenwar, L Environ. Sci. Technol

  18. Proceedings of the Annual Symposium on Frequency Control (34th), Held 28-30 May 1980 Philadelphia, Pennsylvania

    DTIC Science & Technology

    1980-01-01

    change between 2 x 10^-2 torr pressure and 1 atmosphere was measured (in a non-temperature-controlled environment) to be less than (Figure 8)...microstrip, however, are less desirable. To control radiation...non-resonant and non-propagating. Losses due to finite substrate thickness were determined by...Temperature dependence of the stabilized oscillator. 254 Proc. 34th Ann. Freq. Control Symposium, USAERADCOM, Ft. Monmouth, NJ 07703, May 1980 NON-LINEAR

  19. Proceedings of the Annual Conference of the Military Testing Association (19th), 17-21 October 1977, San Antonio, Texas

    DTIC Science & Technology

    1979-11-01

    to-air kills, especially since we will probably be able to field only a relatively small number of fighter pilots in future wars. Fighter pilots have...Education and Training, Information, and Legal, too many colonel positions were allocated. Overall, many jobs were found to be undergraded, especially at the...preparing the final apprentice progress/status report, the Navy education specialist will regard previous work experience as distri±xtnd

  20. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. Any adaptation of the quantum error correction code or its implementation circuit is not required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
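
    The protocol estimates and predicts error rates from past error-correction data with a Gaussian process. A minimal sketch of that idea on synthetic data, using scikit-learn and an assumed RBF-plus-noise kernel (not necessarily the kernel or data model used in the paper), is:

      # Hedged sketch: track a time-dependent error rate with a Gaussian process by
      # fitting per-window error-rate estimates and predicting the current rate.
      # The data are synthetic and the kernel choice is an assumption.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 10.0, 40).reshape(-1, 1)              # time of each estimation window
      true_rate = 0.01 + 0.005 * np.sin(0.5 * t).ravel()         # slowly drifting error rate
      observed = true_rate + rng.normal(0.0, 0.001, t.shape[0])  # noisy estimates from syndrome data

      kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-6)
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, observed)

      t_now = np.array([[10.5]])
      mean, std = gp.predict(t_now, return_std=True)
      print(f"predicted error rate now: {mean[0]:.4f} +/- {std[0]:.4f}")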

  1. Cost Effectiveness Trade-Offs in Software Support Environment Standardization.

    DTIC Science & Technology

    1986-09-30

    FINAL REPORT - September 30, 1986 TECHNION INTERNATIONAL, INC. Cost...Summary description of econometric model B-1 C. Causal chain used as basis for model C-1 D. Excerpts from [Wer185] D-1 LIST OF FIGURES S-1 USAF MCCR...Productivity cost drivers D-4 LIST OF TABLES 1-1 Summary of Tangible Benefits in Econometric Equations 1-9 1-2 Summary of Tangible Costs in Econometric

  2. [Lethal anaphylactic shock model induced by human mixed serum in guinea pigs].

    PubMed

    Ren, Guang-Mu; Bai, Ji-Wei; Gao, Cai-Rong

    2005-08-01

    To establish an anaphylactic shock model induced by human mixed serum in guinea pigs. Eighteen guinea pigs were divided into two groups: sensitized and control. The sensitized group was immunized intracutaneously with human mixed serum, and shock was then induced by endocardiac injection after 3 weeks. Symptoms of anaphylactic shock appeared in the sensitized group. The level of serum IgE was significantly increased in the sensitized group. An animal model of anaphylactic shock was established successfully. It provides a tool for both forensic study and anaphylactic shock therapy.

  3. Polychronicity and its Impact on Leader-Member Exchange and Outcome Behaviors

    DTIC Science & Technology

    2008-05-01

    feet asense ofpride In r-C .C r C. dolng my job. Pagje 43 112 I be" fto Moch r r r r 711etwd w M 0 t r r IrI goS.l $ad "we~ at Kf joIsq~*s ,C VA"IL...Air Forma Of of th -e *" thdrAir Frce Adiemy *VON"b alwva It I hadnot ale"dput r- Ir4-4 so amc of MYOWl in the Mr Fore Academy, I meight omWer eo" to WW

  4. Basal Ganglia Dopamine-gamma-Aminobutyric Acid-Acetylcholine Interaction in Organophosphate-Induced Neurotoxicity. Appendices

    DTIC Science & Technology

    1982-12-01

    Animals were maintained ad libitum on standard laboratory chow and tap water and were housed in a room with automatic 12 hour light and dark cycles and...supernatant fluid was centrifuged at 20,000 x g for 20 min to obtain a crude mitochondrial pellet. The crude mitochondrial pellet was resuspended...chow and tap water and were housed in a room with automatic 12 hour light and dark cycles and temperature set at 25.5 ± 1.8°C. Diisopropylfluoro

  5. Cost-Benefit Analysis of Possible U.S. Adherence to Two International Conventions on Liability and Compensation for Oil Pollution Damages.

    DTIC Science & Technology

    1983-06-30

    advantages and disadvantages of adhering to the conventions. This report presents the results of the cost-benefit analysis. The primary question...AD-A133 iii COST-BENEFIT ANALYSIS OF POSSIBLE US ADHERENCE TO TWO 114 INTERNATIONAL CONVE..(U) TEMPLE BARKER AND SLOANE INC LEXINGTON MA 30 JUN 83...Accession No. 3. Recipient's Catalog No. CC-WER-83-1 4. Title and Subtitle Cost-Benefit Analysis of Possible U.S. 5. Report Date Adherence to Two

  6. U. S. Naval Forces, Vietnam Monthly Historical Summary for April 1969

    DTIC Science & Technology

    1969-05-01

    Rach Gia-Long Xuyen and Rach Soi canals made no contact with enemy forces during April. Routine patrols by PBR's continued in the shallow coastal areas of...Expending 11 rounds of mortar fire, the cutter covered the target area well and observed two enemy killed; the other four were estimated to have been either...definitely been in the area as enemy mines killed two Marines and wounded another twenty-six. Security and reconnaissance missions were performed by the

  7. Identification or Development of Chemical Analysis Methods for Plants and Animal Tissues

    DTIC Science & Technology

    1981-01-01

    Report No. DRXTH-TE-CR-80086 [E EMWSTEEACISITE - IDENTIFICATION OR DEVELOPMENT OF CHEMICAL ANALYSISI METHODS FOR PLANTS AND ANIMAL TISSUES D l...86i 4TITLE (ansdo"t) TYPE or" P 4. Iih~iti.)Final epwt. )9 A4,417 ~, Identlifiation or Development of Chemical I-g 9 O d 18 Analysis Methods for Plants...n TN oT sacmpihdo in. wEr DContinng er adetcsr aid Iuent bybopkmertda lo aeo 1.5th 1/s deeipta24ndectr Tritrotasolsoepae d andT Boloia Matrc,, 1473

  8. Studies on Toxoplasmosis in Animals in Association with Man in Egypt.

    DTIC Science & Technology

    1978-07-01

    AD A067 320 AIN SHAMS UNIV CAIRO (EGYPT) FACULTY OF MEDICINE STUDIES ON TOXOPLASMOSIS IN ANIMALS IN ASSOCIATION WITH MAN IN...128). Serological results for sera of cows: sera showed negative reactions for toxoplasmosis and 11 sera were positive for Toxoplasma...1/32 and two sera at a titre of 1/64. Concerning cows' sera from Rashid, 13 specimens out of those procured were seronegative for toxoplasmosis

  9. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to cleaner-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
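
    The likelihood paradigm described here scores each voxel with a likelihood ratio rather than a p-value. Below is a toy sketch for a simple normal model; the effect size, noise level, and synthetic data are placeholders rather than values from the paper.

      # Hedged sketch: a voxel-wise likelihood ratio comparing "activation = delta"
      # against "activation = 0" under a simple normal model. Parameters are placeholders.
      import numpy as np
      from scipy.stats import norm

      def likelihood_ratio(voxel_data: np.ndarray, delta: float, sigma: float) -> float:
          """LR much greater than 1 favors activation; LR much less than 1 favors the null."""
          l_alt = norm.pdf(voxel_data, loc=delta, scale=sigma).prod()
          l_null = norm.pdf(voxel_data, loc=0.0, scale=sigma).prod()
          return l_alt / l_null

      rng = np.random.default_rng(1)
      active_voxel = rng.normal(0.5, 1.0, size=20)   # synthetic "active" time points
      quiet_voxel = rng.normal(0.0, 1.0, size=20)    # synthetic null voxel
      print(likelihood_ratio(active_voxel, delta=0.5, sigma=1.0))
      print(likelihood_ratio(quiet_voxel, delta=0.5, sigma=1.0))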

  10. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo

    1986-01-01

    A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon <1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
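
    As a rough illustration of the kind of calculation such an analysis rests on (not the specific cascaded scheme of the report), the block error probability of a single t-error-correcting code of length n over a binary symmetric channel with crossover probability epsilon is the probability of more than t bit errors. The short Python sketch below evaluates that quantity; the parameters are arbitrary examples.

        from math import comb

        def block_error_probability(n, t, eps):
            """P(more than t of n bits are flipped) on a BSC with crossover probability eps."""
            p_correctable = sum(comb(n, i) * eps**i * (1 - eps)**(n - i) for i in range(t + 1))
            return 1 - p_correctable

        # A sufficiently powerful code keeps block errors rare even at a fairly high channel bit error rate.
        print(block_error_probability(n=255, t=16, eps=0.01))   # very small
        print(block_error_probability(n=255, t=16, eps=0.05))   # orders of magnitude larger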

  11. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  12. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  13. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  14. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  15. Implementation of the Intelligent Voice System for Kazakh

    NASA Astrophysics Data System (ADS)

    Yessenbayev, Zh; Saparkhojayev, N.; Tibeyev, T.

    2014-04-01

    Modern speech technologies are highly advanced and widely used in day-to-day applications. However, they mostly cover the languages of well-developed countries, such as English, German, Japanese and Russian. For Kazakh, the situation is far less developed, and research in this field is only starting to evolve. In this research and application-oriented project, we introduce an intelligent voice system for the fast deployment of call centers and information desks supporting Kazakh speech. The demand for such a system is obvious given the country's large size and small population: landlines and cell phones are often the only means of communication for distant villages and suburbs. The system features Kazakh speech recognition and synthesis modules as well as a web GUI for efficient dialog management. For speech recognition we use the CMU Sphinx engine, and for speech synthesis, MaryTTS. The web GUI is implemented in Java, enabling operators to quickly create and manage dialogs in a user-friendly graphical environment. The call routines are handled by Asterisk PBX and the JBoss Application Server. The system supports technologies and protocols such as VoIP, VoiceXML, FastAGI, Java Speech API and J2EE. For the speech recognition experiments we compiled and used the first Kazakh speech corpus, with utterances from 169 native speakers. The performance of the speech recognizer is 4.1% WER on isolated word recognition and 6.9% WER on clean continuous speech recognition tasks. The speech synthesis experiments include the training of male and female voices.
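
    A word error rate such as the 4.1% and 6.9% figures above is conventionally computed as the Levenshtein (edit) distance between the reference and recognized word sequences divided by the number of reference words. The minimal Python sketch below shows the standard dynamic-programming calculation; the function name and example sentences are illustrative and not taken from the cited system.

        def word_error_rate(reference, hypothesis):
            """WER = (substitutions + deletions + insertions) / number of reference words."""
            ref, hyp = reference.split(), hypothesis.split()
            # d[i][j] = edit distance between the first i reference words and the first j hypothesis words
            d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                d[i][0] = i
            for j in range(len(hyp) + 1):
                d[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,         # deletion
                                  d[i][j - 1] + 1,         # insertion
                                  d[i - 1][j - 1] + cost)  # substitution or match
            return d[len(ref)][len(hyp)] / len(ref)

        print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))  # 2 errors / 6 words ≈ 0.33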

  16. An educational and audit tool to reduce prescribing error in intensive care.

    PubMed

    Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D

    2008-10-01

    To reduce prescribing errors in an intensive care unit by providing prescriber education in tutorials, ward-based teaching and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle, once pretraining, once post-training and a final audit after 6 weeks. The audit information was fed back to prescribers with their correct prescribing rates, rates for individual error types and total error rates together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining 25%, 19%, (one missing data point), post-training 23%, 6%, 11%, final audit 7%, 3%, 5% (p<0.0005)). The total number of prescriptions and error rates varied widely between trainees (data collection one; cycle two: range of prescriptions written: 1-61, median 18; error rate: 0-100%; median: 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.

  17. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    PubMed

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are defining, measuring, analysis, improvement and control. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the preanalytic, analytic and postanalytical phases was analysed. Improvement strategies were reclaimed in the monthly intradepartmental meetings and the control of the units with high error rates was provided. Fifty-six (52.4%) of 107 recorded errors in total were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory provided the reduction of the error rates mainly in the pre-analytic and analytic phases.

  18. Data Analysis & Statistical Methods for Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors, the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.

  19. Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.

    2016-12-01

    Efforts have been ongoing for a decade to reach the accuracy of the GRACE baseline predicted earlier from the design simulations. The GRACE error budget is dominated by noise from sensors, dealiasing models and modeling errors, and the GRACE range-rate residuals contain these errors. Their analysis therefore provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance; range-rate residuals are then computed from each dataset and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals, and a correlation between range frequency noise and range-rate residuals is also seen.

  20. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.

  1. [Experience with the Hind Foot Relaxation Boot].

    PubMed

    Zwipp, Hans; Borrmann, Michael; Walter, Eberhard

    2017-06-01

    The goal of this paper is to report our experience with hindfoot fractures using our specially developed boot, with a follow-up of 557 cases. This boot works like the well-known Allgöwer-Röck orthosis (ARO), but is a hybrid between a boot and an orthosis. It allows full weightbearing without crutches and completely protects an acutely operated hindfoot fracture, a hindfoot arthrodesis or a hindfoot fracture suitable for conservative treatment. In its first generation, this boot was custom made and used in 408 cases from March 1999 to February 2011. This study was performed exclusively at the Department of Traumatology and Reconstructive Surgery, since 2013 part of the University Centre of Orthopaedics and Traumatology at the Carl Gustav Carus University Hospital of the Technical University of Dresden. The improved second generation of this boot was used in 149 patients between March 2011 and February 2016. This model is lighter and safer, owing to an aluminium U-profile produced in one piece and interposed and fixed with 4 screws into the sole of the boot. The ground reaction forces are transferred to the tibial head by this U-profile, to which the dorsal acrylic shell for the calf of the Röck system is fixed with 2 screws on both sides, together with the free ventral patellar shell, which is closed individually by two quick-fastener buckles. This modular second-generation system is now available for all patients in Dresden. These new boots have replaced the use of a wheelchair for 3 months and are especially useful in bilateral calcaneus fractures, which occur in about 18% of all cases. In these new boots, the whole sole of the boot is in contact with the ground, rather than a surface of 9 × 3 cm as in the Allgöwer-Röck orthosis. As a result, these boots are considered superior to the ARO because standing and walking without crutches is much easier, even for elderly patients. In contrast to the Allgöwer-Röck orthosis, in which no ground reaction forces are transmitted to the free-hanging foot, some ground contact in the boot is provided through the metatarsal heads and toes, as the foot is positioned at about 20 degrees of equinus. Under these conditions, osteopenia of the foot skeleton and deficits of coordination are observed clinically less often after 3 months than has been the case with the ARO. With the Allgöwer-Röck orthosis for a single injured hindfoot, the leg length must be corrected by up to 8 to 10 cm on the contralateral shoe sole, whereas this new boot allows free walking. In our series of 557 boots in 401 patients, 156 patients wore two boots because of bilateral hindfoot fractures. The patients' mean age was 39.9 years (14 to 80 years), and 83.9% were male. With application of low-molecular-weight heparin and lower-leg compression stockings (primarily of the CCL1 type), there was no dislocation of the hindfoot fractures, no wound complication due to pressure in the boot and no deep vein thrombosis. The main indication for prescribing the boot was bilateral calcaneal fracture (252 cases). In the first generation, fatigue fracture of the aluminium U-profile occurred in 4 of 408 (0.9%) cases; there was only one such case in the second generation (n = 149). The boot was worn during the healing time of the fractures, for a mean of 12.3 weeks in both groups. Georg Thieme Verlag KG Stuttgart · New York.

  2. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression for the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression for the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).

  3. Effect of bar-code technology on the safety of medication administration.

    PubMed

    Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K

    2010-05-06

    Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate)--a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) 2010 Massachusetts Medical Society

  4. Development and implementation of a human accuracy program in patient foodservice.

    PubMed

    Eden, S H; Wood, S M; Ptak, K M

    1987-04-01

    For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent quality-controlled product increases consumer satisfaction and repeat purchase of product. Administrative dietitians have applied the concepts of using human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. Human error rate was used to monitor and evaluate trayline employee performance and to evaluate layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluation. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys show that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.

  5. World energy resources

    NASA Astrophysics Data System (ADS)

    Clerici, A.; Alimonti, G.

    2015-08-01

    As energy is the main "fuel" for social and economic development and since energy-related activities have significant environmental impacts, it is important for decision-makers to have access to reliable and accurate data in a user-friendly format. The World Energy Council (WEC) has for decades been a pioneer in the field of energy resources and every three years publishes its flagship report Survey of Energy Resources. A commented analysis in the light of the latest data summarized in such a report, World Energy Resources (WER) 2013, is presented together with the evolution of the world energy resources over the last twenty years.

  6. Know Your Enemy, Issue Number 8

    DTIC Science & Technology

    1966-11-18

    ...underrated the enemy and only made searches as a matter of form. On approaching the main objective...have been able to destroy them...STUDY ON...preparation of the supply system. Important tonnage of food has to be moved from rear depots to the front line. In addition to their transportation

  7. Quantitative Assessment of HIV Replication and Variation in Vivo: Relevance to Disease Pathogenesis and Response to Therapy

    DTIC Science & Technology

    1994-07-20

    ...antiretroviral therapy and vaccine research efforts. We evaluated quantitative competitive polymerase chain reaction (QC-PCR) and branched DNA (bDNA) signal amplification...therapy. Eighty-... percent of patients had bDNA values above the...RNA equivalents assay sensitivity cutoff. bDNA values were significantly correlated with...determinations by bDNA and QC-PCR assays were quantitatively similar in the range of 10...to 10...RNA molecules/ml (log bDNA = 0.93 + 0.80 log QC-PCR, R^2 = 0.81, p

  8. Musik einpacken

    NASA Astrophysics Data System (ADS)

    Loos, Andreas

    Anyone who has ever made pasta by hand knows that fresh pasta can stick together quite stubbornly. This is a problem for the noodle industry, because it is not easy to fill 500-gram bags precisely with irregular, sticky clumps of noodles. Some manufacturers therefore use partial-quantity scales ("Teilmengenwaagen"). These have up to a hundred small weighing pans, each filled via a conveyor belt with roughly 50 grams of noodle clumps. Then mathematics comes into play: a computer selects the ten pans whose contents together add up to exactly 500 grams and empties them into a bag.
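
    The selection step described above is a small combinatorial optimization: pick the subset of weighing pans whose contents sum as closely as possible to the target weight. The toy Python sketch below illustrates the idea by brute force over a dozen hypothetical pan weights; a real checkweigher has many more pans and uses a faster search.

        from itertools import combinations

        pan_weights = [48.2, 51.7, 49.9, 52.4, 47.1, 50.8, 53.0, 46.5, 50.2, 49.4, 51.1, 48.8]
        target = 500.0

        # Choose the 10 pans whose combined weight is closest to the 500 g target.
        best = min(combinations(range(len(pan_weights)), 10),
                   key=lambda idx: abs(sum(pan_weights[i] for i in idx) - target))
        print(best, round(sum(pan_weights[i] for i in best), 1))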

  9. Calculation of water equivalent ratio of several dosimetric materials in proton therapy using FLUKA code and SRIM program.

    PubMed

    Akbari, Mahmoud Reza; Yousefnia, Hassan; Mirrezaei, Ehsan

    2014-08-01

    Water equivalent ratio (WER) was calculated for different proton energies in polymethyl methacrylate (PMMA), polystyrene (PS) and aluminum (Al) using FLUKA and SRIM codes. The results were compared with analytical, experimental and simulated SEICS code data obtained from the literature. The biggest difference between the codes was 3.19%, 1.9% and 0.67% for Al, PMMA and PS, respectively. FLUKA and SEICS had the greatest agreement (≤0.77% difference for PMMA and ≤1.08% difference for Al, respectively) with the experimental data. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. The influence of the structure and culture of medical group practices on prescription drug errors.

    PubMed

    Kralewski, John E; Dowd, Bryan E; Heaton, Alan; Kaissi, Amer

    2005-08-01

    This project was designed to identify the magnitude of prescription drug errors in medical group practices and to explore the influence of the practice structure and culture on those error rates. Seventy-eight practices serving an upper Midwest managed care (Care Plus) plan during 2001 were included in the study. Using Care Plus claims data, prescription drug error rates were calculated at the enrollee level and then were aggregated to the group practice that each enrollee selected to provide and manage their care. Practice structure and culture data were obtained from surveys of the practices. Data were analyzed using multivariate regression. Both the culture and the structure of these group practices appear to influence prescription drug error rates. Seeing more patients per clinic hour, more prescriptions per patient, and being cared for in a rural clinic were all strongly associated with more errors. Conversely, having a case manager program is strongly related to fewer errors in all of our analyses. The culture of the practices clearly influences error rates, but the findings are mixed. Practices with cohesive cultures have lower error rates but, contrary to our hypothesis, cultures that value physician autonomy and individuality also have lower error rates than those with a more organizational orientation. Our study supports the contention that there are a substantial number of prescription drug errors in the ambulatory care sector. Even by the strictest definition, there were about 13 errors per 100 prescriptions for Care Plus patients in these group practices during 2001. Our study demonstrates that the structure of medical group practices influences prescription drug error rates. In some cases, this appears to be a direct relationship, such as the effects of having a case manager program on fewer drug errors, but in other cases the effect appears to be indirect through the improvement of drug prescribing practices. An important aspect of this study is that it provides insights into the relationships of the structure and culture of medical group practices and prescription drug errors and provides direction for future research. Research focused on the factors influencing the high error rates in rural areas and how the interaction of practice structural and cultural attributes influence error rates would add important insights into our findings. For medical practice directors, our data show that they should focus on patient care coordination to reduce errors.

  11. Emergency department discharge prescription errors in an academic medical center

    PubMed Central

    Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.

    2017-01-01

    This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061

  12. Ageing of a phosphate ceramic used to immobilize chloride-contaminated actinide waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metcalfe, Brian; Donald, Ian W.; Fong, Shirley K.

    2009-03-31

    At AWE, we have developed a process for the immobilization of ILW waste containing a significant quantity of chloride, with Ca3(PO4)2 as the host material. Waste ions are incorporated into two phosphate-based phases, chlorapatite [Ca5(PO4)3Cl] and spodiosite [Ca2(PO4)Cl]. Non-active trials performed at AWE with Sm as the actinide surrogate demonstrated the durability of these phases in aqueous solution. Trials of the process in which actinide-doped materials were used were performed at PNNL, where the waste form was found to be resistant to aqueous leaching. Initial leach trials conducted on 239Pu/241Am-loaded ceramic at 40°C/28 days gave normalized mass losses of 1.2 x 10^-5 g.m^-2 and 2.7 x 10^-3 g.m^-2 for Pu and Cl, respectively. In order to assess the response of the phases to radiation-induced damage, accelerated ageing trials were performed on samples in which the 239Pu was replaced with 238Pu. No changes to the crystalline structure of the waste were detected in the XRD patterns after the samples had experienced an α radiation dose of 4 x 10^18 g^-1. Leach trials showed that there was an increase in the P and Ca release rates but no change in the Pu release rate.

  13. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains

    PubMed Central

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-01-01

    Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. PMID:27193033

  14. Dispensing error rate after implementation of an automated pharmacy carousel system.

    PubMed

    Oswald, Scott; Caldwell, Richard

    2007-07-01

    A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and dispensing error rates had increased. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by December 2005. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.

  15. Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems

    NASA Astrophysics Data System (ADS)

    El-Ghandour, Osama M.; Saha, Debabrata

    1991-05-01

    A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. Symbol error rate is found to be approximately twice the symbol error rate in a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When the error is due to AWGN, the ratio of double error rate to single error rate can be very high, and the ratio may approach zero at high SNR. To improve error rate, differential detection through maximum-likelihood decoding based on multiple or N symbol observations is considered. If N and SNR are large this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.

  16. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check(LDPC) Codes

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Manabu Hagiwara et al. (2007) presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
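
    The extrapolation step described above can be sketched generically: once simulation has estimated the probability that decoding fails for an error of each weight w, the block error rate at any small physical error probability p follows by weighting those failure rates with the binomial distribution of error weights. The Python sketch below illustrates this idea only; it is not the authors' simulation code, and the failure-rate table is fabricated.

        from math import comb

        def extrapolated_block_error_rate(n, p, failure_rate_by_weight):
            """Sum over error weights w: P(weight w at error probability p) * P(decoder fails | weight w)."""
            return sum(comb(n, w) * p**w * (1 - p)**(n - w) * f
                       for w, f in failure_rate_by_weight.items())

        # Hypothetical simulation results for a length-100 code: decoder failure probability per error weight.
        failures = {3: 0.0, 4: 0.001, 5: 0.01, 6: 0.08, 7: 0.3, 8: 0.7}
        print(extrapolated_block_error_rate(100, 0.001, failures))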

  17. Executive Council lists and general practitioner files

    PubMed Central

    Farmer, R. D. T.; Knox, E. G.; Cross, K. W.; Crombie, D. L.

    1974-01-01

    An investigation of the accuracy of general practitioner and Executive Council files was approached by a comparison of the two. High error rates were found, including both file errors and record errors. On analysis it emerged that file error rates could not be satisfactorily expressed except in a time-dimensioned way, and we were unable to do this within the context of our study. Record error rates and field error rates were expressible as proportions of the number of records on both the lists; 79·2% of all records exhibited non-congruencies and particular information fields had error rates ranging from 0·8% (assignation of sex) to 68·6% (assignation of civil state). Many of the errors, both field errors and record errors, were attributable to delayed updating of mutable information. It is concluded that the simple transfer of Executive Council lists to a computer filing system would not solve all the inaccuracies and would not in itself permit Executive Council registers to be used for any health care applications requiring high accuracy. For this it would be necessary to design and implement a purpose designed health care record system which would include, rather than depend upon, the general practitioner remuneration system. PMID:4816588

  18. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system

    PubMed Central

    Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.

    2015-01-01

    Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702

  19. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    PubMed

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  20. EFFECTS OF CHRONIC IRRADIATION ON EINKORN WHEAT (in Japanese)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsumura, S.; Fujii, T.

    1963-01-01

    An experiment was carried out with seedlings of einkorn wheat (Triticum monococcum flavescens), planted at the stage of 3 to 4 foliage leaves on Feb 23rd at several distances from the source and irradiated with varying dose rates and dosages. The plants were irradiated during their growth and harvested on June 21st. Regular irradiation followed the routine of the gamma field; that is, it was applied 20 hr per day starting from April 1st, but before that time irradiation was occasionally given for measurement of dosage or dose rates and checks of the apparatus. Therefore, total irradiation time from planting to harvest amounted to 1843 hours. Total dosages were calculated as 369 r minimum and 4608 r maximum, intensity being 4 to 50 r/20 hr. Survival rate clearly decreased at the highest dosage, while there was no marked difference between the other five irradiated lots and the nonirradiated one. On the other hand, seed fertility decreased nonlinearly (gradually at higher dosages) with increasing dosage and reached about one-half of that of the control lot at the highest dosage. For comparison, seedlings of the same kind were irradiated with 0.5 and 1 kr of acute gamma rays (1 kr/hr) 16 days after sowing. Survival rate and fertility markedly decreased at 1-kr irradiation. The results were not easily comparable with those of chronic irradiation because acute irradiation was done only in the early seedling stage. Acute 1-kr irradiation showed severe effects, and a similar effect was seen at the highest dosage of chronic irradiation (4607 r). Thus, the plants must be more tolerant to a higher dosage when the radiation intensity is low. (P.C.H.)

  1. Classification based upon gene expression data: bias and precision of error rates.

    PubMed

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
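
    The two-level (nested) external cross-validation recommended above can be illustrated with a short scikit-learn sketch: an inner loop selects the tuning parameter, an outer loop estimates the error rate of the whole selection-plus-fitting procedure, and permuted labels provide the baseline expected from non-informative data. This is a generic illustration on synthetic data, not the authors' R/PAMR code.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

        # Synthetic stand-in for an expression matrix: many predictors, few samples.
        X, y = make_classification(n_samples=80, n_features=500, n_informative=10, random_state=0)

        inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

        # The inner CV chooses the regularization strength; the outer CV scores that whole procedure.
        model = GridSearchCV(LogisticRegression(penalty="l1", solver="liblinear", max_iter=1000),
                             param_grid={"C": [0.01, 0.1, 1.0]}, cv=inner)
        print("nested CV error rate:", 1 - cross_val_score(model, X, y, cv=outer).mean())

        # With permuted labels the error rate should sit near the 0.5 baseline for two balanced classes;
        # a substantially lower value would signal selection or optimization bias.
        y_perm = np.random.default_rng(0).permutation(y)
        print("permuted-label error rate:", 1 - cross_val_score(model, X, y_perm, cv=outer).mean())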

  2. Do Errors on Classroom Reading Tasks Slow Growth in Reading? Technical Report No. 404.

    ERIC Educational Resources Information Center

    Anderson, Richard C.; And Others

    A pervasive finding from research on teaching and classroom learning is that a low rate of error on classroom tasks is associated with large year to year gains in achievement, particularly for reading in the primary grades. The finding of a negative relationship between error rate, especially rate of oral reading errors, and gains in reading…

  3. Estimating genotype error rates from high-coverage next-generation sequence data.

    PubMed

    Wall, Jeffrey D; Tang, Ling Fung; Zerbe, Brandon; Kvale, Mark N; Kwok, Pui-Yan; Schaefer, Catherine; Risch, Neil

    2014-11-01

    Exome and whole-genome sequencing studies are becoming increasingly common, but little is known about the accuracy of the genotype calls made by the commonly used platforms. Here we use replicate high-coverage sequencing of blood and saliva DNA samples from four European-American individuals to estimate lower bounds on the error rates of Complete Genomics and Illumina HiSeq whole-genome and whole-exome sequencing. Error rates for nonreference genotype calls range from 0.1% to 0.6%, depending on the platform and the depth of coverage. Additionally, we found (1) no difference in the error profiles or rates between blood and saliva samples; (2) Complete Genomics sequences had substantially higher error rates than Illumina sequences had; (3) error rates were higher (up to 6%) for rare or unique variants; (4) error rates generally declined with genotype quality (GQ) score, but in a nonlinear fashion for the Illumina data, likely due to loss of specificity of GQ scores greater than 60; and (5) error rates increased with increasing depth of coverage for the Illumina data. These findings, especially (3)-(5), suggest that caution should be taken in interpreting the results of next-generation sequencing-based association studies, and even more so in clinical application of this technology in the absence of validation by other more robust sequencing or genotyping methods. © 2014 Wall et al.; Published by Cold Spring Harbor Laboratory Press.

  4. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  5. Computer calculated dose in paediatric prescribing.

    PubMed

    Kirk, Richard C; Li-Meng Goh, Denise; Packia, Jeya; Min Kam, Huey; Ong, Benjamin K C

    2005-01-01

    Medication errors are an important cause of hospital-based morbidity and mortality. However, only a few medication error studies have been conducted in children. These have mainly quantified errors in the inpatient setting; there is very little data available on paediatric outpatient and emergency department medication errors and none on discharge medication. This deficiency is of concern because medication errors are more common in children and it has been suggested that the risk of an adverse drug event as a consequence of a medication error is higher in children than in adults. The aims of this study were to assess the rate of medication errors in predominantly ambulatory paediatric patients and the effect of computer calculated doses on medication error rates of two commonly prescribed drugs. This was a prospective cohort study performed in a paediatric unit in a university teaching hospital between March 2003 and August 2003. The hospital's existing computer clinical decision support system was modified so that doctors could choose the traditional prescription method or the enhanced method of computer calculated dose when prescribing paracetamol (acetaminophen) or promethazine. All prescriptions issued to children (<16 years of age) at the outpatient clinic, emergency department and at discharge from the inpatient service were analysed. A medication error was defined as to have occurred if there was an underdose (below the agreed value), an overdose (above the agreed value), no frequency of administration specified, no dose given or excessive total daily dose. The medication error rates and the factors influencing medication error rates were determined using SPSS version 12. From March to August 2003, 4281 prescriptions were issued. Seven prescriptions (0.16%) were excluded, hence 4274 prescriptions were analysed. Most prescriptions were issued by paediatricians (including neonatologists and paediatric surgeons) and/or junior doctors. The error rate in the children's emergency department was 15.7%, for outpatients was 21.5% and for discharge medication was 23.6%. Most errors were the result of an underdose (64%; 536/833). The computer calculated dose error rate was 12.6% compared with the traditional prescription error rate of 28.2%. Logistical regression analysis showed that computer calculated dose was an important and independent variable influencing the error rate (adjusted relative risk = 0.436, 95% CI 0.336, 0.520, p < 0.001). Other important independent variables were seniority and paediatric training of the person prescribing and the type of drug prescribed. Medication error, especially underdose, is common in outpatient, emergency department and discharge prescriptions. Computer calculated doses can significantly reduce errors, but other risk factors have to be concurrently addressed to achieve maximum benefit.

  6. Angular rate optimal design for the rotary strapdown inertial navigation system.

    PubMed

    Yu, Fei; Sun, Qian

    2014-04-22

    Because it maintains high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Its core technology, the rotation scheme, has been studied by numerous researchers, and it is well known that the rotating angular rate strongly influences the effectiveness of error modulation. In order to design the optimal rotating angular rate of the RSINS, this paper analyzes in detail the relationship between the rotating angular rate and the velocity error of the RSINS, based on the Laplace transform and its inverse. The analysis shows that the velocity error of the RSINS depends not only on the sensor error but also on the rotating angular rate; to minimize the velocity error, the rotating angular rate should match the sensor error. An optimal design method for the rotating rate of the RSINS is also proposed. Simulation and experimental results verify the validity and superiority of this optimal design method.

  7. Comparison of Meropenem MICs and Susceptibilities for Carbapenemase-Producing Klebsiella pneumoniae Isolates by Various Testing Methods▿

    PubMed Central

    Bulik, Catharine C.; Fauntleroy, Kathy A.; Jenkins, Stephen G.; Abuali, Mayssa; LaBombardi, Vincent J.; Nicolau, David P.; Kuti, Joseph L.

    2010-01-01

    We describe the levels of agreement between broth microdilution, Etest, Vitek 2, Sensititre, and MicroScan methods to accurately define the meropenem MIC and categorical interpretation of susceptibility against carbapenemase-producing Klebsiella pneumoniae (KPC). A total of 46 clinical K. pneumoniae isolates with KPC genotypes, all modified Hodge test and blaKPC positive, collected from two hospitals in NY were included. Results obtained by each method were compared with those from broth microdilution (the reference method), and agreement was assessed based on MICs and Clinical Laboratory Standards Institute (CLSI) interpretative criteria using 2010 susceptibility breakpoints. Based on broth microdilution, 0%, 2.2%, and 97.8% of the KPC isolates were classified as susceptible, intermediate, and resistant to meropenem, respectively. Results from MicroScan demonstrated the most agreement with those from broth microdilution, with 95.6% agreement based on the MIC and 2.2% classified as minor errors, and no major or very major errors. Etest demonstrated 82.6% agreement with broth microdilution MICs, a very major error rate of 2.2%, and a minor error rate of 2.2%. Vitek 2 MIC agreement was 30.4%, with a 23.9% very major error rate and a 39.1% minor error rate. Sensititre demonstrated MIC agreement for 26.1% of isolates, with a 3% very major error rate and a 26.1% minor error rate. Application of FDA breakpoints had little effect on minor error rates but increased very major error rates to 58.7% for Vitek 2 and Sensititre. Meropenem MIC results and categorical interpretations for carbapenemase-producing K. pneumoniae differ by methodology. Confirmation of testing results is encouraged when an accurate MIC is required for antibiotic dosing optimization. PMID:20484603

  8. The effectiveness of the error reporting promoting program on the nursing error incidence rate in Korean operating rooms.

    PubMed

    Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung

    2007-03-01

    The purpose of this study was to develop and evaluate an error reporting promoting program (ERPP) to systematically reduce the incidence rate of nursing errors in the operating room. A non-equivalent control group, non-synchronized design was used. Twenty-six operating room nurses from one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. The Mann-Whitney U test was used to analyze the differences in pre- and post-intervention incidence rates of nursing errors between the two groups. The incidence rate of nursing errors decreased significantly in the experimental group, from 28.4% before the intervention to 15.7% afterwards. By domain, the rate decreased significantly in three domains ("compliance with aseptic technique", "management of documents", "environmental management") in the experimental group, while it decreased in the control group, to which the ordinary error-reporting method was applied. An error-reporting system makes it possible to share errors and learn from them, and the ERPP was effective in reducing errors in recognition-related nursing activities. For more effective error prevention, risk management efforts should be applied across the whole health care system together with this program.

  9. Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection

    PubMed Central

    Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J

    2017-01-01

    Background The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. Objective We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term “validation relaxation.” Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. Methods We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of “required” constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. Results The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. Conclusions A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. PMID:28821474
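
    The error-rate definition (errors committed divided by potential errors) and the logistic-regression trend over days of use can be sketched as follows; the simulated survey records and coefficients are placeholders, not the study's data or code.

      # Hedged sketch: aggregate error rate plus a logistic regression of error
      # probability on days of application use. Data are simulated for illustration.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      days = rng.integers(0, 46, size=2000)              # day of data collection
      p_err = 1 / (1 + np.exp(-(-3.7 - 0.03 * days)))    # assumed declining error probability
      errors = rng.binomial(1, p_err)                     # 1 = detectable error committed

      print("aggregate error rate:", errors.mean())

      X = sm.add_constant(days.astype(float))
      fit = sm.Logit(errors, X).fit(disp=False)
      print("odds ratio per day of use:", np.exp(fit.params[1]))  # < 1 -> errors fall over time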

  10. Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection.

    PubMed

    Kenny, Avi; Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J

    2017-08-18

    The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term "validation relaxation." Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of "required" constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. ©Avi Kenny, Nicholas Gordon, Thomas Griffiths, John D Kraemer, Mark J Siedner. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.08.2017.

  11. Finite Difference Calculation of an Inviscid Transonic Flow over Oscillating Airfoils,

    DTIC Science & Technology

    1980-10-01

    Processing programs for the results (graphs, etc.) were prepared and written for a FACOM computer with array processing. The numbers of meshes used in each calculation are shown collectively in Table I, and figures compare the computed pressure distributions with experiment.

  12. THE REACTION OF METHYLCHLOROSILANES AND SiCl4 WITH N,N'-BIS-(TRIMETHYLSILYL)-ETHYLENEDIAMINE

    DTIC Science & Technology

    By the reaction of (CH3)3SiNH(CH2)2NHSi(CH3)3, or its mono- or di-N-lithium derivatives, respectively, with the three methylchlorosilanes and SiCl4, the new chain compounds ((CH3)3Si)2N(CH2)2NHSi(CH3)3 and ((CH3)3Si)2N(CH2)2N(Si(CH3)3)2 as well as five-membered ring compounds were prepared. (CH3)3SiNH(CH2)2NHSi(CH3)3 reacts with CH3SiCl3 and SiCl4 in a complex reaction. (Author)

  13. Little-known aspect of Theodor Billroth's work: his contribution to musical theory.

    PubMed

    McLaren, N; Thorbeck, R V

    1997-06-01

    Theodor Billroth's contribution to musical theory is discussed and evaluated. Billroth was a close friend of the composer Johannes Brahms and was himself extremely musical. At his death he left the manuscript of a book on musical theory, Wer ist musikalisch?, which had gone through four editions by 1912. In it he attempted to answer important questions on the nature of sound perception, the importance of rhythm as a fundamental element in music, the relation of pitch, tone, and volume, and the ways in which to account for the affective power of music. This article outlines the main concepts, contributions, and opinions offered by Billroth.

  14. Hybrid Fluorosilicones for Aircraft Fuel Tank Sealants. Part 4. Synthesis of Fluorocarbon and Fluorocarbon Ether Hybrid Fluorosilicone Polymers.

    DTIC Science & Technology

    1974-05-01


  15. A Systems Evaluation of the Environmental Impact of the Aubrey Reservoir Project on Elm Fork of the Trinity River in North Texas.

    DTIC Science & Technology

    1972-06-01


  16. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Improved Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.

    2006-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day^-1) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day^-1, with proportionate reductions in latent heating sampling errors.
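
    The database-search step of such a Bayesian retrieval can be sketched generically: each database profile is weighted by how well its simulated brightness temperatures match the observed radiances, and the retrieved rain rate is the weighted mean. Everything below (channel count, error sigma, synthetic database) is an assumption for illustration, not the operational algorithm.

      # Hedged sketch of Bayesian compositing: weight database profiles by the fit
      # of their simulated brightness temperatures (Tb) to the observation, then
      # average the associated rain rates. All values are synthetic.
      import numpy as np

      rng = np.random.default_rng(1)
      n_profiles, n_channels = 5000, 9
      tb_database = rng.normal(250.0, 15.0, size=(n_profiles, n_channels))  # simulated Tb (K)
      rain_database = rng.gamma(shape=1.2, scale=2.0, size=n_profiles)      # rain rate (mm/h)

      tb_observed = tb_database[42] + rng.normal(0.0, 2.0, size=n_channels)  # one observation
      sigma = 3.0   # assumed combined observation/model error (K)

      misfit = np.sum((tb_database - tb_observed) ** 2, axis=1)
      weights = np.exp(-0.5 * misfit / sigma**2)
      weights /= weights.sum()

      print(f"retrieved rain rate: {np.sum(weights * rain_database):.2f} mm/h")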

  17. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  18. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system.

    PubMed

    Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O

    2015-02-01

    To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12,567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care.
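
    Rates such as "1.2 per 1000 (95% CI: 0.6-1.8)" follow from a simple normal-approximation interval for a proportion; the counts below are illustrative choices that roughly reproduce that figure, not values taken from the study database.

      # Hedged sketch: event rate per 1000 with a normal-approximation 95% CI.
      # Counts are illustrative only.
      import math

      def rate_per_1000_ci(events, denominator, z=1.96):
          p = events / denominator
          se = math.sqrt(p * (1 - p) / denominator)
          return 1000 * p, 1000 * (p - z * se), 1000 * (p + z * se)

      rate, lo, hi = rate_per_1000_ci(events=15, denominator=12567)
      print(f"{rate:.1f}/1000 (95% CI: {lo:.1f}-{hi:.1f})")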

  19. Experimental investigation of false positive errors in auditory species occurrence surveys

    USGS Publications Warehouse

    Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.

    2012-01-01

    False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.

  20. Technological Advancements and Error Rates in Radiation Therapy Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
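
    The technique comparison above rests on Fisher's exact test applied to a 2x2 table of erroneous versus error-free fractions; the sketch below shows that step with hypothetical counts that are only loosely scaled to the reported rates.

      # Hedged sketch: Fisher's exact test on a 2x2 table of (errors, error-free
      # fractions) for two delivery techniques. Counts are hypothetical.
      from scipy.stats import fisher_exact

      imrt_errors, imrt_fractions = 30, 100_000
      conv_errors, conv_fractions = 100, 141_546

      table = [[imrt_errors, imrt_fractions - imrt_errors],
               [conv_errors, conv_fractions - conv_errors]]

      odds_ratio, p_value = fisher_exact(table)
      print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4g}")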

  1. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
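
    Polymerase fidelity from sequencing of cloned PCR products is commonly expressed as errors per base per template doubling, dividing the observed mutations by the bases screened and the number of doublings implied by the fold amplification. The sketch below uses invented numbers and is not the study's calculation.

      # Hedged sketch: errors per base per template doubling.
      # rate = mutations / (bases_screened * log2(fold_amplification))
      import math

      def pcr_error_rate(mutations, bases_screened, fold_amplification):
          doublings = math.log2(fold_amplification)   # effective template doublings
          return mutations / (bases_screened * doublings)

      rate = pcr_error_rate(mutations=12, bases_screened=2_000_000,
                            fold_amplification=1_000_000)
      print(f"{rate:.2e} errors per base per doubling")   # ~3e-07 with these inputs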

  2. Implementation of bayesian model averaging on the weather data forecasting applications utilizing open weather map

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.

    2018-02-01

    Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity, and other atmospheric phenomena. Extreme weather due to global warming can lead to drought, flood, hurricanes, and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict weather with a distinctive output, particularly a GIS-based mapping process that reports the current weather status at the coordinates of each region and can forecast seven days ahead. The data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated as the mean square error (MSE). The error value for minimum temperature is 0.28 and for maximum temperature 0.15, while the error values for minimum and maximum humidity are 0.38 and 0.04, respectively. The forecasting error for wind speed is 0.076. The lower the forecasting error rate, the better the accuracy.
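
    The MSE figures quoted above follow the standard definition; a minimal sketch with placeholder forecast and observation arrays:

      # Hedged sketch: mean square error between forecasts and observations.
      # Arrays are placeholders, not the study's data.
      import numpy as np

      observed = np.array([24.1, 23.8, 25.0, 24.6, 23.9])   # e.g. minimum temperature (deg C)
      forecast = np.array([24.5, 23.5, 25.6, 24.0, 24.2])

      mse = np.mean((forecast - observed) ** 2)
      print(f"MSE = {mse:.3f}")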

  3. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
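
    The inflation described can be reproduced in miniature by simulating a skewed trait under the null, regressing it on a rare variant, and counting how often p falls below the nominal level. Sample size, minor allele frequency, and replicate count below are arbitrary choices, not the GAW 19 settings.

      # Hedged sketch: empirical type I error of simple linear regression for a
      # rare variant (MAF = 0.01) tested against a gamma-distributed trait under
      # the null hypothesis of no association.
      import numpy as np
      from scipy.stats import linregress

      rng = np.random.default_rng(2)
      n, maf, reps, alpha = 2000, 0.01, 2000, 0.05

      hits = 0
      for _ in range(reps):
          genotype = rng.binomial(2, maf, size=n)            # additive coding 0/1/2
          trait = rng.gamma(shape=1.0, scale=1.0, size=n)    # skewed trait, independent of genotype
          if genotype.std() == 0:                            # skip the rare monomorphic draw
              continue
          hits += linregress(genotype, trait).pvalue < alpha

      print(f"empirical type I error ~ {hits / reps:.3f} (nominal {alpha})")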

  4. Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Fisher, Brad L.; Wolff, David B.

    2007-01-01

    This paper describes the cubic-spline-based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from Tipping Bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to the TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate effects of time scales and rain event definitions on errors of the rain rate estimation. The comparison between rain rates measured from the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one-minute rain rates are averaged to 4-7 minute or longer time scales, the errors dramatically reduce. The rain event duration is very sensitive to the event definition but the event rain total is rather insensitive, provided that the events with less than 1 millimeter rain totals are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at the 7-minute scale. The radar reflectivity-rain rate (Ze-R) distributions drawn from a large amount of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
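
    The cubic-spline step can be sketched generically: accumulate rainfall at the tip times (a fixed depth per tip), fit a cubic spline to cumulative rainfall versus time, and evaluate its derivative on a one-minute grid as the rain rate. The tip depth and tip times below are illustrative assumptions, not the 2A-56 processing chain.

      # Hedged sketch: one-minute rain rates from tipping-bucket tip times via a
      # cubic spline fitted to cumulative rainfall. Tip depth and times are invented.
      import numpy as np
      from scipy.interpolate import CubicSpline

      tip_times_min = np.array([0.0, 2.1, 3.0, 3.6, 4.1, 4.9, 6.4, 9.0])  # minutes
      tip_depth_mm = 0.254
      cumulative_mm = tip_depth_mm * np.arange(1, len(tip_times_min) + 1)

      spline = CubicSpline(tip_times_min, cumulative_mm)
      rate_mm_per_min = spline.derivative()                 # d(cumulative)/dt

      minutes = np.arange(0.0, 9.0, 1.0)
      rain_rate_mm_per_h = np.clip(60.0 * rate_mm_per_min(minutes), 0.0, None)
      print(np.round(rain_rate_mm_per_h, 2))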

  5. Approximation of Bit Error Rates in Digital Communications

    DTIC Science & Technology

    2007-06-01

    This report investigates the estimation of bit error rates in digital communications, motivated by recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase

  6. Analysis of the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery☆

    PubMed Central

    Arba-Mosquera, Samuel; Aslanides, Ioannis M.

    2012-01-01

    Purpose To analyze the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery. Methods A comprehensive model, which directly considers eye movements, including saccades, vestibular, optokinetic, vergence, and miniature, as well as, eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay have been developed. Results Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than eye-tracker acquisition rates basically duplicate pulse-positioning errors. Laser trigger delays to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than observed decentrations in the clinical settings. There is no single parameter that ‘alone’ minimizes the positioning error. It is the optimal combination of the several parameters that minimizes the error. The results of this analysis are important to understand the limitations of correcting very irregular ablation patterns.

  7. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Subramani, Sujatha Srinivasan

    1990-01-01

    This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
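
    The k-out-of-n comparison can be illustrated with a simple availability calculation: if each of the n machines is up independently with probability p, the probability that at least k are up is a binomial tail. This generic sketch (with an assumed p) is not the thesis's reward model, which was fitted to measured error data.

      # Hedged sketch: probability that at least k of n machines are operational,
      # assuming independent machines each up with probability p.
      from scipy.stats import binom

      def k_out_of_n_availability(k, n, p):
          return binom.sf(k - 1, n, p)   # P(at least k machines up)

      p_up = 0.98                         # assumed per-machine availability
      for k in range(7, 2, -1):
          print(f"{k}-out-of-7: {k_out_of_n_availability(k, 7, p_up):.4f}")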

  8. Angular Rate Optimal Design for the Rotary Strapdown Inertial Navigation System

    PubMed Central

    Yu, Fei; Sun, Qian

    2014-01-01

    Due to the characteristics of high precision for a long duration, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Nowadays, the core technology, the rotating scheme, has been studied by numerous researchers. It is well known that as one of the key technologies, the rotating angular rate seriously influences the effectiveness of the error modulating. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail based on the Laplace transform and the inverse Laplace transform in this paper. The analysis results showed that the velocity error of the RSINS depends on not only the sensor error, but also the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. One optimal design method for the rotating rate of the RSINS was also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS. PMID:24759115

  9. Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.

    PubMed

    Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D

    2016-10-01

    Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  10. Analysis and Compensation of Modulation Angular Rate Error Based on Missile-Borne Rotation Semi-Strapdown Inertial Navigation System.

    PubMed

    Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang

    2018-05-04

    The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technology method called Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into SINS. In fact, the stability of the modulation angular rate is difficult to achieve in a high-speed rotation environment. The changing rotary angular rate has an impact on the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including acceleration-deceleration process, and instability of the angular rate on the navigation accuracy of RSSINS is deduced and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high precision autonomous navigation performance by MIMU when there is no external aid. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions.

  11. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  12. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  13. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  14. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  15. Impact of an antiretroviral stewardship strategy on medication error rates.

    PubMed

    Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E

    2018-05-02

    The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented and included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate ( p < 0.001). In the postintervention group, significantly lower medication error rates were found in both patient admissions containing at least 1 medication error ( p < 0.001) and those with 2 or more errors ( p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention ( p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients including prospective audit by staff pharmacists through use of an antiretroviral medication therapy checklist at the time of order verification decreased error rates. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  16. When do latent class models overstate accuracy for diagnostic and other classifiers in the absence of a gold standard?

    PubMed

    Spencer, Bruce D

    2012-06-01

    Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.

  17. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    PubMed

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  18. Derivation of an analytic expression for the error associated with the noise reduction rating

    NASA Astrophysics Data System (ADS)

    Murphy, William J.

    2005-04-01

    Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.

  19. Errors in laboratory medicine: practical lessons to improve patient safety.

    PubMed

    Howanitz, Peter J

    2005-10-01

    Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Not only has each of the 8 performance measures proven practical, useful, and important for patient care, taken together, they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.

  20. The statistical validity of nursing home survey findings.

    PubMed

    Woolley, Douglas C

    2011-11-01

    The Medicare nursing home survey is a high-stakes process whose findings greatly affect nursing homes, their current and potential residents, and the communities they serve. Therefore, survey findings must achieve high validity. This study looked at the validity of one key assessment made during a nursing home survey: the observation of the rate of errors in administration of medications to residents (med-pass). Statistical analysis of the case under study and of alternative hypothetical cases. A skilled nursing home affiliated with a local medical school. The nursing home administrators and the medical director. Observational study. The probability that state nursing home surveyors make a Type I or Type II error in observing med-pass error rates, based on the current case and on a series of postulated med-pass error rates. In the common situation such as our case, where med-pass errors occur at slightly above a 5% rate after 50 observations, and therefore trigger a citation, the chance that the true rate remains above 5% after a large number of observations is just above 50%. If the true med-pass error rate were as high as 10%, and the survey team wished to achieve 75% accuracy in determining that a citation was appropriate, they would have to make more than 200 med-pass observations. In the more common situation where med pass errors are closer to 5%, the team would have to observe more than 2000 med-passes to achieve even a modest 75% accuracy in their determinations. In settings where error rates are low, large numbers of observations of an activity must be made to reach acceptable validity of estimates for the true rates of errors. In observing key nursing home functions with current methodology, the State Medicare nursing home survey process does not adhere to well-known principles of valid error determination. Alternate approaches in survey methodology are discussed. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. All rights reserved.
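
    The argument above is essentially binomial: with a true med-pass error rate near the 5% citation threshold, a small number of observations gives an unreliable verdict on whether the true rate exceeds the threshold. A sketch with assumed true rates and observation counts:

      # Hedged sketch: probability that a surveyor observing n med-passes sees an
      # error proportion above the 5% citation threshold, for assumed true rates.
      import math
      from scipy.stats import binom

      threshold = 0.05
      for true_rate in (0.04, 0.05, 0.06, 0.10):
          for n in (50, 200, 2000):
              k = math.floor(threshold * n) + 1        # smallest error count exceeding 5%
              p_cite = binom.sf(k - 1, n, true_rate)   # P(observed errors >= k)
              print(f"true rate {true_rate:.0%}, n = {n}: P(observed > 5%) = {p_cite:.2f}")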

  1. How does aging affect the types of error made in a visual short-term memory ‘object-recall’ task?

    PubMed Central

    Sapkota, Raju P.; van der Linde, Ian; Pardhan, Shahina

    2015-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76), and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real-world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits. PMID:25653615

  2. How does aging affect the types of error made in a visual short-term memory 'object-recall' task?

    PubMed

    Sapkota, Raju P; van der Linde, Ian; Pardhan, Shahina

    2014-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76), and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real-world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits.

  3. Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors.

    PubMed

    Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep

    2014-01-01

    Preanalytical errors, occurring along the process from the initial test request to the admission of the specimen to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for rejected samples and their rates in certain test groups in our laboratory. This preliminary study covered the samples rejected over a one-year period and was based on the rates and types of inappropriateness. Test requests and blood samples from the clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation, and erythrocyte sedimentation rate test units were evaluated. The types of inappropriateness evaluated were improperly labelled samples, hemolysed specimens, clotted specimens, insufficient specimen volume, and total request errors. A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group was significantly higher (2.28%) than that of the other test groups (P < 0.001), including an insufficient-specimen-volume error rate of 1.38%. The rejection rates for hemolysis, clotted specimens, and insufficient sample volume were found to be 8%, 24%, and 34%, respectively. Total request errors, particularly unintelligible requests, made up 32% of the total for inpatients. The errors were especially attributable to unintelligible or inappropriate test requests and improperly labelled samples for inpatients, and to blood drawing errors, especially insufficient specimen volume, in the coagulation test group.

  4. Adaptive error detection for HDR/PDR brachytherapy: Guidance for decision making during real-time in vivo point dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk

    Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA’s capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
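
    The matching idea behind the AEDA, choosing the candidate dosimeter position whose calculated dose rates best agree with the measurement and flagging a persistent discrepancy as a likely true error, can be sketched generically as an argmin over a misfit measure. The dose-rate arrays, units, and tolerance below are placeholders, not the published algorithm.

      # Hedged sketch: pick the candidate dosimeter position whose calculated
      # dose-rate sequence best matches the measurement; if even the best match is
      # poor, treat the discrepancy as a potential true error. Values are invented.
      import numpy as np

      measured = np.array([10.2, 14.8, 9.1, 6.3, 4.0])            # measured dose rates per dwell

      candidates = {
          "reconstructed": np.array([12.0, 13.0, 12.5, 8.0, 5.0]),
          "shifted +3 mm": np.array([10.0, 15.0, 9.0, 6.5, 4.1]),
          "shifted -3 mm": np.array([14.0, 11.0, 15.0, 9.5, 6.0]),
      }

      def misfit(calc, meas):
          return float(np.mean((calc - meas) ** 2))

      best = min(candidates, key=lambda name: misfit(candidates[name], measured))
      tolerance = 1.0   # assumed acceptance level for the mean squared discrepancy

      if misfit(candidates[best], measured) <= tolerance:
          print(f"most viable position: {best} (initial alarm likely a false error)")
      else:
          print("no candidate explains the measurement: treat as a potential true error")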

  5. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model, whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  6. 7 CFR 275.23 - Determination of State agency program performance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING... section, the adjusted regressed payment error rate shall be calculated to yield the State agency's payment error rate. The adjusted regressed payment error rate is given by r1″ + r2″. (ii) If FNS determines...

  7. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    PubMed

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-05-01

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (F_ST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of F_ST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of F_ST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.

  8. Improving the quality of cognitive screening assessments: ACEmobile, an iPad-based version of the Addenbrooke's Cognitive Examination-III.

    PubMed

    Newman, Craig G J; Bevins, Adam D; Zajicek, John P; Hodges, John R; Vuillermoz, Emil; Dickenson, Jennifer M; Kelly, Denise S; Brown, Simona; Noad, Rupert F

    2018-01-01

    Ensuring reliable administration and reporting of cognitive screening tests is fundamental to establishing good clinical practice and research. This study captured the rate and type of errors in clinical practice, using the Addenbrooke's Cognitive Examination-III (ACE-III), and then the reduction in error rate using a computerized alternative, the ACEmobile app. In study 1, we evaluated ACE-III assessments completed in National Health Service (NHS) clinics (n = 87) for administrator error. In study 2, ACEmobile and ACE-III were then evaluated for their ability to capture accurate measurement. In study 1, 78% of clinically administered ACE-IIIs were either scored incorrectly or had arithmetical errors. In study 2, error rates seen in the ACE-III were reduced by 85%-93% using ACEmobile. Error rates are ubiquitous in routine clinical use of cognitive screening tests and the ACE-III. ACEmobile provides a framework for supporting reduced administration, scoring, and arithmetical error during cognitive screening.

  9. Documentation of study medication dispensing in a prospective large randomized clinical trial: experiences from the ARISTOTLE Trial.

    PubMed

    Alexander, John H; Levy, Elliott; Lawrence, Jack; Hanna, Michael; Waclawski, Anthony P; Wang, Junyuan; Califf, Robert M; Wallentin, Lars; Granger, Christopher B

    2013-09-01

    In ARISTOTLE, apixaban resulted in a 21% reduction in stroke, a 31% reduction in major bleeding, and an 11% reduction in death. However, approval of apixaban was delayed to investigate a statement in the clinical study report that "7.3% of subjects in the apixaban group and 1.2% of subjects in the warfarin group received, at some point during the study, a container of the wrong type." Rates of study medication dispensing error were characterized through reviews of study medication container tear-off labels in 6,520 participants from randomly selected study sites. The potential effect of dispensing errors on study outcomes was statistically simulated in sensitivity analyses in the overall population. The rate of medication dispensing error resulting in treatment error was 0.04%. Rates of participants receiving at least 1 incorrect container were 1.04% (34/3,273) in the apixaban group and 0.77% (25/3,247) in the warfarin group. Most of the originally reported errors were data entry errors in which the correct medication container was dispensed but the wrong container number was entered into the case report form. Sensitivity simulations in the overall trial population showed no meaningful effect of medication dispensing error on the main efficacy and safety outcomes. Rates of medication dispensing error were low and balanced between treatment groups. The initially reported dispensing error rate was the result of data recording and data management errors and not true medication dispensing errors. These analyses confirm the previously reported results of ARISTOTLE. © 2013.

  10. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

    Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results vary considerably depending on site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.

  11. Prescribing errors during hospital inpatient care: factors influencing identification by pharmacists.

    PubMed

    Tully, Mary P; Buchan, Iain E

    2009-12-01

    To investigate the prevalence of prescribing errors identified by pharmacists in hospital inpatients and the factors influencing error identification rates by pharmacists throughout hospital admission. 880-bed university teaching hospital in North-west England. Data about prescribing errors identified by pharmacists (median 9, range 4-17, pharmacists collecting data per day) when conducting routine work were prospectively recorded on 38 randomly selected days over 18 months. Proportion of new medication orders in which an error was identified; predictors of error identification rate, adjusted for workload and seniority of pharmacist, day of week, type of ward or stage of patient admission. 33,012 new medication orders were reviewed for 5,199 patients; 3,455 errors (in 10.5% of orders) were identified for 2,040 patients (39.2%; median 1, range 1-12). Most were problem orders (1,456, 42.1%) or potentially significant errors (1,748, 50.6%); 197 (5.7%) were potentially serious; 1.6% (n = 54) were potentially severe or fatal. Errors were 41% (CI: 28-56%) more likely to be identified at the patient's admission than at other times, independent of confounders. Workload was the strongest predictor of error identification rates, with 40% (33-46%) fewer errors identified on the busiest days than at other times. Errors identified fell by 1.9% (1.5-2.3%) for every additional chart checked, independent of confounders. Pharmacists routinely identify errors but increasing workload may reduce identification rates. Where resources are limited, they may be better spent on identifying and addressing errors immediately after admission to hospital.

  12. Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    PubMed Central

    Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.

    2014-01-01

    Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
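
    The FDR named above is the expected proportion of false positives among the rejected hypotheses. As a concrete point of reference (this is the classical Benjamini-Hochberg linear step-up rule mentioned in the abstract, not the authors' resampling-based empirical Bayes procedure), the sketch below applies that rule to a vector of p-values; the toy data and variable names are illustrative.

```python
import numpy as np

def benjamini_hochberg(pvalues, q=0.05):
    """Classical BH linear step-up procedure: returns a boolean mask of
    rejected hypotheses, controlling FDR = E[V / max(V + S, 1)] at level q
    (under independence or positive dependence of the test statistics)."""
    p = np.asarray(pvalues)
    m = p.size
    order = np.argsort(p)                      # sort p-values ascending
    thresholds = q * np.arange(1, m + 1) / m   # BH critical values i*q/m
    passed = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])      # largest i with p_(i) <= i*q/m
        reject[order[: k + 1]] = True          # reject the k smallest p-values
    return reject

# Toy usage: 90 true nulls (uniform p-values) and 10 strong signals.
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(size=90), rng.uniform(0, 1e-3, size=10)])
print(benjamini_hochberg(pvals, q=0.05).sum(), "hypotheses rejected")
```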

  13. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    PubMed

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  14. Analysis and Compensation of Modulation Angular Rate Error Based on Missile-Borne Rotation Semi-Strapdown Inertial Navigation System

    PubMed Central

    Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang

    2018-01-01

    The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technique called the Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into the SINS. In practice, the stability of the modulation angular rate is difficult to achieve in a high-speed rotation environment, and the changing rotary angular rate has an impact on the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and the instability of the angular rate, on the navigation accuracy of RSSINS is deduced, and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high precision autonomous navigation performance by MIMU when there is no external aid. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions. PMID:29734707

  15. Accuracy of cited “facts” in medical research articles: A review of study methodology and recalculation of quotation error rate

    PubMed Central

    2017-01-01

    Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or “facts,” are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval). PMID:28910404

  16. Accuracy of cited "facts" in medical research articles: A review of study methodology and recalculation of quotation error rate.

    PubMed

    Mogull, Scott A

    2017-01-01

    Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval).

  17. The Relationship between Occurrence Timing of Dispensing Errors and Subsequent Danger to Patients under the Situation According to the Classification of Drugs by Efficacy.

    PubMed

    Tsuji, Toshikazu; Nagata, Kenichiro; Kawashiri, Takehiro; Yamada, Takaaki; Irisa, Toshihiro; Murakami, Yuko; Kanaya, Akiko; Egashira, Nobuaki; Masuda, Satohiro

    2016-01-01

    There are many reports on various medical institutions' attempts to prevent dispensing errors. However, the relationship between the occurrence timing of dispensing errors and subsequent danger to patients has not been studied with drugs classified by efficacy. Therefore, we analyzed the relationship between position and time in the occurrence of dispensing errors, and we investigated the relationship between their occurrence timing and danger to patients. In this study, dispensing errors and incidents in three categories (drug name errors, drug strength errors, drug count errors) were classified into two groups in terms of drug efficacy (efficacy similarity (-) group, efficacy similarity (+) group) and into three classes in terms of the occurrence timing of dispensing errors (initial phase errors, middle phase errors, final phase errors). The rates of damage progressing from "dispensing errors" to "damage to patients" were then compared as an index of danger between the two groups and among the three classes. The rate of damage in the efficacy similarity (-) group was significantly higher than that in the efficacy similarity (+) group. Furthermore, among the three classes, the rate of damage was highest for initial phase errors and lowest for final phase errors. From the results of this study, it became clear that the earlier dispensing errors occur, the more severe the damage to patients becomes.

  18. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    ERIC Educational Resources Information Center

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  19. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    ERIC Educational Resources Information Center

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  20. The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency

    ERIC Educational Resources Information Center

    Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ

    2012-01-01

    This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 second and 974 third graders. Results found a significant relationship between error rate, oral reading fluency, and reading comprehension performance, and…

  1. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  2. False Positives in Multiple Regression: Unanticipated Consequences of Measurement Error in the Predictor Variables

    ERIC Educational Resources Information Center

    Shear, Benjamin R.; Zumbo, Bruno D.

    2013-01-01

    Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…

  3. Decrease in medical command errors with use of a "standing orders" protocol system.

    PubMed

    Holliman, C J; Wuerz, R C; Meador, S A

    1994-05-01

    The purpose of this study was to determine the physician medical command error rates and paramedic error rates after implementation of a "standing orders" protocol system for medical command. These patient-care error rates were compared with the previously reported rates for a "required call-in" medical command system (Ann Emerg Med 1992; 21(4):347-350). A secondary aim of the study was to determine if the on-scene time interval was increased by the standing orders system. A prospective audit of prehospital advanced life support (ALS) trip sheets was conducted at an urban ALS paramedic service with on-line physician medical command from three local hospitals. All ALS run sheets from the start time of the standing orders system (April 1, 1991) for a 1-year period ending on March 30, 1992 were reviewed as part of an ongoing quality assurance program. Cases were identified as nonjustifiably deviating from regional emergency medical services (EMS) protocols as judged by agreement of three physician reviewers (the same methodology as a previously reported command error study in the same ALS system). Medical command and paramedic errors were identified from the prehospital ALS run sheets and categorized. A total of 2,001 ALS runs were reviewed; 24 physician errors (1.2% of the 1,928 "command" runs) and eight paramedic errors (0.4% of runs) were identified. The physician error rate was decreased from the 2.6% rate in the previous study (P < .0001 by chi-square analysis). The on-scene time interval did not increase with the "standing orders" system. (ABSTRACT TRUNCATED AT 250 WORDS)

  4. Quantifying Data Quality for Clinical Trials Using Electronic Data Capture

    PubMed Central

    Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.

    2008-01-01

    Background Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958

  5. Effect of atmospheric turbulence on the bit error probability of a space to ground near infrared laser communications link using binary pulse position modulation and an avalanche photodiode detector

    NASA Technical Reports Server (NTRS)

    Safren, H. G.

    1987-01-01

    The effect of atmospheric turbulence on the bit error rate of a space-to-ground near infrared laser communications link is investigated for a link using binary pulse position modulation and an avalanche photodiode detector. Formulas are presented for the mean and variance of the bit error rate as a function of signal strength. Because these formulas require numerical integration, they are of limited practical use. Approximate formulas are derived which are easy to compute and sufficiently accurate for system feasibility studies, as shown by numerical comparison with the exact formulas. A very simple formula is derived for the bit error rate as a function of signal strength, which requires only the evaluation of an error function. It is shown by numerical calculations that, for realistic values of the system parameters, the increase in the bit error rate due to turbulence does not exceed about thirty percent for signal strengths of four hundred photons per bit or less. The increase in signal strength required to maintain an error rate of one in 10 million is about one or two tenths of a dB.
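
    The exact and approximate expressions referred to above are not reproduced in this record. As a generic illustration of the kind of error-function-only formula mentioned, the sketch below evaluates the common Gaussian-approximation relation BER = ½·erfc(Q/√2) and solves for the Q-factor giving roughly one error in 10 million bits; the model and parameter names are assumptions for illustration, not the paper's derivation.

```python
from math import erfc, sqrt
from scipy.optimize import brentq

def ber_gaussian(q_factor):
    """Bit error rate under a generic Gaussian signal-plus-noise approximation:
    BER = 0.5 * erfc(Q / sqrt(2)), with Q the mean signal separation divided
    by the noise standard deviation (illustrative model only)."""
    return 0.5 * erfc(q_factor / sqrt(2))

print(ber_gaussian(5.0))   # ~2.9e-7

# Q-factor required for roughly one error in 10 million bits (BER = 1e-7).
print(brentq(lambda q: ber_gaussian(q) - 1e-7, 1.0, 10.0))  # ~5.2
```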

  6. General Aviation Activity and Avionics Survey 1984

    DTIC Science & Technology

    1985-10-01

    [OCR fragment: no abstract text is recoverable apart from a table heading, "TABLE D-1. SDR AIRCRAFT GROUP NAME - FAA MANUFACTURER/MODEL CODES (CONTINUED)", followed by Piper model/code listings.]

  7. Sternbilder und ihre Mythen

    NASA Astrophysics Data System (ADS)

    Fasching, Gerhard

    From the reviews: "... Anyone who lacks the simplest foundations for their philosophical flights of fancy, and who finds their way around the sky about as well as an Amazonian Indian in big-city traffic, should take to heart the constellations and their myths compiled by the Viennese university professor Gerhard Fasching... Guide star charts for the whole year are shown, enabling even an astronomical ignoramus to find their bearings at night. Alongside them, Ovid's star legends are laid out opulently, traditional knowledge from various cultures is cited, and scientific explanatory models are brought together. The modern world view appears here not as the last word in wisdom, but only as the currently accepted image of reality..." #Ulrich Schnabel/Die Zeit#

  8. Overview Of Structural Behavior and Occupant Responses from Crash Test of a Composite Airplane

    NASA Technical Reports Server (NTRS)

    Jones, Lisa E.; Carden, Huey D.

    1995-01-01

    As part of NASA's composite structures crash dynamics research, a general aviation aircraft with composite wing, fuselage and empennage (but with metal subfloor structure) was crash tested at the NASA Langley Research Center Impact Research Facility. The test was conducted to determine composite aircraft structural behavior for crash loading conditions and to provide a baseline for a similar aircraft test with a modified subfloor. Structural integrity and cabin volume were maintained. Lumbar loads for dummy occupants in energy absorbing seats were substantially lower than those in standard aircraft seats; however, loads in the standard seats were much higher than those recorded under similar conditions for an all-metallic aircraft.

  9. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
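
    For context, a standard textbook form of the random coding bound discussed above (notation assumed here, not quoted from the paper) is: for block length N, rate R in nats per channel use, and input distribution Q,

```latex
\bar{P}_{e} \;\le\; e^{-N E_r(R)},
\qquad
E_r(R) \;=\; \max_{0 \le \rho \le 1}\; \max_{Q}\; \bigl[ E_0(\rho, Q) - \rho R \bigr],
\qquad
E_0(\rho, Q) \;=\; -\ln \sum_{y} \Bigl[ \sum_{x} Q(x)\, P(y \mid x)^{1/(1+\rho)} \Bigr]^{1+\rho}
```

    The exponent matches the true error exponent above the critical rate; the abstract's point is that below the second critical rate the looseness comes from the best codes outperforming the ensemble average, not from the ensemble-averaging step itself.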

  10. A prospective audit of a nurse independent prescribing within critical care.

    PubMed

    Carberry, Martin; Connelly, Sarah; Murphy, Jennifer

    2013-05-01

    To determine the prescribing activity of different staff groups within the intensive care unit (ICU) and combined high dependency unit (HDU), namely trainee and consultant medical staff and advanced nurse practitioners in critical care (ANPCC); to determine the number and type of prescription errors; to compare error rates between prescribing groups; and to raise awareness of prescribing activity within critical care. The introduction of government legislation has led to the development of non-medical prescribing roles in acute care. This has facilitated an opportunity for the ANPCC working in critical care to develop a prescribing role. The audit was performed over 7 days (Monday-Sunday), on rolling days over a 7-week period in September and October 2011 in three ICUs. All drug entries made on the ICU prescription by the three groups, trainee medical staff, ANPCCs and consultant anaesthetists, were audited once for errors. Data were collected by reviewing all drug entries for errors, namely patient data, drug dose, concentration, rate and frequency, legibility and prescriber signature. A paper data collection tool was used initially; data were later entered into a Microsoft Access database. A total of 1418 drug entries were audited from 77 patient prescription Cardexes. Error rates were reported as 40 errors in 1418 prescriptions (2·8%): ANPCC errors, n = 2 in 388 prescriptions (0·6%); trainee medical staff errors, n = 33 in 984 (3·4%); consultant errors, n = 5 in 73 (6·8%). The error rates were significantly different for different prescribing groups (p < 0·01). This audit shows that prescribing error rates were low (2·8%). With the lowest error rate, the nurse practitioners were, in terms of errors only, at least as diligent in prescribing as the other groups within this audit. National data are required in order to benchmark independent nurse prescribing practice in critical care. These findings could be used to inform research and role development within critical care. © 2012 The Authors. Nursing in Critical Care © 2012 British Association of Critical Care Nurses.

  11. Separate Medication Preparation Rooms Reduce Interruptions and Medication Errors in the Hospital Setting: A Prospective Observational Study.

    PubMed

    Huckels-Baumgart, Saskia; Baumgart, André; Buschmann, Ute; Schüpfer, Guido; Manser, Tanja

    2016-12-21

    Interruptions and errors during the medication process are common, but published literature shows no evidence supporting whether separate medication rooms are an effective single intervention in reducing interruptions and errors during medication preparation in hospitals. We tested the hypothesis that the rate of interruptions and reported medication errors would decrease as a result of the introduction of separate medication rooms. Our aim was to evaluate the effect of separate medication rooms on interruptions during medication preparation and on self-reported medication error rates. We performed a preintervention and postintervention study using direct structured observation of nurses during medication preparation and daily structured medication error self-reporting of nurses by questionnaires in 2 wards at a major teaching hospital in Switzerland. A volunteer sample of 42 nurses was observed preparing 1498 medications for 366 patients over 17 hours preintervention and postintervention on both wards. During 122 days, nurses completed 694 reporting sheets containing 208 medication errors. After the introduction of the separate medication room, the mean interruption rate decreased significantly from 51.8 to 30 interruptions per hour (P < 0.01), and the interruption-free preparation time increased significantly from 1.4 to 2.5 minutes (P < 0.05). Overall, the mean medication error rate per day was also significantly reduced after implementation of the separate medication room from 1.3 to 0.9 errors per day (P < 0.05). The present study showed the positive effect of a hospital-based intervention; after the introduction of the separate medication room, the interruption and medication error rates decreased significantly.

  12. Evaluation of genomic high-throughput sequencing data generated on Illumina HiSeq and Genome Analyzer systems

    PubMed Central

    2011-01-01

    Background The generation and analysis of high-throughput sequencing data are becoming a major component of many studies in molecular biology and medical research. Illumina's Genome Analyzer (GA) and HiSeq instruments are currently the most widely used sequencing devices. Here, we comprehensively evaluate properties of genomic HiSeq and GAIIx data derived from two plant genomes and one virus, with read lengths of 95 to 150 bases. Results We provide quantifications and evidence for GC bias, error rates, error sequence context, effects of quality filtering, and the reliability of quality values. By combining different filtering criteria we reduced error rates 7-fold at the expense of discarding 12.5% of alignable bases. While overall error rates are low in HiSeq data we observed regions of accumulated wrong base calls. Only 3% of all error positions accounted for 24.7% of all substitution errors. Analyzing the forward and reverse strands separately revealed error rates of up to 18.7%. Insertions and deletions occurred at very low rates on average but increased to up to 2% in homopolymers. A positive correlation between read coverage and GC content was found depending on the GC content range. Conclusions The errors and biases we report have implications for the use and the interpretation of Illumina sequencing data. GAIIx and HiSeq data sets show slightly different error profiles. Quality filtering is essential to minimize downstream analysis artifacts. Supporting previous recommendations, the strand-specificity provides a criterion to distinguish sequencing errors from low abundance polymorphisms. PMID:22067484

  13. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  14. Improving the prediction of going concern of Taiwanese listed companies using a hybrid of LASSO with data mining techniques.

    PubMed

    Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De

    2016-01-01

    The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation in order to identify the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %).
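
    A minimal sketch of a LASSO-plus-classifier pipeline of the kind described above is given below, using an SVM as the downstream model; the feature matrix, injected signal, and the particular scikit-learn components are illustrative assumptions rather than the authors' variables or tuning, and the Type I/II error definitions follow the usual going-concern convention (Type I: a GCD firm classified as non-GCD).

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(172, 30))          # 172 firms (48 GCD + 124 non-GCD), 30 made-up financial ratios
y = np.r_[np.ones(48), np.zeros(124)]   # 1 = going-concern doubt (GCD), 0 = non-GCD
X[y == 1, :5] += 1.5                    # inject signal into 5 ratios so the toy example is non-trivial

# LASSO screens the ratios; the surviving ones feed a classifier (SVM here; an NN or CART slots in the same way).
model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5), threshold=-np.inf, max_features=10),
    SVC(kernel="rbf"),
)
pred = cross_val_predict(model, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))

accuracy = (pred == y).mean()
type_i = ((pred == 0) & (y == 1)).sum() / (y == 1).sum()    # GCD firm called non-GCD
type_ii = ((pred == 1) & (y == 0)).sum() / (y == 0).sum()   # non-GCD firm called GCD
print(f"accuracy={accuracy:.3f}  Type I error={type_i:.3f}  Type II error={type_ii:.3f}")
```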

  15. Prescription errors before and after introduction of electronic medication alert system in a pediatric emergency department.

    PubMed

    Sethuraman, Usha; Kannikeswaran, Nirupama; Murray, Kyle P; Zidan, Marwan A; Chamberlain, James M

    2015-06-01

    Prescription errors occur frequently in pediatric emergency departments (PEDs). The effect of computerized physician order entry (CPOE) with an electronic medication alert system (EMAS) on these is unknown. The objective was to compare prescription error rates before and after introduction of CPOE with EMAS in a PED. The hypothesis was that CPOE with EMAS would significantly reduce the rate and severity of prescription errors in the PED. A prospective comparison of a sample of outpatient medication prescriptions 5 months before and after CPOE with EMAS implementation (7,268 before and 7,292 after) was performed. Error types and rates, alert types and significance, and physician response were noted. Medication errors were deemed significant if there was a potential to cause life-threatening injury, failure of therapy, or an adverse drug effect. There was a significant reduction in the errors per 100 prescriptions (10.4 before vs. 7.3 after; absolute risk reduction = 3.1, 95% confidence interval [CI] = 2.2 to 4.0). Drug dosing error rates decreased from 8 to 5.4 per 100 (absolute risk reduction = 2.6, 95% CI = 1.8 to 3.4). Alerts were generated for 29.6% of prescriptions, with 45% involving drug dose range checking. The sensitivity of CPOE with EMAS in identifying errors in prescriptions was 45.1% (95% CI = 40.8% to 49.6%), and the specificity was 57% (95% CI = 55.6% to 58.5%). Prescribers modified 20% of the dosing alerts, resulting in the error not reaching the patient. Conversely, 11% of true dosing alerts for medication errors were overridden by the prescribers: 88 (11.3%) resulted in medication errors, and 684 (88.6%) were false-positive alerts. A CPOE with EMAS was associated with a decrease in overall prescription errors in our PED. Further system refinements are required to reduce the high false-positive alert rates. © 2015 by the Society for Academic Emergency Medicine.

  16. Performance improvement of robots using a learning control scheme

    NASA Technical Reports Server (NTRS)

    Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.

    1987-01-01

    Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated every cycle of the operation. An off-line learning control scheme is used here to modify the command function which would result in smaller errors in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically considering a second order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors with one attempt. The scheme is then applied to a computer model of a robot system similar to PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
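
    The iteration described above can be sketched as a simple off-line learning loop on a discretised second-order servo model: after each repetition of the task, the command is corrected using that cycle's error and error rate. The plant parameters, gains, and PD-type update law below are illustrative assumptions, not the values from the paper, and convergence is not guaranteed for arbitrary gains.

```python
import numpy as np

# Discretised second-order servo: x'' + 2*zeta*wn*x' + wn^2*x = wn^2*u
dt, T = 0.01, 4.0
t = np.arange(0.0, T, dt)
wn, zeta = 2.0, 0.7
ref = np.sin(1.5 * t)                       # command trajectory to be tracked each cycle

def run_cycle(u):
    """Simulate one repetition of the task for command u(t); return the output y(t)."""
    x = v = 0.0
    y = np.zeros_like(u)
    for i, ui in enumerate(u):
        a = wn**2 * (ui - x) - 2 * zeta * wn * v   # servo acceleration
        v += a * dt
        x += v * dt
        y[i] = x
    return y

# Off-line learning: after each cycle, correct the command with the cycle's
# error and error rate (PD-type update), echoing the error + error-rate scheme.
u = ref.copy()
kp, kd = 0.6, 0.2   # ad hoc learning gains
for cycle in range(10):
    e = ref - run_cycle(u)
    u = u + kp * e + kd * np.gradient(e, dt)
    print(f"cycle {cycle}: RMS tracking error = {np.sqrt(np.mean(e**2)):.4f}")
```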

  17. Improved compliance with the World Health Organization Surgical Safety Checklist is associated with reduced surgical specimen labelling errors.

    PubMed

    Martis, Walston R; Hannam, Jacqueline A; Lee, Tracey; Merry, Alan F; Mitchell, Simon J

    2016-09-09

    A new approach to administering the surgical safety checklist (SSC) at our institution using wall-mounted charts for each SSC domain coupled with migrated leadership among operating room (OR) sub-teams, led to improved compliance with the Sign Out domain. Since surgical specimens are reviewed at Sign Out, we aimed to quantify any related change in surgical specimen labelling errors. Prospectively maintained error logs for surgical specimens sent to pathology were examined for the six months before and after introduction of the new SSC administration paradigm. We recorded errors made in the labelling or completion of the specimen pot and on the specimen laboratory request form. Total error rates were calculated from the number of errors divided by total number of specimens. Rates from the two periods were compared using a chi square test. There were 19 errors in 4,760 specimens (rate 3.99/1,000) and eight errors in 5,065 specimens (rate 1.58/1,000) before and after the change in SSC administration paradigm (P=0.0225). Improved compliance with administering the Sign Out domain of the SSC can reduce surgical specimen errors. This finding provides further evidence that OR teams should optimise compliance with the SSC.

  18. Citation Help in Databases: The More Things Change, the More They Stay the Same

    ERIC Educational Resources Information Center

    Van Ullen, Mary; Kessler, Jane

    2012-01-01

    In 2005, the authors reviewed citation help in databases and found an error rate of 4.4 errors per citation. This article describes a follow-up study that revealed a modest improvement in the error rate to 3.4 errors per citation, still unacceptably high. The most problematic area was retrieval statements. The authors conclude that librarians…

  19. Mimicking Aphasic Semantic Errors in Normal Speech Production: Evidence from a Novel Experimental Paradigm

    ERIC Educational Resources Information Center

    Hodgson, Catherine; Lambon Ralph, Matthew A.

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…

  20. Physical fault tolerance of nanoelectronics.

    PubMed

    Szkopek, Thomas; Roychowdhury, Vwani P; Antoniadis, Dimitri A; Damoulakis, John N

    2011-04-29

    The error rate in complementary transistor circuits is suppressed exponentially in electron number, arising from an intrinsic physical implementation of fault-tolerant error correction. Contrariwise, explicit assembly of gates into the most efficient known fault-tolerant architecture is characterized by a subexponential suppression of error rate with electron number, and incurs significant overhead in wiring and complexity. We conclude that it is more efficient to prevent logical errors with physical fault tolerance than to correct logical errors with fault-tolerant architecture.

  1. Comparison of Agar Dilution, Disk Diffusion, MicroScan, and Vitek Antimicrobial Susceptibility Testing Methods to Broth Microdilution for Detection of Fluoroquinolone-Resistant Isolates of the Family Enterobacteriaceae

    PubMed Central

    Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.

    1999-01-01

    Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809

  2. Quantizing and sampling considerations in digital phased-locked loops

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
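
    The uniform white-noise quantizer model referred to above leads to the familiar error variance of Δ²/12 for step size Δ, which can be folded into an effective SNR. The sketch below (parameter values are illustrative, not the paper's) checks that prediction numerically.

```python
import numpy as np

def effective_snr_db(signal_var, n_bits, full_scale):
    """Effective SNR when a signal of variance signal_var is quantized by a
    uniform n_bits quantizer spanning [-full_scale, +full_scale], using the
    standard uniform-white-noise model: error variance = step**2 / 12."""
    step = 2 * full_scale / 2**n_bits
    quant_noise_var = step**2 / 12
    return 10 * np.log10(signal_var / quant_noise_var)

# Check the model empirically for a 4-bit quantizer and a Gaussian input signal.
rng = np.random.default_rng(1)
x = rng.normal(scale=0.5, size=200_000)
fs, bits = 2.0, 4
step = 2 * fs / 2**bits
xq = np.clip(np.round(x / step) * step, -fs, fs)     # mid-tread uniform quantizer
err = xq - x
print("predicted error variance:", step**2 / 12, " measured:", err.var())
print("effective SNR (dB):", effective_snr_db(x.var(), bits, fs))
```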

  3. Organizational safety culture and medical error reporting by Israeli nurses.

    PubMed

    Kagan, Ilya; Barnoy, Sivia

    2013-09-01

    To investigate the association between patient safety culture (PSC) and the incidence and reporting rate of medical errors by Israeli nurses. Self-administered structured questionnaires were distributed to a convenience sample of 247 registered nurses enrolled in training programs at Tel Aviv University (response rate = 91%). The questionnaire's three sections examined the incidence of medication mistakes in clinical practice, the reporting rate for these errors, and the participants' views and perceptions of the safety culture in their workplace at three levels (organizational, departmental, and individual performance). Pearson correlation coefficients, t tests, and multiple regression analysis were used to analyze the data. Most nurses encountered medical errors from a daily to a weekly basis. Six percent of the sample never reported their own errors, while half reported their own errors "rarely or sometimes." The level of PSC was positively and significantly correlated with the error reporting rate. PSC, place of birth, error incidence, and not having an academic nursing degree were significant predictors of error reporting, together explaining 28% of variance. This study confirms the influence of an organizational safety climate on readiness to report errors. Senior healthcare executives and managers can make a major impact on safety culture development by creating and promoting a vision and strategy for quality and safety and fostering their employees' motivation to implement improvement programs at the departmental and individual level. A positive, carefully designed organizational safety culture can encourage error reporting by staff and so improve patient safety. © 2013 Sigma Theta Tau International.

  4. Software for Quantifying and Simulating Microsatellite Genotyping Error

    PubMed Central

    Johnson, Paul C.D.; Haydon, Daniel T.

    2007-01-01

    Microsatellite genetic marker data are exploited in a variety of fields, including forensics, gene mapping, kinship inference and population genetics. In all of these fields, inference can be thwarted by failure to quantify and account for data errors, and kinship inference in particular can benefit from separating errors into two distinct classes: allelic dropout and false alleles. Pedant is MS Windows software for estimating locus-specific maximum likelihood rates of these two classes of error. Estimation is based on comparison of duplicate error-prone genotypes: neither reference genotypes nor pedigree data are required. Other functions include: plotting of error rate estimates and confidence intervals; simulations for performing power analysis and for testing the robustness of error rate estimates to violation of the underlying assumptions; and estimation of expected heterozygosity, which is a required input. The program, documentation and source code are available from http://www.stats.gla.ac.uk/~paulj/pedant.html. PMID:20066126

  5. Task errors by emergency physicians are associated with interruptions, multitasking, fatigue and working memory capacity: a prospective, direct observation study.

    PubMed

    Westbrook, Johanna I; Raban, Magdalena Z; Walter, Scott R; Douglas, Heather

    2018-01-09

    Interruptions and multitasking have been demonstrated in experimental studies to reduce individuals' task performance. These behaviours are frequently used by clinicians in high-workload, dynamic clinical environments, yet their effects have rarely been studied. To assess the relative contributions of interruptions and multitasking by emergency physicians to prescribing errors. 36 emergency physicians were shadowed over 120 hours. All tasks, interruptions and instances of multitasking were recorded. Physicians' working memory capacity (WMC) and preference for multitasking were assessed using the Operation Span Task (OSPAN) and Inventory of Polychronic Values. Following observation, physicians were asked about their sleep in the previous 24 hours. Prescribing errors were used as a measure of task performance. We performed multivariate analysis of prescribing error rates to determine associations with interruptions and multitasking, also considering physician seniority, age, psychometric measures, workload and sleep. Physicians experienced 7.9 interruptions/hour. 28 clinicians were observed prescribing 239 medication orders which contained 208 prescribing errors. While prescribing, clinicians were interrupted 9.4 times/hour. Error rates increased significantly if physicians were interrupted (rate ratio (RR) 2.82; 95% CI 1.23 to 6.49) or multitasked (RR 1.86; 95% CI 1.35 to 2.56) while prescribing. Having below-average sleep showed a >15-fold increase in clinical error rate (RR 16.44; 95% CI 4.84 to 55.81). WMC was protective against errors; for every 10-point increase on the 75-point OSPAN, a 19% decrease in prescribing errors was observed. There was no effect of polychronicity, workload, physician gender or above-average sleep on error rates. Interruptions, multitasking and poor sleep were associated with significantly increased rates of prescribing errors among emergency physicians. WMC mitigated the negative influence of these factors to an extent. These results confirm experimental findings in other fields and raise questions about the acceptability of the high rates of multitasking and interruption in clinical environments. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  6. Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers

    NASA Technical Reports Server (NTRS)

    Ha, Eunho; North, Gerald R.

    1995-01-01

    Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
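
    The mechanism behind the beam-filling bias, a nonlinear brightness-temperature/rain-rate relation that does not commute with footprint averaging, can be reproduced in a toy Monte Carlo experiment. The saturating T_b(R) curve and the mixed-lognormal field parameters below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)

def tb_of_rain(rain):
    """Toy saturating brightness-temperature vs rain-rate relation (illustrative only)."""
    return 150.0 + 130.0 * (1.0 - np.exp(-0.25 * rain))

def rain_of_tb(tb):
    """Inverse of the toy relation, as a retrieval would apply it to the footprint-mean Tb."""
    return -4.0 * np.log(1.0 - (tb - 150.0) / 130.0)

# Mixed lognormal rain field inside one footprint: 70% no rain, 30% lognormal rain.
n_pixels, n_footprints = 400, 5_000
raining = rng.random((n_footprints, n_pixels)) < 0.3
rain = np.where(raining,
                rng.lognormal(mean=1.0, sigma=1.0, size=(n_footprints, n_pixels)),
                0.0)

true_mean_rain = rain.mean(axis=1)                     # area-average rain rate per footprint
retrieved = rain_of_tb(tb_of_rain(rain).mean(axis=1))  # retrieval from the footprint-mean Tb

bias = (retrieved - true_mean_rain).mean()
print(f"mean true rain rate      : {true_mean_rain.mean():.3f} mm/h")
print(f"mean retrieved rain rate : {retrieved.mean():.3f} mm/h")
print(f"beam-filling bias        : {bias:.3f} mm/h (negative = underestimate)")
```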

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), the lowest error rates were found with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
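
    For orientation, the per-base, per-doubling error rate typically quoted for PCR enzymes is obtained by dividing the observed mutation frequency by the number of template doublings. The sketch below shows that generic arithmetic with invented numbers; it is a common convention, not necessarily the exact calculation used in this study.

    ```python
    import math

    # Hypothetical inputs for one polymerase (illustrative only).
    mutations_observed = 42          # mutations found across all sequenced clones
    bases_sequenced = 1_200_000      # total target bases read across clones
    fold_amplification = 1.0e6       # template amplification achieved during PCR

    doublings = math.log2(fold_amplification)        # ~20 effective doublings
    mutation_frequency = mutations_observed / bases_sequenced
    error_rate = mutation_frequency / doublings      # errors per base per doubling

    print(f"doublings         : {doublings:.1f}")
    print(f"mutation frequency: {mutation_frequency:.2e} per base")
    print(f"error rate        : {error_rate:.2e} per base per doubling")
    ```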

  8. Decision support system for determining the contact lens for refractive errors patients with classification ID3

    NASA Astrophysics Data System (ADS)

    Situmorang, B. H.; Setiawan, M. P.; Tosida, E. T.

    2017-01-01

    Refractive errors are abnormalities in the refraction of light such that images do not focus precisely on the retina, resulting in blurred vision [1]. Refractive errors require the patient to wear glasses or contact lenses so that eyesight returns to normal. The appropriate glasses or contact lenses differ from person to person, influenced by patient age, the amount of tear production, vision prescription, and astigmatism. Because the eye is an organ that is vital for sight, accuracy in determining the glasses or contact lenses to be used is required. This research aims to develop a decision support system that can produce output on the right contact lenses for refractive error patients with a value of 100% accuracy. The Iterative Dichotomiser 3 (ID3) classification method generates gain and entropy values for attributes that include the sample data code, patient age, astigmatism, the ratio of tear production, vision prescription, and the class, which together determine the resulting decision tree. The eye specialist evaluation of the training data gave an accuracy rate of 96.7% and an error rate of 3.3%; the test using a confusion matrix gave an accuracy rate of 96.1% and an error rate of 3.1%; and the testing data gave an accuracy rate of 100% and an error rate of 0%.
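
    The gain and entropy calculation at the heart of ID3 can be sketched as follows. This is a generic illustration with invented attribute values and class labels, not the paper's contact-lens dataset or code.

    ```python
    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy of a list of class labels."""
        total = len(labels)
        return -sum((c / total) * math.log2(c / total)
                    for c in Counter(labels).values())

    def information_gain(rows, labels, attr_index):
        """Gain from splitting (rows, labels) on the attribute at attr_index."""
        base = entropy(labels)
        by_value = {}
        for row, label in zip(rows, labels):
            by_value.setdefault(row[attr_index], []).append(label)
        remainder = sum(len(subset) / len(labels) * entropy(subset)
                        for subset in by_value.values())
        return base - remainder

    # Toy data: [age_group, tear_production], class = lens type (invented).
    rows = [["young", "normal"], ["young", "reduced"],
            ["senior", "normal"], ["senior", "reduced"]]
    labels = ["soft", "none", "hard", "none"]

    for i, name in enumerate(["age_group", "tear_production"]):
        print(name, round(information_gain(rows, labels, i), 3))
    # ID3 would split first on the attribute with the larger gain.
    ```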

  9. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting.

    PubMed

    Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia

    2014-11-01

    Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
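
    For readers unfamiliar with the statistic used above, the pre/post comparison for syringe-volume verification (16 of 18 vs 11 of 19 nurses in error) is a 2x2 Fisher's exact test. The sketch below simply re-runs that comparison on the published counts; it is not the study's analysis code, and small differences from the reported p value may arise from one- versus two-sided choices.

    ```python
    from scipy.stats import fisher_exact

    # Syringe-volume verification errors, counts taken from the abstract:
    # preintervention 16 of 18 nurses erred, postintervention 11 of 19.
    table = [[16, 18 - 16],   # [errors, no errors] preintervention
             [11, 19 - 11]]   # [errors, no errors] postintervention

    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")  # abstract reports p=0.038
    ```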

  10. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting

    PubMed Central

    Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia

    2014-01-01

    Background Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. Objective The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. Methods The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Results Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Conclusions Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. PMID:24906806

  11. TECHNICAL ADVANCES: Effects of genotyping protocols on success and errors in identifying individual river otters (Lontra canadensis) from their faeces.

    PubMed

    Hansen, Heidi; Ben-David, Merav; McDonald, David B

    2008-03-01

    In noninvasive genetic sampling, when genotyping error rates are high and recapture rates are low, misidentification of individuals can lead to overestimation of population size. Thus, estimating genotyping errors is imperative. Nonetheless, conducting multiple polymerase chain reactions (PCRs) at multiple loci is time-consuming and costly. To address the controversy regarding the minimum number of PCRs required for obtaining a consensus genotype, we compared the performance of two genotyping protocols (multiple-tubes and 'comparative method') with respect to genotyping success and error rates. Our results from 48 faecal samples of river otters (Lontra canadensis) collected in Wyoming in 2003, and from blood samples of five captive river otters amplified with four different primers, suggest that use of the comparative genotyping protocol can minimize the number of PCRs per locus. For all but five samples at one locus, the same consensus genotypes were reached with fewer PCRs and with reduced error rates with this protocol compared to the multiple-tubes method. This finding is reassuring because genotyping errors can occur at relatively high rates even in tissues such as blood and hair. In addition, we found that loci that amplify readily and yield consensus genotypes may still exhibit high error rates (7-32%) and that amplification with different primers resulted in different types and rates of error. Thus, assigning a genotype based on a single PCR for several loci could result in misidentification of individuals. We recommend that programs designed to statistically assign consensus genotypes should be modified to allow the different treatment of heterozygotes and homozygotes intrinsic to the comparative method. © 2007 The Authors.

  12. National suicide rates a century after Durkheim: do we know enough to estimate error?

    PubMed

    Claassen, Cynthia A; Yip, Paul S; Corcoran, Paul; Bossarte, Robert M; Lawrence, Bruce A; Currier, Glenn W

    2010-06-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the most widely used population-level suicide metric today. After reviewing the unique sources of bias incurred during stages of suicide data collection and concatenation, we propose a model designed to uniformly estimate error in future studies. A standardized method of error estimation uniformly applied to mortality data could produce data capable of promoting high quality analyses of cross-national research questions.

  13. Does McRuer's Law Hold for Heart Rate Control via Biofeedback Display?

    NASA Technical Reports Server (NTRS)

    Courter, B. J.; Jex, H. R.

    1984-01-01

    Some persons can control their pulse rate with the aid of a biofeedback display. If the biofeedback display is modified to show the error between a command pulse-rate and the measured rate, a compensatory (error correcting) heart rate tracking control loop can be created. The dynamic response characteristics of this control loop when subjected to step and quasi-random disturbances were measured. The control loop includes a beat-to-beat cardiotachometer differenced with a forcing function from a quasi-random input generator; the resulting pulse-rate error is displayed as feedback. The subject acts to null the displayed pulse-rate error, thereby closing a compensatory control loop. McRuer's Law should hold for this case. A few subjects already skilled in voluntary pulse-rate control were tested for heart-rate control response. Control-law properties are derived, such as crossover frequency, stability margins, and closed-loop bandwidth. These are evaluated for a range of forcing functions and for step as well as random disturbances.
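
    McRuer's "law" here is the crossover model: near the gain-crossover frequency the combined operator-plus-plant open-loop dynamics approximate omega_c * exp(-tau*s) / s. A minimal sketch of how the loop properties named above follow from that model, using assumed, illustrative values for the crossover frequency and effective delay (not figures from this study):

    ```python
    import math

    omega_c = 2.0   # assumed crossover frequency, rad/s (illustrative)
    tau = 0.3       # assumed effective time delay, s (illustrative)

    # Crossover model: Y_OL(jw) = omega_c * exp(-j*w*tau) / (j*w)
    # |Y_OL| = omega_c / w, so unity gain occurs at w = omega_c.
    phase_at_crossover_deg = -90.0 - math.degrees(omega_c * tau)
    phase_margin_deg = 180.0 + phase_at_crossover_deg

    print(f"crossover frequency : {omega_c:.2f} rad/s")
    print(f"phase margin        : {phase_margin_deg:.1f} deg")
    ```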

  14. Online automatic tuning and control for fed-batch cultivation

    PubMed Central

    van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.

    2007-01-01

    Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554

  15. Total Dose Effects on Error Rates in Linear Bipolar Systems

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent

    2007-01-01

    The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.

  16. Performance analysis of a cascaded coding scheme with interleaved outer code

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A cascaded coding scheme for a random error channel with a bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1*l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
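
    To make the inner-code analysis concrete, the sketch below computes the probability that a t-error-correcting block code of length n sees more than t bit errors in a block on a memoryless channel with bit-error rate p. This is the generic binomial calculation, with an invented example code, not the paper's exact derivation.

    ```python
    from math import comb

    def prob_block_exceeds_t(n, t, p):
        """P(more than t bit errors in an n-bit block), i.i.d. bit-error rate p."""
        p_within = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1))
        return 1.0 - p_within

    # Illustrative inner code: block length n = 63, corrects up to t = 2 errors.
    for ber in (1e-1, 1e-2, 1e-3):
        print(f"BER {ber:.0e}: P(block exceeds t) = {prob_block_exceeds_t(63, 2, ber):.3e}")
    ```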

  17. The effect of speaking rate on serial-order sound-level errors in normal healthy controls and persons with aphasia.

    PubMed

    Fossett, Tepanta R D; McNeil, Malcolm R; Pratt, Sheila R; Tompkins, Connie A; Shuster, Linda I

    Although many speech errors can be generated at either a linguistic or motoric level of production, phonetically well-formed sound-level serial-order errors are generally assumed to result from disruption of phonologic encoding (PE) processes. An influential model of PE (Dell, 1986; Dell, Burger & Svec, 1997) predicts that speaking rate should affect the relative proportion of these serial-order sound errors (anticipations, perseverations, exchanges). These predictions have been extended to, and have special relevance for, persons with aphasia (PWA) because of the increased frequency with which speech errors occur and because their localization within the functional linguistic architecture may help in diagnosis and treatment. Supporting evidence regarding the effect of speaking rate on phonological encoding has been provided by studies using young normal language (NL) speakers and computer simulations. Limited data exist for older NL users and no group data exist for PWA. This study tested the phonologic encoding properties of Dell's model of speech production (Dell, 1986; Dell et al., 1997), which predicts that increasing speaking rate affects the relative proportion of serial-order sound errors (i.e., anticipations, perseverations, and exchanges). The effects of speech rate on the anticipation/exchange (AE) and anticipation/perseveration (AP) error ratios and on vocal reaction time (VRT) were examined in 16 normal healthy controls (NHC) and 16 PWA without concomitant motor speech disorders. The participants were recorded performing a phonologically challenging (tongue twister) speech production task at their typical and two faster speaking rates. A significant effect of increased rate was obtained for the AP but not the AE ratio. Significant effects of group and rate were obtained for VRT. Although the significant effect of rate for the AP ratio provided evidence that changes in speaking rate did affect PE, the results failed to support the model-derived predictions regarding the direction of change for error type proportions. The current findings argued for an alternative concept of the role of activation and decay in influencing types of serial-order sound errors. Rather than a slow activation decay rate (Dell, 1986), the results of the current study were more compatible with an alternative explanation of rapid activation decay or slow build-up of residual activation.

  18. Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Wolff, David B.

    2009-01-01

    Ground-validation (GV) radar-rain products are often utilized for validation of the Tropical Rainfall Measuring Mission (TRMM) space-based rain estimates, and hence, quantitative evaluation of the GV radar-rain product error characteristics is vital. This study uses quality-controlled gauge data to compare with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences of concurrent radar-gauge rain rates exist at various time scales ranging from 5 min to 1 day, despite lower overall long-term bias. However, the differences between the radar area-averaged rain rates and gauge point rain rates cannot be explained as due to radar error only. The error variance separation method is adapted to partition the variance of radar-gauge differences into the gauge area-point error variance and radar rain estimation error variance. The results provide relatively reliable quantitative uncertainty evaluation of TRMM GV radar rain estimates at various time scales, and are helpful to better understand the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products to validate versatile space-based rain estimates from TRMM, as well as the proposed Global Precipitation Measurement, and other satellites.
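
    The error variance separation rests on the assumption that the radar estimation error and the gauge area-point (sampling) error are uncorrelated, so the variance of the radar-gauge difference splits into the sum of the two error variances. A minimal numerical sketch of that partition, with invented variance values (in practice the area-point term is estimated from the gauge spatial-correlation structure):

    ```python
    # Var(radar - gauge) = Var(radar estimation error) + Var(area-point error),
    # assuming the two error sources are uncorrelated.
    var_radar_minus_gauge = 9.0   # (mm/h)^2, from matched radar/gauge pairs (invented)
    var_area_point = 3.5          # (mm/h)^2, from gauge spatial correlation (invented)

    var_radar_error = var_radar_minus_gauge - var_area_point
    print(f"radar estimation error variance ~ {var_radar_error:.1f} (mm/h)^2")
    # A negative result would indicate the independence assumption is violated
    # or the area-point variance is overestimated.
    ```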

  19. Validation of prostate-specific antigen laboratory values recorded in Surveillance, Epidemiology, and End Results registries.

    PubMed

    Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C

    2017-02-15

    Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potential aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstract or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely due to ongoing and rigorous quality control and visual editing processes by the central registries. The SEER program currently is reviewing and correcting PSA values back to 2004 and will re-release these data in the public use research file. Cancer 2017;123:697-703. © 2016 American Cancer Society. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.

  20. Errors Affect Hypothetical Intertemporal Food Choice in Women

    PubMed Central

    Sellitto, Manuela; di Pellegrino, Giuseppe

    2014-01-01

    Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbee, D; McCarthy, A; Galavis, P

    Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting in decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruption. This work presents a process control created using the Eclipse Scripting API (ESAPI) enabling dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# to check the errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g. minimum EDW MUs), in addition to ensuring that policies and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The numbers of potential failure modes the PlanCheck script is currently capable of checking are, by category: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and revised/retired plans in the Aria database were 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse scripting API enabled plan checks to occur within the planning system, resulting in a reduction in error rates and improved efficiency. Future work includes: initiating a full FMEA for the planning workflow, extending categories to include additional checks outside of ESAPI via Aria database queries, and eventual automated plan checks.

  2. Bit-error rate for free-space adaptive optics laser communications.

    PubMed

    Tyson, Robert K

    2002-04-01

    An analysis of adaptive optics compensation for atmospheric-turbulence-induced scintillation is presented with the figure of merit being the laser communications bit-error rate. The formulation covers weak, moderate, and strong turbulence; on-off keying; and amplitude-shift keying, over horizontal propagation paths or on a ground-to-space uplink or downlink. The theory shows that under some circumstances the bit-error rate can be improved by a few orders of magnitude with the addition of adaptive optics to compensate for the scintillation. Low-order compensation (less than 40 Zernike modes) appears to be feasible as well as beneficial for reducing the bit-error rate and increasing the throughput of the communication link.

  3. Transcriptional fidelities of human mitochondrial POLRMT, yeast mitochondrial Rpo41, and phage T7 single-subunit RNA polymerases.

    PubMed

    Sultana, Shemaila; Solotchi, Mihai; Ramachandran, Aparna; Patel, Smita S

    2017-11-03

    Single-subunit RNA polymerases (RNAPs) are present in phage T7 and in mitochondria of all eukaryotes. This RNAP class plays important roles in biotechnology and cellular energy production, but we know little about its fidelity and error rates. Herein, we report the error rates of three single-subunit RNAPs measured from the catalytic efficiencies of correct and all possible incorrect nucleotides. The average error rates of T7 RNAP (2 × 10^-6), yeast mitochondrial Rpo41 (6 × 10^-6), and human mitochondrial POLRMT (RNA polymerase mitochondrial) (2 × 10^-5) indicate high accuracy/fidelity of RNA synthesis resembling those of replicative DNA polymerases. All three RNAPs exhibit a distinctly high propensity for GTP misincorporation opposite dT, predicting frequent A→G errors in RNA with rates of ∼10^-4. The A→C, G→A, A→U, C→U, G→U, U→C, and U→G errors, mostly due to pyrimidine-purine mismatches, were relatively frequent (10^-5 to 10^-6), whereas C→G, U→A, G→C, and C→A errors from purine-purine and pyrimidine-pyrimidine mismatches were rare (10^-7 to 10^-10). POLRMT also shows a high C→A error rate on 8-oxo-dG templates (∼10^-4). Strikingly, POLRMT shows a high mutagenic bypass rate, which is exacerbated by TEFM (transcription elongation factor mitochondrial). The lifetime of POLRMT on terminally mismatched elongation substrate is increased in the presence of TEFM, which allows POLRMT to efficiently bypass the error and continue with transcription. This investigation of nucleotide selectivity on normal and oxidatively damaged DNA by three single-subunit RNAPs provides the basic information to understand the error rates in mitochondria and, in the case of T7 RNAP, to assess the quality of in vitro transcribed RNAs. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
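
    The error rates above come from comparing catalytic efficiencies (kcat/Km) for correct versus incorrect nucleotide incorporation; a common way to express the misincorporation frequency at a single template position is the ratio of the two specificity constants. The kinetic values below are invented, purely to show the arithmetic, and are not data from the paper.

    ```python
    # Misincorporation frequency for one template position:
    # f_error ~ (kcat/Km)_incorrect / [(kcat/Km)_correct + (kcat/Km)_incorrect]
    kcat_km_correct = 5.0e5     # M^-1 s^-1, correct NTP (invented value)
    kcat_km_incorrect = 1.2     # M^-1 s^-1, mismatched NTP (invented value)

    f_error = kcat_km_incorrect / (kcat_km_correct + kcat_km_incorrect)
    print(f"misincorporation frequency ~ {f_error:.1e}")
    ```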

  4. A comparison of medication administration errors from original medication packaging and multi-compartment compliance aids in care homes: A prospective observational study.

    PubMed

    Gilmartin-Thomas, Julia Fiona-Maree; Smith, Felicity; Wolfe, Rory; Jani, Yogini

    2017-07-01

    No published study has been specifically designed to compare medication administration errors between original medication packaging and multi-compartment compliance aids in care homes, using direct observation. Compare the effect of original medication packaging and multi-compartment compliance aids on medication administration accuracy. Prospective observational. Ten Greater London care homes. Nurses and carers administering medications. Between October 2014 and June 2015, a pharmacist researcher directly observed solid, orally administered medications in tablet or capsule form at ten purposively sampled care homes (five only used original medication packaging and five used both multi-compartment compliance aids and original medication packaging). The medication administration error rate was calculated as the number of observed doses administered (or omitted) in error according to medication administration records, compared to the opportunities for error (total number of observed doses plus omitted doses). Over 108.4h, 41 different staff (35 nurses, 6 carers) were observed to administer medications to 823 residents during 90 medication administration rounds. A total of 2452 medication doses were observed (1385 from original medication packaging, 1067 from multi-compartment compliance aids). One hundred and seventy eight medication administration errors were identified from 2493 opportunities for error (7.1% overall medication administration error rate). A greater medication administration error rate was seen for original medication packaging than multi-compartment compliance aids (9.3% and 3.1% respectively, risk ratio (RR)=3.9, 95% confidence interval (CI) 2.4 to 6.1, p<0.001). Similar differences existed when comparing medication administration error rates between original medication packaging (from original medication packaging-only care homes) and multi-compartment compliance aids (RR=2.3, 95%CI 1.1 to 4.9, p=0.03), and between original medication packaging and multi-compartment compliance aids within care homes that used a combination of both medication administration systems (RR=4.3, 95%CI 2.7 to 6.8, p<0.001). A significant difference in error rate was not observed between use of a single or combination medication administration system (p=0.44). The significant difference in, and high overall, medication administration error rate between original medication packaging and multi-compartment compliance aids supports the use of the latter in care homes, as well as local investigation of tablet and capsule impact on medication administration errors and staff training to prevent errors occurring. As a significant difference in error rate was not observed between use of a single or combination medication administration system, common practice of using both multi-compartment compliance aids (for most medications) and original packaging (for medications with stability issues) is supported. Copyright © 2017 Elsevier Ltd. All rights reserved.
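
    The overall error rate follows directly from the reported counts (178 errors in 2493 opportunities). The sketch below also shows how an unadjusted risk ratio with a log-scale 95% CI is computed from raw counts; the per-arm counts used here are invented stand-ins, and the result is not expected to match the study's reported RR, which comes from its own (likely adjusted) analysis.

    ```python
    import math

    def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
        """Unadjusted risk ratio of group A vs group B with a log-scale 95% CI."""
        rr = (events_a / n_a) / (events_b / n_b)
        se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
        return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

    print(f"overall MAE rate = {178 / 2493:.1%}")   # counts from the abstract

    # Invented per-arm opportunity counts, chosen only to mirror ~9.3% vs ~3.1%.
    rr, lo, hi = risk_ratio_ci(events_a=130, n_a=1400, events_b=34, n_b=1093)
    print(f"unadjusted RR = {rr:.1f} (95% CI {lo:.1f} to {hi:.1f})")
    ```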

  5. The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 2. How a mistake led BEIR I to adopt LNT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calabrese, Edward J., E-mail: edwardc@schoolph.uma

    This paper reveals that nearly 25 years after Russell's dose-rate data were used to support the adoption of the linear no-threshold (LNT) dose-response model for genetic and cancer risk assessment, Russell acknowledged a significant under-reporting of the mutation rate of the historical control group. This error, which was unknown to BEIR I, had profound implications, leading it to incorrectly adopt the LNT model, a decision that profoundly changed the course of risk assessment for radiation and chemicals to the present. -- Highlights: • The BEAR I Genetics Panel made an error in denying dose rate for mutation. • The BEIR I Genetics Subcommittee attempted to correct this dose rate error. • The control group used for risk assessment by BEIR I is now known to be in error. • Correcting this error contradicts the LNT, supporting a threshold model.

  6. Impact of automated dispensing cabinets on medication selection and preparation error rates in an emergency department: a prospective and direct observational before-and-after study.

    PubMed

    Fanning, Laura; Jones, Nick; Manias, Elizabeth

    2016-04-01

    The implementation of automated dispensing cabinets (ADCs) in healthcare facilities appears to be increasing, in particular within Australian hospital emergency departments (EDs). While the investment in ADCs is on the increase, no studies have specifically investigated the impacts of ADCs on medication selection and preparation error rates in EDs. Our aim was to assess the impact of ADCs on medication selection and preparation error rates in an ED of a tertiary teaching hospital. Pre intervention and post intervention study involving direct observations of nurses completing medication selection and preparation activities before and after the implementation of ADCs in the original and new emergency departments within a 377-bed tertiary teaching hospital in Australia. Medication selection and preparation error rates were calculated and compared between these two periods. Secondary end points included the impact on medication error type and severity. A total of 2087 medication selection and preparations were observed among 808 patients pre and post intervention. Implementation of ADCs in the new ED resulted in a 64.7% (1.96% versus 0.69%, respectively, P = 0.017) reduction in medication selection and preparation errors. All medication error types were reduced in the post intervention study period. There was an insignificant impact on medication error severity as all errors detected were categorised as minor. The implementation of ADCs could reduce medication selection and preparation errors and improve medication safety in an ED setting. © 2015 John Wiley & Sons, Ltd.

  7. Teamwork and clinical error reporting among nurses in Korean hospitals.

    PubMed

    Hwang, Jee-In; Ahn, Jeonghoon

    2015-03-01

    To examine levels of teamwork and its relationships with clinical error reporting among Korean hospital nurses. The study employed a cross-sectional survey design. We distributed a questionnaire to 674 nurses in two teaching hospitals in Korea. The questionnaire included items on teamwork and the reporting of clinical errors. We measured teamwork using the Teamwork Perceptions Questionnaire, which has five subscales including team structure, leadership, situation monitoring, mutual support, and communication. Using logistic regression analysis, we determined the relationships between teamwork and error reporting. The response rate was 85.5%. The mean score of teamwork was 3.5 out of 5. At the subscale level, mutual support was rated highest, while leadership was rated lowest. Of the participating nurses, 522 responded that they had experienced at least one clinical error in the last 6 months. Among those, only 53.0% responded that they always or usually reported clinical errors to their managers and/or the patient safety department. Teamwork was significantly associated with better error reporting. Specifically, nurses with a higher team communication score were more likely to report clinical errors to their managers and the patient safety department (odds ratio = 1.82, 95% confidence intervals [1.05, 3.14]). Teamwork was rated as moderate and was positively associated with nurses' error reporting performance. Hospital executives and nurse managers should make substantial efforts to enhance teamwork, which will contribute to encouraging the reporting of errors and improving patient safety. Copyright © 2015. Published by Elsevier B.V.

  8. Determination of Type I Error Rates and Power of Answer Copying Indices under Various Conditions

    ERIC Educational Resources Information Center

    Yormaz, Seha; Sünbül, Önder

    2017-01-01

    This study aims to determine the Type I error rates and power of S[subscript 1] , S[subscript 2] indices and kappa statistic at detecting copying on multiple-choice tests under various conditions. It also aims to determine how copying groups are created in order to calculate how kappa statistics affect Type I error rates and power. In this study,…

  9. Can a two-hour lecture by a pharmacist improve the quality of prescriptions in a pediatric hospital? A retrospective cohort study.

    PubMed

    Vairy, Stephanie; Corny, Jennifer; Jamoulle, Olivier; Levy, Arielle; Lebel, Denis; Carceller, Ana

    2017-12-01

    A high rate of prescription errors exists in pediatric teaching hospitals, especially during initial training. To determine the effectiveness of a two-hour lecture by a pharmacist on rates of prescription errors and quality of prescriptions. A two-hour lecture led by a pharmacist was provided to 11 junior pediatric residents (PGY-1) as part of a one-month immersion program. A control group included 15 residents without the intervention. We reviewed charts to analyze the first 50 prescriptions of each resident. Data were collected from 1300 prescriptions involving 451 patients, 550 in the intervention group and 750 in the control group. The rate of prescription errors in the intervention group was 9.6% compared to 11.3% in the control group (p=0.32), affecting 106 patients. Statistically significant differences between both groups were prescriptions with unwritten doses (p=0.01) and errors involving overdosing (p=0.04). We identified many errors as well as issues surrounding quality of prescriptions. We found a 10.6% prescription error rate. This two-hour lecture seems insufficient to reduce prescription errors among junior pediatric residents. This study highlights the most frequent types of errors and prescription quality issues that should be targeted by future educational interventions.

  10. Zero tolerance prescribing: a strategy to reduce prescribing errors on the paediatric intensive care unit.

    PubMed

    Booth, Rachelle; Sturgess, Emma; Taberner-Stokes, Alison; Peters, Mark

    2012-11-01

    To establish the baseline prescribing error rate in a tertiary paediatric intensive care unit (PICU) and to determine the impact of a zero tolerance prescribing (ZTP) policy incorporating a dedicated prescribing area and daily feedback of prescribing errors. A prospective, non-blinded, observational study was undertaken in a 12-bed tertiary PICU over a period of 134 weeks. Baseline prescribing error data were collected on weekdays for all patients for a period of 32 weeks, following which the ZTP policy was introduced. Daily error feedback was introduced after a further 12 months. Errors were sub-classified as 'clinical', 'non-clinical' and 'infusion prescription' errors and the effects of interventions considered separately. The baseline combined prescribing error rate was 892 (95 % confidence interval (CI) 765-1,019) errors per 1,000 PICU occupied bed days (OBDs), comprising 25.6 % clinical, 44 % non-clinical and 30.4 % infusion prescription errors. The combined interventions of ZTP plus daily error feedback were associated with a reduction in the combined prescribing error rate to 447 (95 % CI 389-504) errors per 1,000 OBDs (p < 0.0001), an absolute risk reduction of 44.5 % (95 % CI 40.8-48.0 %). Introduction of the ZTP policy was associated with a significant decrease in clinical and infusion prescription errors, while the introduction of daily error feedback was associated with a significant reduction in non-clinical prescribing errors. The combined interventions of ZTP and daily error feedback were associated with a significant reduction in prescribing errors in the PICU, in line with Department of Health requirements of a 40 % reduction within 5 years.

  11. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    PubMed

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite benefit, such software is not without limitations, and transcription errors have been widely reported. Evaluate the frequency and nature of non-clinical transcription error using VR dictation software. Retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time was collected. 67 (17.72 %) reports contained ≥1 errors, with 7 (1.85 %) containing 'significant' and 9 (2.38 %) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22 %) classified as 'insignificant', 7 (7.78 %) as 'significant', 9 (10 %) as 'very significant'. 68 (75.56 %) errors were 'spelling and grammar', 20 (22.22 %) 'missense' and 2 (2.22 %) 'nonsense'. 'Punctuation' error was most common sub-type, accounting for 27 errors (30 %). Complex imaging modalities had higher error rates per report and sentence. Computed tomography contained 0.040 errors per sentence compared to plain film with 0.030. Longer reports had a higher error rate, with reports >25 sentences containing an average of 1.23 errors per report compared to 0-5 sentences containing 0.09. These findings highlight the limitations of VR dictation software. While most error was deemed insignificant, there were occurrences of error with potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates and this should be taken into account by the reporting radiologist.

  12. Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.

    2015-12-01

    Our study describes complications introduced by angular direct ionization events on space error rate predictions. In particular, prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.

  13. Reducing the Familiarity of Conjunction Lures with Pictures

    ERIC Educational Resources Information Center

    Lloyd, Marianne E.

    2013-01-01

    Four experiments were conducted to test whether conjunction errors were reduced after pictorial encoding and whether the semantic overlap between study and conjunction items would impact error rates. Across 4 experiments, compound words studied with a single-picture had lower conjunction error rates during a recognition test than those words…

  14. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... rates, which is defined as the percentage of cases with an error (expressed as the total number of cases with an error compared to the total number of cases); the percentage of cases with an improper payment...

  15. Certification of ICI 1012 optical data storage tape

    NASA Technical Reports Server (NTRS)

    Howell, J. M.

    1993-01-01

    ICI has developed a unique and novel method of certifying a Terabyte optical tape. The tape quality is guaranteed as a statistical upper limit on the probability of uncorrectable errors. This is called the Corrected Byte Error Rate or CBER. We developed this probabilistic method because there are two reasons why the error rate cannot be measured directly. Firstly, written data is indelible, so one cannot employ write/read tests such as those used for magnetic tape. Secondly, the anticipated error rates need impractically large samples to measure accurately. For example, a rate of 1E-12 implies only one byte in error per tape. The archivability of ICI 1012 Data Storage Tape in general is well characterized and understood. Nevertheless, customers expect performance guarantees to be supported by test results on individual tapes. In particular, they need assurance that data is retrievable after decades in archive. This paper describes the mathematical basis, measurement apparatus and applicability of the certification method.
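
    The abstract does not give ICI's formula, but the general problem of certifying an upper limit on a byte error rate from a finite, typically error-free, readback sample is usually handled with a one-sided binomial (or Poisson) confidence bound. The sketch below shows that generic calculation with an invented sample size; it is not ICI's proprietary method.

    ```python
    import math

    def upper_bound_error_rate(bytes_tested, confidence=0.95):
        """One-sided upper confidence bound on the per-byte error probability
        when zero errors were observed in the tested sample
        ('rule of three' style bound: p_upper = -ln(1 - confidence) / n)."""
        return -math.log(1.0 - confidence) / bytes_tested

    # Invented example: 1e10 bytes read back with no uncorrectable errors.
    print(f"95% upper bound on CBER ~ {upper_bound_error_rate(1e10):.1e}")
    # ~3e-10: the certifiable bound scales inversely with the sample size, which
    # is why a rate like 1e-12 cannot be verified by direct measurement per tape.
    ```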

  16. The dependence of crowding on flanker complexity and target-flanker similarity

    PubMed Central

    Bernard, Jean-Baptiste; Chung, Susana T.L.

    2013-01-01

    We examined the effects of the spatial complexity of flankers and target-flanker similarity on the performance of identifying crowded letters. On each trial, observers identified the middle character of random strings of three characters (“trigrams”) briefly presented at 10° below fixation. We tested the 26 lowercase letters of the Times-Roman and Courier fonts, a set of 79 characters (letters and non-letters) of the Times-Roman font, and the uppercase letters of two highly complex ornamental fonts, Edwardian and Aristocrat. Spatial complexity of characters was quantified by the length of the morphological skeleton of each character, and target-flanker similarity was defined based on a psychometric similarity matrix. Our results showed that (1) letter identification error rate increases with flanker complexity up to a certain value, beyond which error rate becomes independent of flanker complexity; (2) the increase of error rate is slower for high-complexity target letters; (3) error rate increases with target-flanker similarity; and (4) mislocation error rate increases with target-flanker similarity. These findings, combined with the current understanding of the faulty feature integration account of crowding, provide some constraints of how the feature integration process could cause perceptual errors. PMID:21730225

  17. Total energy based flight control system

    NASA Technical Reports Server (NTRS)

    Lambregts, Antonius A. (Inventor)

    1985-01-01

    An integrated aircraft longitudinal flight control system uses a generalized thrust and elevator command computation (38), which accepts flight path angle and longitudinal acceleration command signals, along with associated feedback signals, to form energy rate error (20) and energy rate distribution error (18) signals. The engine thrust command is developed (22) as a function of the energy rate error, and the elevator position command is developed (26) as a function of the energy rate distribution error. For any vertical flight path and speed mode the outer-loop errors are normalized (30, 34) to produce flight path angle and longitudinal acceleration commands. The system provides decoupled flight path and speed control for all control modes previously provided by the longitudinal autopilot, autothrottle and flight management systems.
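
    In a total-energy formulation, the specific total-energy rate is proportional to gamma + Vdot/g and the energy distribution rate to their difference; thrust then nulls the total-energy rate error while the elevator nulls the distribution error. The sketch below is a generic illustration of those two error signals, not the patented implementation; the flight-condition numbers are invented and sign conventions vary between implementations.

    ```python
    import math

    g = 9.81  # m/s^2

    def energy_rate_errors(gamma_cmd, vdot_cmd, gamma, vdot):
        """Return (total energy rate error, energy distribution rate error),
        both normalized by airspeed, for a TECS-style controller."""
        total_err = (gamma_cmd + vdot_cmd / g) - (gamma + vdot / g)
        distrib_err = (vdot_cmd / g - gamma_cmd) - (vdot / g - gamma)
        return total_err, distrib_err

    # Invented condition: commanded 2 deg climb at constant speed, aircraft
    # currently level and decelerating slightly.
    tot, dist = energy_rate_errors(gamma_cmd=math.radians(2.0), vdot_cmd=0.0,
                                   gamma=0.0, vdot=-0.2)
    print(f"total energy rate error   : {tot:+.4f}")   # drives the thrust command
    print(f"energy distribution error : {dist:+.4f}")  # drives the elevator command
    ```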

  18. A comparison of endoscopic localization error rate between operating surgeons and referring endoscopists in colorectal cancer.

    PubMed

    Azin, Arash; Saleh, Fady; Cleghorn, Michelle; Yuen, Andrew; Jackson, Timothy; Okrainec, Allan; Quereshy, Fayez A

    2017-03-01

    Colonoscopy for colorectal cancer (CRC) has a localization error rate as high as 21 %. Such errors can have substantial clinical consequences, particularly in laparoscopic surgery. The primary objective of this study was to compare accuracy of tumor localization at initial endoscopy performed by either the operating surgeon or non-operating referring endoscopist. All patients who underwent surgical resection for CRC at a large tertiary academic hospital between January 2006 and August 2014 were identified. The exposure of interest was the initial endoscopist: (1) surgeon who also performed the definitive operation (operating surgeon group); and (2) referring gastroenterologist or general surgeon (referring endoscopist group). The outcome measure was localization error, defined as a difference in at least one anatomic segment between initial endoscopy and final operative location. Multivariate logistic regression was used to explore the association between localization error rate and the initial endoscopist. A total of 557 patients were included in the study; 81 patients in the operating surgeon cohort and 476 patients in the referring endoscopist cohort. Initial diagnostic colonoscopy performed by the operating surgeon compared to referring endoscopist demonstrated statistically significant lower intraoperative localization error rate (1.2 vs. 9.0 %, P = 0.016); shorter mean time from endoscopy to surgery (52.3 vs. 76.4 days, P = 0.015); higher tattoo localization rate (32.1 vs. 21.0 %, P = 0.027); and lower preoperative repeat endoscopy rate (8.6 vs. 40.8 %, P < 0.001). Initial endoscopy performed by the operating surgeon was protective against localization error on both univariate analysis, OR 7.94 (95 % CI 1.08-58.52; P = 0.016), and multivariate analysis, OR 7.97 (95 % CI 1.07-59.38; P = 0.043). This study demonstrates that diagnostic colonoscopies performed by an operating surgeon are independently associated with a lower localization error rate. Further research exploring the factors influencing localization accuracy and why operating surgeons have lower error rates relative to non-operating endoscopists is necessary to understand differences in care.

  19. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review.

    PubMed

    Mathes, Tim; Klaßen, Pauline; Pieper, Dawid

    2017-11-28

    Our objective was to assess the frequency of data extraction errors and its potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies; four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of different extraction methods.

  20. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
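
    As a language-agnostic illustration of the loop-exit detector idea, the sketch below maintains a strided index as a recurrence (the way address generation typically works in a loop), lets any corruption keep flowing through that recurrence, and checks the index against its closed-form value once at loop exit. This is a conceptual Python sketch with an invented fault-injection hook, not PRESAGE's compiler transformation.

    ```python
    def strided_sum(data, base, stride, count, inject_bitflip_at=None):
        """Sum 'count' elements starting at 'base' with the given stride,
        with a PRESAGE-style consistency check at loop exit."""
        idx = base
        total = 0
        for i in range(count):
            if i == inject_bitflip_at:
                idx ^= 1 << 4                  # simulated soft error in the index
            total += data[idx % len(data)]     # the access itself may be corrupted
            idx += stride                      # the error, once incurred, keeps flowing
        expected = base + count * stride       # closed-form value checked at loop exit
        if idx != expected:
            raise RuntimeError(f"address-generation fault detected: {idx} != {expected}")
        return total

    data = list(range(1000))
    print(strided_sum(data, base=0, stride=3, count=50))                  # clean run
    try:
        strided_sum(data, base=0, stride=3, count=50, inject_bitflip_at=10)
    except RuntimeError as err:
        print(err)
    ```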

  1. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  2. Antiretroviral medication prescribing errors are common with hospitalization of HIV-infected patients.

    PubMed

    Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel

    2014-01-01

    Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03) but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never detected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.

  3. Online Error Reporting for Managing Quality Control Within Radiology.

    PubMed

    Golnari, Pedram; Forsberg, Daniel; Rosipko, Beverly; Sunshine, Jeffrey L

    2016-06-01

    Information technology systems within health care, such as picture archiving and communication system (PACS) in radiology, can have a positive impact on production but can also risk compromising quality. The widespread use of PACS has removed the previous feedback loop between radiologists and technologists. Instead of direct communication of quality discrepancies found for an examination, the radiologist submitted a paper-based quality-control report. A web-based issue-reporting tool can help restore some of the feedback loop and also provide possibilities for more detailed analysis of submitted errors. The purpose of this study was to evaluate the hypothesis that data from use of an online error reporting software for quality control can focus our efforts within our department. For the 372,258 radiologic examinations conducted during the 6-month period study, 930 errors (390 exam protocol, 390 exam validation, and 150 exam technique) were submitted, corresponding to an error rate of 0.25 %. Within the category exam protocol, technologist documentation had the highest number of submitted errors in ultrasonography (77 errors [44 %]), while imaging protocol errors were the highest subtype error for computed tomography modality (35 errors [18 %]). Positioning and incorrect accession had the highest errors in the exam technique and exam validation error category, respectively, for nearly all of the modalities. An error rate less than 1 % could signify a system with a very high quality; however, a more likely explanation is that not all errors were detected or reported. Furthermore, staff reception of the error reporting system could also affect the reporting rate.

  4. Der Organismus der Mathematik - mikro-, makro- und mesoskopisch betrachtet

    NASA Astrophysics Data System (ADS)

    Winkler, Reinhard

    Similar conversations about mathematics usually end at about this point, without the non-mathematician having been convinced of the meaningfulness of mathematical research, or indeed of mathematical activity in general. I do not believe the layperson can be accused of blindness to the greatness of our science when no more satisfying communication comes about. I see the cause rather in a greatly abridged picture of mathematics, one that even specialists often draw because a more adequate portrayal of their field is too much effort for them - and this even though mathematics can only be practiced by someone who otherwise by no means shies away from intellectual effort. I will try to get to the bottom of the causes of this peculiar phenomenon.

  5. Acoustic levitation technique for containerless processing at high temperatures in space

    NASA Technical Reports Server (NTRS)

    Rey, Charles A.; Merkley, Dennis R.; Hammarlund, Gregory R.; Danley, Thomas J.

    1988-01-01

    High temperature processing of a small specimen without a container has been demonstrated in a set of experiments using an acoustic levitation furnace in the microgravity of space. This processing technique includes the positioning, heating, melting, cooling, and solidification of a material supported without physical contact with a container or other surface. The specimen is supported in a potential energy well, created by an acoustic field, which is sufficiently strong to position the specimen in the microgravity environment of space. This containerless processing apparatus has been successfully tested on the Space Shuttle during the STS-61A mission. In that experiment, three samples were successfully levitated and processed at temperatures from 600 to 1500 C. Experiment data and results are presented.

  6. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

    We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built. It included a ground terminal and a space terminal. Ranging and range rate tests were conducted in two configurations. In the communication configuration with 622 data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 x 10 (exp -15) with 10 second averaging time. Ranging and range-rate as a function of Bit Error Rate of the communication link is reported. They are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 x 10 (exp -15) with 10 second averaging time. We identified the major noise sources in the current system as the transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve the system performance for both operating modes.
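
    The stability figures above are quoted as modified Allan deviations. As a rough, hedged sketch of the underlying statistic, the ordinary (non-modified) Allan deviation can be computed from equally spaced time-error samples as below; the modified variant additionally averages phase within each interval, and the synthetic data here is purely illustrative.

```python
import numpy as np

def allan_deviation(x, tau0):
    """Ordinary Allan deviation from equally spaced phase (time-error) samples x,
    evaluated at the basic averaging time tau0 (the sample spacing)."""
    x = np.asarray(x, dtype=float)
    d2 = x[2:] - 2 * x[1:-1] + x[:-2]            # second differences of phase
    avar = np.sum(d2 ** 2) / (2 * (len(x) - 2) * tau0 ** 2)
    return np.sqrt(avar)

rng = np.random.default_rng(1)
phase = np.cumsum(rng.normal(scale=1e-12, size=10_000))   # toy random-walk phase, seconds
print(allan_deviation(phase, tau0=0.1))
```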

  7. Estimating population genetic parameters and comparing model goodness-of-fit using DNA sequences with error

    PubMed Central

    Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric

    2010-01-01

    It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140

  8. A long-term follow-up evaluation of electronic health record prescribing safety

    PubMed Central

    Abramson, Erika L; Malhotra, Sameer; Osorio, S Nena; Edwards, Alison; Cheriff, Adam; Cole, Curtis; Kaushal, Rainu

    2013-01-01

    Objective To be eligible for incentives through the Electronic Health Record (EHR) Incentive Program, many providers using older or locally developed EHRs will be transitioning to new, commercial EHRs. We previously evaluated prescribing errors made by providers in the first year following transition from a locally developed EHR with minimal prescribing clinical decision support (CDS) to a commercial EHR with robust CDS. Following system refinements, we conducted this study to assess the rates and types of errors 2 years after transition and determine the evolution of errors. Materials and methods We conducted a mixed methods cross-sectional case study of 16 physicians at an academic-affiliated ambulatory clinic from April to June 2010. We utilized standardized prescription and chart review to identify errors. Fourteen providers also participated in interviews. Results We analyzed 1905 prescriptions. The overall prescribing error rate was 3.8 per 100 prescriptions (95% CI 2.8 to 5.1). Error rates were significantly lower 2 years after transition (p<0.001 compared to pre-implementation, 12 weeks and 1 year after transition). Rates of near misses remained unchanged. Providers positively appreciated most system refinements, particularly reduced alert firing. Discussion Our study suggests that over time and with system refinements, use of a commercial EHR with advanced CDS can lead to low prescribing error rates, although more serious errors may require targeted interventions to eliminate them. Reducing alert firing frequency appears particularly important. Our results provide support for federal efforts promoting meaningful use of EHRs. Conclusions Ongoing error monitoring can allow CDS to be optimally tailored and help achieve maximal safety benefits. Clinical Trials Registration ClinicalTrials.gov, Identifier: NCT00603070. PMID:23578816

  9. Prevalence and cost of hospital medical errors in the general and elderly United States populations.

    PubMed

    Mallow, Peter J; Pandya, Bhavik; Horblyuk, Ruslan; Kaplan, Harold S

    2013-12-01

    The primary objective of this study was to quantify the differences in the prevalence rate and costs of hospital medical errors between the general population and an elderly population aged ≥65 years. Methods from an actuarial study of medical errors were modified to identify medical errors in the Premier Hospital Database using data from 2009. Visits with more than four medical errors were removed from the population to avoid over-estimation of cost. Prevalence rates were calculated based on the total number of inpatient visits. There were 3,466,596 total inpatient visits in 2009. Of these, 1,230,836 (36%) occurred in people aged ≥ 65. The prevalence rate was 49 medical errors per 1000 inpatient visits in the general cohort and 79 medical errors per 1000 inpatient visits for the elderly cohort. The top 10 medical errors accounted for more than 80% of the total in the general cohort and the 65+ cohort. The most costly medical error for the general population was postoperative infection ($569,287,000). Pressure ulcers were most costly ($347,166,257) in the elderly population. This study was conducted with a hospital administrative database, and assumptions were necessary to identify medical errors in the database. Further, there was no method to identify errors of omission or misdiagnoses within the database. This study indicates that prevalence of hospital medical errors for the elderly is greater than the general population and the associated cost of medical errors in the elderly population is quite substantial. Hospitals which further focus their attention on medical errors in the elderly population may see a significant reduction in costs due to medical errors as a disproportionate percentage of medical errors occur in this age group.

  10. The effectiveness of risk management program on pediatric nurses' medication error.

    PubMed

    Dehghan-Nayeri, Nahid; Bayat, Fariba; Salehi, Tahmineh; Faghihzadeh, Soghrat

    2013-09-01

    Medication therapy is one of the most complex and high-risk clinical processes that nurses deal with. Medication error is the most common type of error that brings about damage and death to patients, especially pediatric ones. However, these errors are preventable. Identifying and preventing undesirable events leading to medication errors are the main risk management activities. The aim of this study was to investigate the effectiveness of a risk management program on the pediatric nurses' medication error rate. This study is a quasi-experimental one with a comparison group. In this study, 200 nurses were recruited from two main pediatric hospitals in Tehran. In the experimental hospital, we applied the risk management program for a period of 6 months. Nurses of the control hospital did the hospital routine schedule. A pre- and post-test was performed to measure the frequency of the medication error events. SPSS software, t-test, and regression analysis were used for data analysis. After the intervention, the medication error rate of nurses at the experimental hospital was significantly lower (P < 0.001) and the error-reporting rate was higher (P < 0.007) compared to before the intervention and also in comparison to the nurses of the control hospital. Based on the results of this study and taking into account the high-risk nature of the medical environment, applying the quality-control programs such as risk management can effectively prevent the occurrence of the hospital undesirable events. Nursing mangers can reduce the medication error rate by applying risk management programs. However, this program cannot succeed without nurses' cooperation.

  11. Rate, causes and reporting of medication errors in Jordan: nurses' perspectives.

    PubMed

    Mrayyan, Majd T; Shishani, Kawkab; Al-Faouri, Ibrahim

    2007-09-01

    The aim of the study was to describe Jordanian nurses' perceptions about various issues related to medication errors. This is the first nursing study about medication errors in Jordan. This was a descriptive study. A convenient sample of 799 nurses from 24 hospitals was obtained. Descriptive and inferential statistics were used for data analysis. Over the course of their nursing career, the average number of recalled committed medication errors per nurse was 2.2. Using incident reports, the rate of medication errors reported to nurse managers was 42.1%. Medication errors occurred mainly when medication labels/packaging were of poor quality or damaged. Nurses failed to report medication errors because they were afraid that they might be subjected to disciplinary actions or even lose their jobs. In the stepwise regression model, gender was the only predictor of medication errors in Jordan. Strategies to reduce or eliminate medication errors are required.

  12. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
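
    As a hedged sketch of the quantization step described above (using a flat placeholder matrix rather than the perceptually tuned matrix of the invention), an 8x8 image block can be transformed, quantized, and reconstructed as follows.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):  return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(block): return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # one 8x8 image block
Q = np.full((8, 8), 16.0)                                  # placeholder quantization matrix
coeffs = np.round(dct2(block) / Q)                         # quantize DCT coefficients
recon = idct2(coeffs * Q)                                  # dequantize and invert
print(np.abs(block - recon).max())                         # per-block reconstruction error
```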

  13. Error analysis of high-rate GNSS precise point positioning for seismic wave measurement

    NASA Astrophysics Data System (ADS)

    Shu, Yuanming; Shi, Yun; Xu, Peiliang; Niu, Xiaoji; Liu, Jingnan

    2017-06-01

    High-rate GNSS precise point positioning (PPP) has been playing a more and more important role in providing precise positioning information in fast time-varying environments. Although kinematic PPP is commonly known to have a precision of a few centimeters, the precision of high-rate PPP within a short period of time has been reported recently with experiments to reach a few millimeters in the horizontal components and sub-centimeters in the vertical component to measure seismic motion, which is several times better than the conventional kinematic PPP practice. To fully understand the mechanism of mystified excellent performance of high-rate PPP within a short period of time, we have carried out a theoretical error analysis of PPP and conducted the corresponding simulations within a short period of time. The theoretical analysis has clearly indicated that the high-rate PPP errors consist of two types: the residual systematic errors at the starting epoch, which affect high-rate PPP through the change of satellite geometry, and the time-varying systematic errors between the starting epoch and the current epoch. Both the theoretical error analysis and simulated results are fully consistent with and thus have unambiguously confirmed the reported high precision of high-rate PPP, which has been further affirmed here by the real data experiments, indicating that high-rate PPP can indeed achieve the millimeter level of precision in the horizontal components and the sub-centimeter level of precision in the vertical component to measure motion within a short period of time. The simulation results have clearly shown that the random noise of carrier phases and higher order ionospheric errors are two major factors to affect the precision of high-rate PPP within a short period of time. The experiments with real data have also indicated that the precision of PPP solutions can degrade to the cm level in both the horizontal and vertical components, if the geometry of satellites is rather poor with a large DOP value.

  14. Probability of Detection of Genotyping Errors and Mutations as Inheritance Inconsistencies in Nuclear-Family Data

    PubMed Central

    Douglas, Julie A.; Skol, Andrew D.; Boehnke, Michael

    2002-01-01

    Gene-mapping studies routinely rely on checking for Mendelian transmission of marker alleles in a pedigree, as a means of screening for genotyping errors and mutations, with the implicit assumption that, if a pedigree is consistent with Mendel’s laws of inheritance, then there are no genotyping errors. However, the occurrence of inheritance inconsistencies alone is an inadequate measure of the number of genotyping errors, since the rate of occurrence depends on the number and relationships of genotyped pedigree members, the type of errors, and the distribution of marker-allele frequencies. In this article, we calculate the expected probability of detection of a genotyping error or mutation as an inheritance inconsistency in nuclear-family data, as a function of both the number of genotyped parents and offspring and the marker-allele frequency distribution. Through computer simulation, we explore the sensitivity of our analytic calculations to the underlying error model. Under a random-allele–error model, we find that detection rates are 51%–77% for multiallelic markers and 13%–75% for biallelic markers; detection rates are generally lower when the error occurs in a parent than in an offspring, unless a large number of offspring are genotyped. Errors are especially difficult to detect for biallelic markers with equally frequent alleles, even when both parents are genotyped; in this case, the maximum detection rate is 34% for four-person nuclear families. Error detection in families in which parents are not genotyped is limited, even with multiallelic markers. Given these results, we recommend that additional error checking (e.g., on the basis of multipoint analysis) be performed, beyond routine checking for Mendelian consistency. Furthermore, our results permit assessment of the plausibility of an observed number of inheritance inconsistencies for a family, allowing the detection of likely pedigree—rather than genotyping—errors in the early stages of a genome scan. Such early assessments are valuable in either the targeting of families for resampling or discontinued genotyping. PMID:11791214
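
    A hedged Monte Carlo sketch of the same question for the simplest case, a fully genotyped parent-offspring trio and a biallelic marker, is given below; the random-allele error model and allele frequency are assumptions for illustration, and the paper's analytic calculations cover more general family structures.

```python
import random

def draw_allele(p):          # allele '1' has frequency p
    return 1 if random.random() < p else 0

def mendel_consistent(child, mom, dad):
    a, b = child
    return (a in mom and b in dad) or (b in mom and a in dad)

def detection_rate(p=0.5, trials=100_000):
    detected = 0
    for _ in range(trials):
        mom = (draw_allele(p), draw_allele(p))
        dad = (draw_allele(p), draw_allele(p))
        child = [random.choice(mom), random.choice(dad)]
        child[random.randrange(2)] = draw_allele(p)   # random-allele error in the child
        if not mendel_consistent(tuple(child), mom, dad):
            detected += 1
    return detected / trials

print(detection_rate())   # fraction of simulated errors surfacing as inconsistencies
```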

  15. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    PubMed

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
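
    A small simulation in the spirit of the study is sketched below (the binomial "sum score" is an assumed stand-in for a short, difficult test): two samples are drawn from the same skewed population, outliers beyond |Z| = 2 are removed within each group, and an independent samples t test is run; the rejection rate under the null is typically well above the nominal 5% for such skewed scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def skewed_sum_score(n):
    # toy stand-in for a short, difficult test: sum of 10 hard binary items
    return rng.binomial(n=10, p=0.2, size=n)

def t_test_after_outlier_removal(n=50, z_cut=2.0, reps=5000):
    rejections = 0
    for _ in range(reps):
        a, b = skewed_sum_score(n), skewed_sum_score(n)   # same population: H0 true
        a = a[np.abs(stats.zscore(a)) < z_cut]            # within-group outlier removal
        b = b[np.abs(stats.zscore(b)) < z_cut]
        p = stats.ttest_ind(a, b, equal_var=True).pvalue
        rejections += p < 0.05
    return rejections / reps

print(t_test_after_outlier_removal())   # Type I error rate after removal
```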

  16. Does raising type 1 error rate improve power to detect interactions in linear regression models? A simulation study.

    PubMed

    Durand, Casey P

    2013-01-01

    Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
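
    One cell of such a Monte Carlo can be sketched as follows (a hedged illustration assuming statsmodels' formula interface; effect sizes and sample size are arbitrary): the rejection rate for a continuous-by-dichotomous interaction is tallied at a chosen alpha, giving the Type 1 error rate when the interaction is null and the power otherwise.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

def rejection_rate(n=200, beta_int=0.0, alpha=0.05, reps=1000):
    hits = 0
    for _ in range(reps):
        x1 = rng.normal(size=n)                      # continuous predictor
        x2 = rng.integers(0, 2, size=n)              # dichotomous predictor
        y = 0.3 * x1 + 0.3 * x2 + beta_int * x1 * x2 + rng.normal(size=n)
        df = pd.DataFrame({"y": y, "x1": x1, "x2": x2})
        fit = smf.ols("y ~ x1 * x2", data=df).fit()
        hits += fit.pvalues["x1:x2"] < alpha
    return hits / reps

print(rejection_rate(beta_int=0.0, alpha=0.05))   # Type 1 error rate (should be ~alpha)
print(rejection_rate(beta_int=0.2, alpha=0.10))   # power with a relaxed alpha
```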

  17. An Evaluation of Commercial Pedometers for Monitoring Slow Walking Speed Populations.

    PubMed

    Beevi, Femina H A; Miranda, Jorge; Pedersen, Christian F; Wagner, Stefan

    2016-05-01

    Pedometers are considered desirable devices for monitoring physical activity. Two population groups of interest include patients having undergone surgery in the lower extremities or who are otherwise weakened through disease, medical treatment, or surgery procedures, as well as the slow walking senior population. For these population groups, pedometers must be able to perform reliably and accurately at slow walking speeds. The objectives of this study were to evaluate the step count accuracy of three commercially available pedometers, the Yamax (Tokyo, Japan) Digi-Walker(®) SW-200 (YM), the Omron (Kyoto, Japan) HJ-720 (OM), and the Fitbit (San Francisco, CA) Zip (FB), at slow walking speeds, specifically at 1, 2, and 3 km/h, and to raise awareness of the necessity of focusing research on step-counting devices and algorithms for slow walking populations. Fourteen participants 29.93 ±4.93 years of age were requested to walk on a treadmill at the three specified speeds, in four trials of 100 steps each. The devices were worn by the participants on the waist belt. The pedometer counts were recorded, and the error percentage was calculated. The error rate of all three evaluated pedometers decreased with the increase of speed: at 1 km/h the error rates varied from 87.11% (YM) to 95.98% (FB), at 2 km/h the error rates varied from 17.27% (FB) to 46.46% (YM), and at 3 km/h the error rates varied from 22.46% (YM) to a slight overcount of 0.70% (FB). It was observed that all the evaluated devices have high error rates at 1 km/h and mixed error rates at 2 km/h, and at 3 km/h the error rates are the smallest of the three assessed speeds, with the OM and the FB having a slight overcount. These results show that research on pedometers' software and hardware should focus more on accurate step detection at slow walking speeds.

  18. Impact of Measurement Error on Synchrophasor Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  19. Refractive errors in medical students in Singapore.

    PubMed

    Woo, W W; Lim, K A; Yang, H; Lim, X Y; Liew, F; Lee, Y S; Saw, S M

    2004-10-01

    Refractive errors are becoming more of a problem in many societies, with prevalence rates of myopia in many Asian urban countries reaching epidemic proportions. This study aims to determine the prevalence rates of various refractive errors in Singapore medical students. 157 second year medical students (aged 19-23 years) in Singapore were examined. Refractive error measurements were determined using a stand-alone autorefractor. Additional demographical data was obtained via questionnaires filled in by the students. The prevalence rate of myopia in Singapore medical students was 89.8 percent (Spherical equivalence (SE) at least -0.50 D). Hyperopia was present in 1.3 percent (SE more than +0.50 D) of the participants and the overall astigmatism prevalence rate was 82.2 percent (Cylinder at least 0.50 D). Prevalence rates of myopia and astigmatism in second year Singapore medical students are one of the highest in the world.

  20. Social deviance activates the brain's error-monitoring system.

    PubMed

    Kim, Bo-Rin; Liss, Alison; Rao, Monica; Singer, Zachary; Compton, Rebecca J

    2012-03-01

    Social psychologists have long noted the tendency for human behavior to conform to social group norms. This study examined whether feedback indicating that participants had deviated from group norms would elicit a neural signal previously shown to be elicited by errors and monetary losses. While electroencephalograms were recorded, participants (N = 30) rated the attractiveness of 120 faces and received feedback giving the purported average rating made by a group of peers. The feedback was manipulated so that group ratings either were the same as a participant's rating or deviated by 1, 2, or 3 points. Feedback indicating deviance from the group norm elicited a feedback-related negativity, a brainwave signal known to be elicited by objective performance errors and losses. The results imply that the brain treats deviance from social norms as an error.

  1. Cryptographic robustness of a quantum cryptography system using phase-time coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2008-01-15

    A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.

  2. Automatic learning rate adjustment for self-supervising autonomous robot control

    NASA Technical Reports Server (NTRS)

    Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.

    1992-01-01

    Described is an application in which an Artificial Neural Network (ANN) controls the positioning of a robot arm with five degrees of freedom by using visual feedback provided by two cameras. This application and the specific ANN model, local linear maps, are based on the work of Ritter, Martinetz, and Schulten. We extended their approach by generating a filtered, average positioning error from the continuous camera feedback and by coupling the learning rate to this error. When the network learns to position the arm, the positioning error decreases and so does the learning rate until the system stabilizes at a minimum error and learning rate. This abolishes the need for a predetermined cooling schedule. The automatic cooling procedure results in a closed loop control with no distinction between a learning phase and a production phase. If the positioning error suddenly starts to increase due to an internal failure such as a broken joint, or an environmental change such as a camera moving, the learning rate increases accordingly. Thus, learning is automatically activated and the network adapts to the new condition after which the error decreases again and learning is 'shut off'. The automatic cooling is therefore a prerequisite for the autonomy and the fault tolerance of the system.
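
    A toy 1-D sketch of the error-coupled learning rate described above is given below; the "plant", filter constant, and gain are illustrative stand-ins for the visual-feedback arm-positioning loop, not the local linear map model itself.

```python
# Toy 1-D stand-in for the arm-positioning loop: the controller tracks a target,
# and the learning rate is coupled to an exponentially filtered positioning error.
target, estimate = 1.0, 0.0
filtered_error, alpha = 1.0, 0.1      # filter state and smoothing constant (illustrative)

for step in range(200):
    error = abs(target - estimate)
    filtered_error = (1 - alpha) * filtered_error + alpha * error
    learning_rate = 0.5 * filtered_error          # rate rises and falls with the error
    estimate += learning_rate * (target - estimate)
    if step == 100:                               # simulated "camera moved": target jumps
        target = 2.0
    if step % 50 == 0:
        print(step, round(error, 4), round(learning_rate, 4))
```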

  3. An Automated Method to Generate e-Learning Quizzes from Online Language Learner Writing

    ERIC Educational Resources Information Center

    Flanagan, Brendan; Yin, Chengjiu; Hirokawa, Sachio; Hashimoto, Kiyota; Tabata, Yoshiyuki

    2013-01-01

    In this paper, the entries of Lang-8, which is a Social Networking Site (SNS) site for learning and practicing foreign languages, were analyzed and found to contain similar rates of errors for most error categories reported in previous research. These similarly rated errors were then processed using an algorithm to determine corrections suggested…

  4. 45 CFR 286.205 - How will we determine if a Tribe fails to meet the minimum work participation rate(s)?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., financial records, and automated data systems; (ii) The data are free from computational errors and are internally...

  5. DNA Barcoding through Quaternary LDPC Codes

    PubMed Central

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10−2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10−9 at the expense of a rate of read losses just in the order of 10−6. PMID:26492348
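
    The trade-off discussed above, read losses versus sample misidentification, can be illustrated with a minimal sketch that uses plain minimum-Hamming-distance assignment instead of an actual LDPC decoder (the barcodes and tolerance below are made up): a read is kept only if its best match is unambiguous and within tolerance, otherwise it is dropped as a read loss.

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def assign(read, barcodes, max_dist=2):
    """Assign a read to the closest barcode, or None (read loss) if the best
    match is ambiguous or farther than max_dist. Not an LDPC decoder."""
    dists = sorted((hamming(read, bc), bc) for bc in barcodes)
    best_d, best_bc = dists[0]
    if best_d > max_dist or (len(dists) > 1 and dists[1][0] == best_d):
        return None
    return best_bc

barcodes = ["ACGTACGT", "TTGACCAA", "GGCATGCA"]      # illustrative 8-nt barcodes
print(assign("ACGTACGA", barcodes))                  # 1 mismatch -> "ACGTACGT"
print(assign("TTTTTTTT", barcodes))                  # too far from all -> None (read loss)
```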

  6. DNA Barcoding through Quaternary LDPC Codes.

    PubMed

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10(-2) per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10(-9) at the expense of a rate of read losses just in the order of 10(-6).

  7. Human operator response to error-likely situations in complex engineering systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1988-01-01

    The causes of human error in complex systems are examined. First, a conceptual framework is provided in which two broad categories of error are discussed: errors of action, or slips, and errors of intention, or mistakes. Conditions in which slips and mistakes might be expected to occur are identified, based on existing theories of human error. Regarding the role of workload, it is hypothesized that workload may act as a catalyst for error. Two experiments are presented in which humans' response to error-likely situations were examined. Subjects controlled PLANT under a variety of conditions and periodically provided subjective ratings of mental effort. A complex pattern of results was obtained, which was not consistent with predictions. Generally, the results of this research indicate that: (1) humans respond to conditions in which errors might be expected by attempting to reduce the possibility of error, and (2) adaptation to conditions is a potent influence on human behavior in discretionary situations. Subjects' explanations for changes in effort ratings are also explored.

  8. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
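
    The full CCSDS RS/convolutional/Viterbi chain is too involved to reproduce here; as a hedged stand-in, the sketch below runs the same style of evaluation loop with a 3-bit repetition code over a binary symmetric channel, showing how redundancy buys a lower decoded bit error rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def ber_repetition(p_channel, n_bits=200_000, reps=3):
    """Decoded BER of an n-fold repetition code on a binary symmetric channel
    with raw crossover probability p_channel (stand-in for RS/convolutional coding)."""
    data = rng.integers(0, 2, size=n_bits)
    coded = np.repeat(data, reps)
    flips = rng.random(coded.size) < p_channel
    received = coded ^ flips
    decoded = received.reshape(-1, reps).sum(axis=1) > reps // 2   # majority vote
    return np.mean(decoded != data)

for p in (0.05, 0.02, 0.01):
    print(f"raw BER {p:.3f}  ->  decoded BER {ber_repetition(p):.5f}")
```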

  9. Mapping DNA polymerase errors by single-molecule sequencing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, David F.; Lu, Jenny; Chang, Seungwoo

    Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. This allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.

  10. Mapping DNA polymerase errors by single-molecule sequencing

    DOE PAGES

    Lee, David F.; Lu, Jenny; Chang, Seungwoo; ...

    2016-05-16

    Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. This allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.
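
    A hedged sketch of the barcode-based error removal described above: reads sharing a barcode are grouped, and a per-position majority vote discards sequencing errors that are not shared across reads of the same replication product. The barcodes, reads, and grouping scheme below are illustrative only.

```python
from collections import Counter, defaultdict

def consensus(reads):
    """Per-position majority vote across reads sharing a barcode."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*reads))

# (barcode, read) pairs; the two isolated sequencing errors below are not shared,
# so the consensus recovers the replication product and leaves only true
# polymerase errors to be scored against the template.
tagged_reads = [
    ("BC1", "ACGTACGT"),
    ("BC1", "ACGTACCT"),   # one isolated sequencing error
    ("BC1", "ACGAACGT"),   # another isolated sequencing error
    ("BC2", "TTGACCAA"),
    ("BC2", "TTGACCAA"),
    ("BC2", "TTGACCAA"),
]

groups = defaultdict(list)
for barcode, read in tagged_reads:
    groups[barcode].append(read)

for barcode, reads in groups.items():
    print(barcode, consensus(reads))
```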

  11. Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.

    PubMed

    Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean

    2017-06-01

    Background There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify the prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results The overall prescribing error rate was 9.2% out of 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. The use of electronic prescribing had a higher prescribing error rate than manual prescribing (16.9 vs 8.2%, p < 0.05). Twenty eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficit. The most common contributing factors were due to lack of supervision or of knowledge. Conclusions Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.

  12. The assessment of cognitive errors using an observer-rated method.

    PubMed

    Drapeau, Martin

    2014-01-01

    Cognitive Errors (CEs) are a key construct in cognitive behavioral therapy (CBT). Integral to CBT is that individuals with depression process information in an overly negative or biased way, and that this bias is reflected in specific depressotypic CEs which are distinct from normal information processing. Despite the importance of this construct in CBT theory, practice, and research, few methods are available to researchers and clinicians to reliably identify CEs as they occur. In this paper, the author presents a rating system, the Cognitive Error Rating Scale, which can be used by trained observers to identify and assess the cognitive errors of patients or research participants in vivo, i.e., as they are used or reported by the patients or participants. The method is described, including some of the more important rating conventions to be considered when using the method. This paper also describes the 15 cognitive errors assessed, and the different summary scores, including valence of the CEs, that can be derived from the method.

  13. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    PubMed

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.
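
    As a baseline for the BER comparisons mentioned above, the closed-form bit error probability of uncoded BPSK over AWGN, p_b = Q(sqrt(2 Eb/N0)), can be evaluated as follows; this textbook reference curve is not the paper's LDPC or Nakagami-fading analysis.

```python
import math

def q_function(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber(ebn0_db):
    """Uncoded BPSK bit error probability over AWGN: Q(sqrt(2*Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return q_function(math.sqrt(2 * ebn0))

for snr in (0, 4, 8, 10):
    print(snr, "dB ->", f"{bpsk_ber(snr):.2e}")
```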

  14. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    PubMed Central

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732

  15. Parental Cognitive Errors Mediate Parental Psychopathology and Ratings of Child Inattention.

    PubMed

    Haack, Lauren M; Jiang, Yuan; Delucchi, Kevin; Kaiser, Nina; McBurnett, Keith; Hinshaw, Stephen; Pfiffner, Linda

    2017-09-01

    We investigate the Depression-Distortion Hypothesis in a sample of 199 school-aged children with ADHD-Predominantly Inattentive presentation (ADHD-I) by examining relations and cross-sectional mediational pathways between parental characteristics (i.e., levels of parental depressive and ADHD symptoms) and parental ratings of child problem behavior (inattention, sluggish cognitive tempo, and functional impairment) via parental cognitive errors. Results demonstrated a positive association between parental factors and parental ratings of inattention, as well as a mediational pathway between parental depressive and ADHD symptoms and parental ratings of inattention via parental cognitive errors. Specifically, higher levels of parental depressive and ADHD symptoms predicted higher levels of cognitive errors, which in turn predicted higher parental ratings of inattention. Findings provide evidence for core tenets of the Depression-Distortion Hypothesis, which state that parents with high rates of psychopathology hold negative schemas for their child's behavior and subsequently, report their child's behavior as more severe. © 2016 Family Process Institute.

  16. Decoy-state quantum key distribution with more than three types of photon intensity pulses

    NASA Astrophysics Data System (ADS)

    Chau, H. F.

    2018-04-01

    The decoy-state method closes source security loopholes in quantum key distribution (QKD) using a laser source. In this method, accurate estimates of the detection rates of vacuum and single-photon events plus the error rate of single-photon events are needed to give a good enough lower bound of the secret key rate. Nonetheless, the current estimation method for these detection and error rates, which uses three types of photon intensities, is accurate up to about 1 % relative error. Here I report an experimentally feasible way that greatly improves these estimates and hence increases the one-way key rate of the BB84 QKD protocol with unbiased bases selection by at least 20% on average in realistic settings. The major tricks are the use of more than three types of photon intensities plus the fact that estimating bounds of the above detection and error rates is numerically stable, although these bounds are related to the inversion of a high condition number matrix.

  17. Families as Partners in Hospital Error and Adverse Event Surveillance

    PubMed Central

    Khan, Alisa; Coffey, Maitreya; Litterer, Katherine P.; Baird, Jennifer D.; Furtak, Stephannie L.; Garcia, Briana M.; Ashland, Michele A.; Calaman, Sharon; Kuzma, Nicholas C.; O’Toole, Jennifer K.; Patel, Aarti; Rosenbluth, Glenn; Destino, Lauren A.; Everhart, Jennifer L.; Good, Brian P.; Hepps, Jennifer H.; Dalal, Anuj K.; Lipsitz, Stuart R.; Yoon, Catherine S.; Zigmont, Katherine R.; Srivastava, Rajendu; Starmer, Amy J.; Sectish, Theodore C.; Spector, Nancy D.; West, Daniel C.; Landrigan, Christopher P.

    2017-01-01

    IMPORTANCE Medical errors and adverse events (AEs) are common among hospitalized children. While clinician reports are the foundation of operational hospital safety surveillance and a key component of multifaceted research surveillance, patient and family reports are not routinely gathered. We hypothesized that a novel family-reporting mechanism would improve incident detection. OBJECTIVE To compare error and AE rates (1) gathered systematically with vs without family reporting, (2) reported by families vs clinicians, and (3) reported by families vs hospital incident reports. DESIGN, SETTING, AND PARTICIPANTS We conducted a prospective cohort study including the parents/caregivers of 989 hospitalized patients 17 years and younger (total 3902 patient-days) and their clinicians from December 2014 to July 2015 in 4 US pediatric centers. Clinician abstractors identified potential errors and AEs by reviewing medical records, hospital incident reports, and clinician reports as well as weekly and discharge Family Safety Interviews (FSIs). Two physicians reviewed and independently categorized all incidents, rating severity and preventability (agreement, 68%–90%; κ, 0.50–0.68). Discordant categorizations were reconciled. Rates were generated using Poisson regression estimated via generalized estimating equations to account for repeated measures on the same patient. MAIN OUTCOMES AND MEASURES Error and AE rates. RESULTS Overall, 746 parents/caregivers consented for the study. Of these, 717 completed FSIs. Their median (interquartile range) age was 32.5 (26–40) years; 380 (53.0%) were nonwhite, 566 (78.9%) were female, 603 (84.1%) were English speaking, and 380 (53.0%) had attended college. Of 717 parents/caregivers completing FSIs, 185 (25.8%) reported a total of 255 incidents, which were classified as 132 safety concerns (51.8%), 102 nonsafety-related quality concerns (40.0%), and 21 other concerns (8.2%). These included 22 preventable AEs (8.6%), 17 nonharmful medical errors (6.7%), and 11 nonpreventable AEs (4.3%) on the study unit. In total, 179 errors and 113 AEs were identified from all sources. Family reports included 8 otherwise unidentified AEs, including 7 preventable AEs. Error rates with family reporting (45.9 per 1000 patient-days) were 1.2-fold (95%CI, 1.1–1.2) higher than rates without family reporting (39.7 per 1000 patient-days). Adverse event rates with family reporting (28.7 per 1000 patient-days) were 1.1-fold (95%CI, 1.0–1.2; P=.006) higher than rates without (26.1 per 1000 patient-days). Families and clinicians reported similar rates of errors (10.0 vs 12.8 per 1000 patient-days; relative rate, 0.8; 95%CI, .5–1.2) and AEs (8.5 vs 6.2 per 1000 patient-days; relative rate, 1.4; 95%CI, 0.8–2.2). Family-reported error rates were 5.0-fold (95%CI, 1.9–13.0) higher and AE rates 2.9-fold (95% CI, 1.2–6.7) higher than hospital incident report rates. CONCLUSIONS AND RELEVANCE Families provide unique information about hospital safety and should be included in hospital safety surveillance in order to facilitate better design and assessment of interventions to improve safety. PMID:28241211

  18. Star tracker error analysis: Roll-to-pitch nonorthogonality

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1979-01-01

    An error analysis is described on an anomaly isolated in the star tracker software line of sight (LOS) rate test. The LOS rate cosine was found to be greater than one in certain cases which implied that either one or both of the star tracker measured end point unit vectors used to compute the LOS rate cosine had lengths greater than unity. The roll/pitch nonorthogonality matrix in the TNB CL module of the IMU software is examined as the source of error.
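
    A small numeric illustration of the anomaly: the LOS rate cosine, computed as the dot product of the two measured end-point vectors, can only exceed unity if at least one vector has length greater than one, for example after being stretched by a nonorthogonal transformation. The vectors below are made up.

```python
import numpy as np

def los_rate_cosine(u1, u2):
    """Dot product of the two measured end-point 'unit' vectors."""
    return float(np.dot(u1, u2))

u1 = np.array([0.6, 0.8, 0.0])                   # a true unit vector
u2_ok = np.array([0.6, 0.8, 0.0])
u2_bad = 1.02 * u2_ok                            # slightly stretched by a transform error

print(los_rate_cosine(u1, u2_ok))                               # 1.0
print(los_rate_cosine(u1, u2_bad), np.linalg.norm(u2_bad))      # > 1 because |u2| > 1
```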

  19. Failure analysis and modeling of a VAXcluster system

    NASA Technical Reports Server (NTRS)

    Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.

    1990-01-01

    This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources. This is despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors, but also failures occur in bursts. Approximately 40 percent of all failures occur in bursts and involved multiple machines. This result indicates that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs 0.74 for disk errors). The expected reward rate (reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.

  20. Error monitoring issues for common channel signaling

    NASA Astrophysics Data System (ADS)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far and includes the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion on their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
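
    A hedged sketch of a leaky-bucket signal-unit error rate monitor of the kind analyzed above is given below: the counter increments on each errored signal unit, leaks by one every D signal units, and triggers a changeover at threshold T. The values T = 64 and D = 256 are commonly quoted SS7 SUERM defaults assumed here, not parameters taken from this paper.

```python
import random

def leaky_bucket_changeover(error_flags, threshold=64, leak_interval=256):
    """Return the index at which a changeover would be triggered, or None.
    error_flags: iterable of booleans, one per received signal unit."""
    counter = 0
    since_leak = 0
    for i, errored in enumerate(error_flags):
        if errored:
            counter += 1
            if counter >= threshold:
                return i
        since_leak += 1
        if since_leak == leak_interval:
            counter = max(0, counter - 1)   # leak one count every leak_interval units
            since_leak = 0
    return None

random.seed(0)
link = (random.random() < 0.01 for _ in range(2_000_000))   # 1% signal-unit error rate
print(leaky_bucket_changeover(link))   # index of the triggered changeover, if any
```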

  1. English speech sound development in preschool-aged children from bilingual English-Spanish environments.

    PubMed

    Gildersleeve-Neumann, Christina E; Kester, Ellen S; Davis, Barbara L; Peña, Elizabeth D

    2008-07-01

    English speech acquisition by typically developing 3- to 4-year-old children with monolingual English was compared to English speech acquisition by typically developing 3- to 4-year-old children with bilingual English-Spanish backgrounds. We predicted that exposure to Spanish would not affect the English phonetic inventory but would increase error frequency and type in bilingual children. Single-word speech samples were collected from 33 children. Phonetically transcribed samples for the 3 groups (monolingual English children, English-Spanish bilingual children who were predominantly exposed to English, and English-Spanish bilingual children with relatively equal exposure to English and Spanish) were compared at 2 time points and for change over time for phonetic inventory, phoneme accuracy, and error pattern frequencies. Children demonstrated similar phonetic inventories. Some bilingual children produced Spanish phonemes in their English and produced few consonant cluster sequences. Bilingual children with relatively equal exposure to English and Spanish averaged more errors than did bilingual children who were predominantly exposed to English. Both bilingual groups showed higher error rates than English-only children overall, particularly for syllable-level error patterns. All language groups decreased in some error patterns, although the ones that decreased were not always the same across language groups. Some group differences of error patterns and accuracy were significant. Vowel error rates did not differ by language group. Exposure to English and Spanish may result in a higher English error rate in typically developing bilinguals, including the application of Spanish phonological properties to English. Slightly higher error rates are likely typical for bilingual preschool-aged children. Change over time at these time points for all 3 groups was similar, suggesting that all will reach an adult-like system in English with exposure and practice.

  2. Antidepressant and antipsychotic medication errors reported to United States poison control centers.

    PubMed

    Kamboj, Alisha; Spiller, Henry A; Casavant, Marcel J; Chounthirath, Thitphalak; Hodges, Nichole L; Smith, Gary A

    2018-05-08

    To investigate unintentional therapeutic medication errors associated with antidepressant and antipsychotic medications in the United States and expand current knowledge on the types of errors commonly associated with these medications. A retrospective analysis of non-health care facility unintentional therapeutic errors associated with antidepressant and antipsychotic medications was conducted using data from the National Poison Data System. From 2000 to 2012, poison control centers received 207 670 calls reporting unintentional therapeutic errors associated with antidepressant or antipsychotic medications that occurred outside of a health care facility, averaging 15 975 errors annually. The rate of antidepressant-related errors increased by 50.6% from 2000 to 2004, decreased by 6.5% from 2004 to 2006, and then increased 13.0% from 2006 to 2012. The rate of errors related to antipsychotic medications increased by 99.7% from 2000 to 2004 and then increased by 8.8% from 2004 to 2012. Overall, 70.1% of reported errors occurred among adults, and 59.3% were among females. The medications most frequently associated with errors were selective serotonin reuptake inhibitors (30.3%), atypical antipsychotics (24.1%), and other types of antidepressants (21.5%). Most medication errors took place when an individual inadvertently took or was given a medication twice (41.0%), inadvertently took someone else's medication (15.6%), or took the wrong medication (15.6%). This study provides a comprehensive overview of non-health care facility unintentional therapeutic errors associated with antidepressant and antipsychotic medications. The frequency and rate of these errors increased significantly from 2000 to 2012. Given that use of these medications is increasing in the US, this study provides important information about the epidemiology of the associated medication errors. Copyright © 2018 John Wiley & Sons, Ltd.

  3. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative to SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  4. Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.

    PubMed

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

    Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused due to instrumentation and environmental issues rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment.
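
    A minimal sketch of the detection idea behind the QSF approach described above (not the authors' full EKF formulation): flag windows in which the three-axis magnetometer signal is nearly constant, so those segments can safely contribute attitude corrections while perturbed segments are ignored. The window length and variance threshold are illustrative assumptions.

      import numpy as np

      def quasi_static_windows(mag, window=50, var_threshold=0.05):
          """Flag sample windows where the 3-axis magnetometer signal is quasi-static.

          mag: (N, 3) array of magnetic field measurements.
          Returns a boolean array of length N marking samples inside quasi-static
          windows, which could then feed heading/attitude updates to an EKF while
          perturbed segments are excluded.
          """
          mag = np.asarray(mag, dtype=float)
          flags = np.zeros(len(mag), dtype=bool)
          for start in range(0, len(mag) - window + 1, window):
              seg = mag[start:start + window]
              # total variance across the three axes within the window
              if seg.var(axis=0).sum() < var_threshold:
                  flags[start:start + window] = True
          return flags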

  5. Use of Earth’s Magnetic Field for Mitigating Gyroscope Errors Regardless of Magnetic Perturbation

    PubMed Central

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

    Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused due to instrumentation and environmental issues rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth’s magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth’s magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment. PMID:22247672

  6. Prediction of pilot reserve attention capacity during air-to-air target tracking

    NASA Technical Reports Server (NTRS)

    Onstott, E. D.; Faulkner, W. H.

    1977-01-01

    Reserve attention capacity of a pilot was calculated using a pilot model that allocates exclusive model attention according to the ranking of task urgency functions whose variables are tracking error and error rate. The modeled task consisted of tracking a maneuvering target aircraft both vertically and horizontally, and when possible, performing a diverting side task which was simulated by the precise positioning of an electrical stylus and modeled as a task of constant urgency in the attention allocation algorithm. The urgency of the single loop vertical task is simply the magnitude of the vertical tracking error, while the multiloop horizontal task requires a nonlinear urgency measure of error and error rate terms. Comparison of model results with flight simulation data verified the computed model statistics of tracking error of both axes, lateral and longitudinal stick amplitude and rate, and side task episodes. Full data for the simulation tracking statistics as well as the explicit equations and structure of the urgency function multiaxis pilot model are presented.

  7. The Effects of Non-Normality on Type III Error for Comparing Independent Means

    ERIC Educational Resources Information Center

    Mendes, Mehmet

    2007-01-01

    The major objective of this study was to investigate the effects of non-normality on Type III error rates for the ANOVA F test and its three commonly recommended parametric counterparts, namely the Welch, Brown-Forsythe, and Alexander-Govern tests. These tests were therefore compared in terms of Type III error rates across a variety of population distributions,…

  8. RD Optimized, Adaptive, Error-Resilient Transmission of MJPEG2000-Coded Video over Multiple Time-Varying Channels

    NASA Astrophysics Data System (ADS)

    Bezan, Scott; Shirani, Shahram

    2006-12-01

    To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.
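
    The allocation step described above can be illustrated with a generic Lagrangian sketch: each coding unit offers several protection levels, each with an extra rate cost and an expected distortion over the noisy channel, and a multiplier sweep picks the per-unit levels that minimize total expected distortion within a rate budget. The (rate, distortion) tables below are placeholders, not the paper's RCPC code rates, and the paper's channel modelling is not reproduced here.

      # For each coding unit, candidate protection levels are given as
      # (extra_rate_bits, expected_distortion) pairs; stronger protection costs
      # more rate but lowers the expected distortion over the noisy channel.
      units = [
          [(0, 9.0), (120, 4.0), (260, 2.5)],   # unit 0
          [(0, 5.0), (120, 2.0), (260, 1.2)],   # unit 1
          [(0, 2.0), (120, 1.5), (260, 1.3)],   # unit 2
      ]

      def allocate(units, rate_budget):
          """Sweep the Lagrange multiplier and keep the best allocation under budget."""
          best = None
          for lam in [x * 0.005 for x in range(1, 400)]:
              choice = [min(levels, key=lambda rd: rd[1] + lam * rd[0]) for levels in units]
              rate = sum(r for r, _ in choice)
              dist = sum(d for _, d in choice)
              if rate <= rate_budget and (best is None or dist < best[1]):
                  best = (rate, dist, choice)
          return best

      print(allocate(units, rate_budget=300))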

  9. Errors in fluid therapy in medical wards.

    PubMed

    Mousavi, Maryam; Khalili, Hossein; Dashti-Khavidaki, Simin

    2012-04-01

    Intravenous fluid therapy remains an essential part of patients' care during hospitalization. There are only few studies that focused on fluid therapy in the hospitalized patients, and there is not any consensus statement about fluid therapy in patients who are hospitalized in medical wards. The aim of the present study was to assess intravenous fluid therapy status and related errors in the patients during the course of hospitalization in the infectious diseases wards of a referral teaching hospital. This study was conducted in the infectious diseases wards of Imam Khomeini Complex Hospital, Tehran, Iran. During a retrospective study, data related to intravenous fluid therapy were collected by two clinical pharmacists of infectious diseases from 2008 to 2010. Intravenous fluid therapy information including indication, type, volume and rate of fluid administration was recorded for each patient. An internal protocol for intravenous fluid therapy was designed based on literature review and available recommendations. The data related to patients' fluid therapy were compared with this protocol. The fluid therapy was considered appropriate if it was compatible with the protocol regarding indication of intravenous fluid therapy, type, electrolyte content and rate of fluid administration. Any mistake in the selection of fluid type, content, volume and rate of administration was considered as intravenous fluid therapy errors. Five hundred and ninety-six of medication errors were detected during the study period in the patients. Overall rate of fluid therapy errors was 1.3 numbers per patient during hospitalization. Errors in the rate of fluid administration (29.8%), incorrect fluid volume calculation (26.5%) and incorrect type of fluid selection (24.6%) were the most common types of errors. The patients' male sex, old age, baseline renal diseases, diabetes co-morbidity, and hospitalization due to endocarditis, HIV infection and sepsis are predisposing factors for the occurrence of fluid therapy errors in the patients. Our result showed that intravenous fluid therapy errors occurred commonly in the hospitalized patients especially in the medical wards. Improvement in knowledge and attention of health-care workers about these errors are essential for preventing of medication errors in aspect of fluid therapy.

  10. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two benchmark problems. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
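
    As a hedged sketch of the simplest of the three ideas, the averaging-based estimate: average the class-posterior estimates of several classifiers and use the plug-in quantity E[1 - max_k p_k(x)] as a Bayes-error estimate. The data set, classifier choices and ensemble size below are illustrative, and the correlation-based and disagreement-based refinements described in the abstract are not shown.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=4000, n_features=10, n_informative=6,
                                 n_classes=3, n_clusters_per_class=1, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      # An ensemble of posterior-probability estimators trained on the same data.
      members = [LogisticRegression(max_iter=1000),
                 MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=1),
                 MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=2)]
      posteriors = np.mean([m.fit(X_tr, y_tr).predict_proba(X_te) for m in members], axis=0)

      # Plug-in Bayes-error estimate from the averaged posteriors.
      bayes_error_estimate = np.mean(1.0 - posteriors.max(axis=1))
      ensemble_error = np.mean(posteriors.argmax(axis=1) != y_te)
      print(f"estimated Bayes error: {bayes_error_estimate:.3f}, ensemble error: {ensemble_error:.3f}")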

  11. Quantitative assessment of hit detection and confirmation in single and duplicate high-throughput screenings.

    PubMed

    Wu, Zhijin; Liu, Dongmei; Sui, Yunxia

    2008-02-01

    The process of identifying active targets (hits) in high-throughput screening (HTS) usually involves 2 steps: first, removing or adjusting for systematic variation in the measurement process so that extreme values represent strong biological activity instead of systematic biases such as plate effect or edge effect and, second, choosing a meaningful cutoff on the calculated statistic to declare positive compounds. Both false-positive and false-negative errors are inevitable in this process. Common control or estimation of error rates is often based on an assumption of normal distribution of the noise. The error rates in hit detection, especially false-negative rates, are hard to verify because in most assays, only compounds selected in primary screening are followed up in confirmation experiments. In this article, the authors take advantage of a quantitative HTS experiment in which all compounds are tested 42 times over a wide range of 14 concentrations so true positives can be found through a dose-response curve. Using the activity status defined by dose curve, the authors analyzed the effect of various data-processing procedures on the sensitivity and specificity of hit detection, the control of error rate, and hit confirmation. A new summary score is proposed and demonstrated to perform well in hit detection and useful in confirmation rate estimation. In general, adjusting for positional effects is beneficial, but a robust test can prevent overadjustment. Error rates estimated based on normal assumption do not agree with actual error rates, for the tails of noise distribution deviate from normal distribution. However, false discovery rate based on empirically estimated null distribution is very close to observed false discovery proportion.
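
    The two generic steps named at the start of the abstract can be sketched as follows: normalize raw readings with a robust per-plate z-score so that plate-to-plate biases do not masquerade as activity, then apply a cutoff to declare hits. The cutoff, plate sizes and simulated data are illustrative assumptions, not the article's summary score.

      import numpy as np

      def robust_z(plate):
          """Robust per-plate z-score: center by the median and scale by the MAD,
          so a handful of true actives does not distort the normalization."""
          plate = np.asarray(plate, dtype=float)
          med = np.median(plate)
          mad = np.median(np.abs(plate - med)) * 1.4826   # consistent with sigma under normality
          return (plate - med) / mad

      rng = np.random.default_rng(0)
      plates = [rng.normal(loc=100 + 5 * p, scale=10, size=384) for p in range(4)]  # plate effects
      plates[2][:3] -= 80                                  # three true actives on plate 2
      scores = np.concatenate([robust_z(p) for p in plates])

      hits = np.flatnonzero(scores < -3)                   # inhibition assay: strong negative scores
      print("declared hits:", hits)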

  12. Outpatient Prescribing Errors and the Impact of Computerized Prescribing

    PubMed Central

    Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W

    2005-01-01

    Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752

  13. The Use of Categorized Time-Trend Reporting of Radiation Oncology Incidents: A Proactive Analytical Approach to Improving Quality and Safety Over Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, Anthony, E-mail: anthony.arnold@sesiahs.health.nsw.gov.a; Delaney, Geoff P.; Cassapi, Lynette

    Purpose: Radiotherapy is a common treatment for cancer patients. Although incidence of error is low, errors can be severe or affect significant numbers of patients. In addition, errors will often not manifest until long periods after treatment. This study describes the development of an incident reporting tool that allows categorical analysis and time trend reporting, covering the first 3 years of use. Methods and Materials: A radiotherapy-specific incident analysis system was established. Staff members were encouraged to report actual errors and near-miss events detected at prescription, simulation, planning, or treatment phases of radiotherapy delivery. Trend reporting was reviewed monthly. Results: Reports were analyzed for the first 3 years of operation (May 2004-2007). A total of 688 reports was received during the study period. The actual error rate was 0.2% per treatment episode. During the study period, the actual error rates decreased significantly from 1% per year to 0.3% per year (p < 0.001), as did the total event report rates (p < 0.0001). There were 3.5 times as many near misses reported compared with actual errors. Conclusions: This system has allowed real-time analysis of events within a radiation oncology department, leading to a reduced error rate through a focus on learning and prevention from the near-miss reports. Plans are underway to develop this reporting tool for Australia and New Zealand.

  14. Syndromic surveillance for health information system failures: a feasibility study.

    PubMed

    Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico

    2013-05-01

    To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65-0.85. Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures.
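
    A hedged sketch of monitoring one of the indices (daily record counts) with a simple Shewhart-style control chart is shown below. The abstract describes time-series models of the indices; for brevity this sketch assumes a stationary baseline, and the baseline length, alarm multiplier and simulated failure are illustrative.

      import numpy as np

      def control_chart_alarms(counts, baseline_days=60, k=3.0):
          """Flag days whose count falls outside baseline mean +/- k standard deviations.

          counts: daily totals of, e.g., laboratory records created.
          The first `baseline_days` are used to estimate in-control behaviour.
          """
          counts = np.asarray(counts, dtype=float)
          mu, sigma = counts[:baseline_days].mean(), counts[:baseline_days].std(ddof=1)
          lower, upper = mu - k * sigma, mu + k * sigma
          return np.flatnonzero((counts[baseline_days:] < lower) |
                                (counts[baseline_days:] > upper)) + baseline_days

      rng = np.random.default_rng(1)
      daily = rng.poisson(lam=500, size=90).astype(float)
      daily[75] *= 0.6          # simulated 40% record loss on day 75
      print("alarm days:", control_chart_alarms(daily))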

  15. Renal Drug Dosing

    PubMed Central

    Vogel, Erin A.; Billups, Sarah J.; Herner, Sheryl J.

    2016-01-01

    Summary Objective The purpose of this study was to compare the effectiveness of an outpatient renal dose adjustment alert via a computerized provider order entry (CPOE) clinical decision support system (CDSS) versus a CDSS with alerts made to dispensing pharmacists. Methods This was a retrospective analysis of patients with renal impairment and 30 medications that are contraindicated or require dose-adjustment in such patients. The primary outcome was the rate of renal dosing errors for study medications that were dispensed between August and December 2013, when a pharmacist-based CDSS was in place, versus August through December 2014, when a prescriber-based CDSS was in place. A dosing error was defined as a prescription for one of the study medications dispensed to a patient where the medication was contraindicated or improperly dosed based on the patient’s renal function. The denominator was all prescriptions for the study medications dispensed during each respective study period. Results During the pharmacist- and prescriber-based CDSS study periods, 49,054 and 50,678 prescriptions, respectively, were dispensed for one of the included medications. Of these, 878 (1.8%) and 758 (1.5%) prescriptions were dispensed to patients with renal impairment in the respective study periods. Patients in each group were similar with respect to age, sex, and renal function stage. Overall, the five-month error rate was 0.38%. Error rates were similar between the two groups: 0.36% and 0.40% in the pharmacist- and prescriber-based CDSS, respectively (p=0.523). The medication with the highest error rate was dofetilide (0.51% overall) while the medications with the lowest error rate were dabigatran, fondaparinux, and spironolactone (0.00% overall). Conclusions Prescriber- and pharmacist-based CDSS provided comparable, low rates of potential medication errors. Future studies should be undertaken to examine patient benefits of the prescriber-based CDSS. PMID:27466041

  16. Publication bias was not a good reason to discourage trials with low power.

    PubMed

    Borm, George F; den Heijer, Martin; Zielhuis, Gerhard A

    2009-01-01

    The objective was to investigate whether it is justified to discourage trials with less than 80% power. Trials with low power are unlikely to produce conclusive results, but their findings can be used by pooling them in a meta-analysis. However, such an analysis may be biased, because trials with low power are likely to have a nonsignificant result and are less likely to be published than trials with a statistically significant outcome. We simulated several series of studies with varying degrees of publication bias and then calculated the "real" one-sided type I error and the bias of meta-analyses with a "nominal" error rate (significance level) of 2.5%. In single trials, in which heterogeneity was set at zero, low, and high, the error rates were 2.3%, 4.7%, and 16.5%, respectively. In multiple trials with 80%-90% power and a publication rate of 90% when the results were nonsignificant, the error rates could be as high as 5.1%. When the power was 50% and the publication rate of non-significant results was 60%, the error rates did not exceed 5.3%, whereas the bias was at most 15% of the difference used in the power calculation. The impact of publication bias does not warrant the exclusion of trials with 50% power.
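
    A minimal simulation in the spirit of the abstract, with illustrative parameters rather than the authors' settings: generate many small two-arm trials under the null hypothesis, publish non-significant trials only with some probability, pool the published trials with a fixed-effect meta-analysis, and record how often the pooled one-sided test is "significant" (the realized type I error at a nominal 2.5% level).

      import numpy as np
      from scipy import stats

      def realized_type1_error(n_per_arm=30, trials_per_meta=10, pub_prob_nonsig=0.6,
                               n_meta=2000, alpha=0.025, seed=0):
          rng = np.random.default_rng(seed)
          rejections = 0
          for _ in range(n_meta):
              diffs, ses = [], []
              for _ in range(trials_per_meta):
                  a = rng.normal(size=n_per_arm)          # null: no treatment effect
                  b = rng.normal(size=n_per_arm)
                  diff = b.mean() - a.mean()
                  se = np.sqrt(a.var(ddof=1) / n_per_arm + b.var(ddof=1) / n_per_arm)
                  significant = diff / se > stats.norm.ppf(1 - alpha)
                  if significant or rng.random() < pub_prob_nonsig:   # publication filter
                      diffs.append(diff)
                      ses.append(se)
              if not diffs:
                  continue
              w = 1.0 / np.square(ses)                     # fixed-effect meta-analysis
              pooled = np.sum(w * diffs) / np.sum(w)
              pooled_se = np.sqrt(1.0 / np.sum(w))
              rejections += pooled / pooled_se > stats.norm.ppf(1 - alpha)
          return rejections / n_meta

      print("realized one-sided type I error:", realized_type1_error())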

  17. Medication Errors in Vietnamese Hospitals: Prevalence, Potential Outcome and Associated Factors

    PubMed Central

    Nguyen, Huong-Thao; Nguyen, Tuan-Dung; van den Heuvel, Edwin R.; Haaijer-Ruskamp, Flora M.; Taxis, Katja

    2015-01-01

    Background Evidence from developed countries showed that medication errors are common and harmful. Little is known about medication errors in resource-restricted settings, including Vietnam. Objectives To determine the prevalence and potential clinical outcome of medication preparation and administration errors, and to identify factors associated with errors. Methods This was a prospective study conducted on six wards in two urban public hospitals in Vietnam. Data of preparation and administration errors of oral and intravenous medications was collected by direct observation, 12 hours per day on 7 consecutive days, on each ward. Multivariable logistic regression was applied to identify factors contributing to errors. Results In total, 2060 out of 5271 doses had at least one error. The error rate was 39.1% (95% confidence interval 37.8%- 40.4%). Experts judged potential clinical outcomes as minor, moderate, and severe in 72 (1.4%), 1806 (34.2%) and 182 (3.5%) doses. Factors associated with errors were drug characteristics (administration route, complexity of preparation, drug class; all p values < 0.001), and administration time (drug round, p = 0.023; day of the week, p = 0.024). Several interactions between these factors were also significant. Nurse experience was not significant. Higher error rates were observed for intravenous medications involving complex preparation procedures and for anti-infective drugs. Slightly lower medication error rates were observed during afternoon rounds compared to other rounds. Conclusions Potentially clinically relevant errors occurred in more than a third of all medications in this large study conducted in a resource-restricted setting. Educational interventions, focusing on intravenous medications with complex preparation procedure, particularly antibiotics, are likely to improve patient safety. PMID:26383873

  18. [Validation of a method for notifying and monitoring medication errors in pediatrics].

    PubMed

    Guerrero-Aznar, M D; Jiménez-Mesa, E; Cotrina-Luque, J; Villalba-Moreno, A; Cumplido-Corbacho, R; Fernández-Fernández, L

    2014-12-01

    To analyze the impact of a multidisciplinary and decentralized safety committee in the pediatric management unit, and the joint implementation of a computing network application for reporting medication errors, monitoring the follow-up of the errors, and an analysis of the improvements introduced. An observational, descriptive, cross-sectional, pre-post intervention study was performed. An analysis was made of medication errors reported to the central safety committee in the twelve months prior to introduction, and those reported to the decentralized safety committee in the management unit in the nine months after implementation, using the computer application, and the strategies generated by the analysis of reported errors. Number of reported errors/10,000 days of stay, number of reported errors with harm per 10,000 days of stay, types of error, categories based on severity, stage of the process, and groups involved in the notification of medication errors. Reported medication errors increased 4.6-fold, from 7.6 notifications of medication errors per 10,000 days of stay in the pre-intervention period to 36 in the post-intervention period, rate ratio 0.21 (95% CI; 0.11-0.39) (P<.001). The rate of reported medication errors with harm or requiring monitoring per 10,000 days of stay was virtually unchanged from one period to the other, rate ratio 0.77 (95% CI; 0.31-1.91) (P>.05). The notification of potential errors or errors without harm per 10,000 days of stay increased 17.4-fold (rate ratio 0.005, 95% CI; 0.001-0.026, P<.001). The increase in medication errors notified in the post-intervention period is a reflection of an increase in the motivation of health professionals to report errors through this new method. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.

  19. WE-H-BRC-09: Simulated Errors in Mock Radiotherapy Plans to Quantify the Effectiveness of the Physics Plan Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopan, O; Kalet, A; Smith, W

    2016-06-15

    Purpose: A standard tool for ensuring the quality of radiation therapy treatments is the initial physics plan review. However, little is known about its performance in practice. The goal of this study is to measure the effectiveness of physics plan review by introducing simulated errors into “mock” treatment plans and measuring the performance of plan review by physicists. Methods: We generated six mock treatment plans containing multiple errors. These errors were based on incident learning system data both within the department and internationally (SAFRON). These errors were scored for severity and frequency. Those with the highest scores were included in the simulations (13 errors total). Observer bias was minimized using a multiple co-correlated distractor approach. Eight physicists reviewed these plans for errors, with each physicist reviewing, on average, 3/6 plans. The confidence interval for the proportion of errors detected was computed using the Wilson score interval. Results: Simulated errors were detected in 65% of reviews [51–75%] (95% confidence interval [CI] in brackets). The following error scenarios had the highest detection rates: incorrect isocenter in DRRs/CBCT (91% [73–98%]) and a planned dose different from the prescribed dose (100% [61–100%]). Errors with low detection rates involved incorrect field parameters in record and verify system (38%, [18–61%]) and incorrect isocenter localization in planning system (29% [8–64%]). Though pre-treatment QA failure was reliably identified (100%), less than 20% of participants reported the error that caused the failure. Conclusion: This is one of the first quantitative studies of error detection. Although physics plan review is a key safety measure and can identify some errors with high fidelity, other errors are more challenging to detect. These data will guide future work on standardization and automation. Creating new checks or improving existing ones (i.e., via automation) will help in detecting those errors with low detection rates.
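
    For reference, the Wilson score interval named in the abstract can be computed as below; the counts in the usage line are illustrative, not the study's data.

      from math import sqrt

      def wilson_interval(successes, n, z=1.96):
          """95% Wilson score interval for a binomial proportion."""
          p_hat = successes / n
          denom = 1 + z**2 / n
          centre = (p_hat + z**2 / (2 * n)) / denom
          half = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
          return centre - half, centre + half

      # e.g. 20 of 31 simulated errors detected (illustrative counts)
      print(wilson_interval(20, 31))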

  20. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    PubMed

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) which can be yielded when sample size and allocation rate to the treatment arms can be modified in an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  1. Classification of echolocation clicks from odontocetes in the Southern California Bight.

    PubMed

    Roch, Marie A; Klinck, Holger; Baumann-Pickering, Simone; Mellinger, David K; Qui, Simon; Soldevilla, Melissa S; Hildebrand, John A

    2011-01-01

    This study presents a system for classifying echolocation clicks of six species of odontocetes in the Southern California Bight: Visually confirmed bottlenose dolphins, short- and long-beaked common dolphins, Pacific white-sided dolphins, Risso's dolphins, and presumed Cuvier's beaked whales. Echolocation clicks are represented by cepstral feature vectors that are classified by Gaussian mixture models. A randomized cross-validation experiment is designed to provide conditions similar to those found in a field-deployed system. To prevent matched conditions from inappropriately lowering the error rate, echolocation clicks associated with a single sighting are never split across the training and test data. Sightings are randomly permuted before assignment to folds in the experiment. This allows different combinations of the training and test data to be used while keeping data from each sighting entirely in the training or test set. The system achieves a mean error rate of 22% across 100 randomized three-fold cross-validation experiments. Four of the six species had mean error rates lower than the overall mean, with the presumed Cuvier's beaked whale clicks showing the best performance (<2% error rate). Long-beaked common and bottlenose dolphins proved the most difficult to classify, with mean error rates of 53% and 68%, respectively.
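
    The evaluation idea emphasized in the abstract, keeping every sighting entirely within either the training or the test partition so that matched recording conditions do not leak between them, can be sketched with grouped cross-validation and per-species Gaussian mixture models. The feature dimensionality, mixture size and covariance type below are placeholders, not the study's settings.

      import numpy as np
      from sklearn.mixture import GaussianMixture
      from sklearn.model_selection import GroupKFold

      def gmm_click_error_rate(features, labels, groups, n_components=8, n_splits=3):
          """features: (N, D) cepstral vectors; labels: species ids; groups: sighting ids.

          GroupKFold keeps all clicks from one sighting in the same fold, mimicking
          the sighting-level splits used in the study. Returns the mean error rate
          across folds.
          """
          errors = []
          for train, test in GroupKFold(n_splits=n_splits).split(features, labels, groups):
              models = {}
              for sp in np.unique(labels[train]):
                  models[sp] = GaussianMixture(n_components=n_components,
                                               covariance_type="diag",
                                               random_state=0).fit(features[train][labels[train] == sp])
              # log-likelihood of each test click under each species model
              scores = np.column_stack([models[sp].score_samples(features[test])
                                        for sp in sorted(models)])
              predicted = np.array(sorted(models))[scores.argmax(axis=1)]
              errors.append(np.mean(predicted != labels[test]))
          return float(np.mean(errors))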

  2. Comparison of disagreement and error rates for three types of interdepartmental consultations.

    PubMed

    Renshaw, Andrew A; Gould, Edwin W

    2005-12-01

    Previous studies have documented a relatively high rate of disagreement for interdepartmental consultations, but follow-up is limited. We reviewed the results of 3 types of interdepartmental consultations in our hospital during a 2-year period, including 328 incoming, 928 pathologist-generated outgoing, and 227 patient- or clinician-generated outgoing consults. The disagreement rate was significantly higher for incoming consults (10.7%) than for outgoing pathologist-generated consults (5.9%) (P = .06). Disagreement rates for outgoing patient- or clinician-generated consults were not significantly different from either other type (7.9%). Additional consultation, biopsy, or testing follow-up was available for 19 (54%) of 35, 14 (25%) of 55, and 6 (33%) of 18 incoming, outgoing pathologist-generated, and outgoing patient- or clinician-generated consults with disagreements, respectively; the percentage of errors varied widely (15/19 [79%], 8/14 [57%], and 2/6 [33%], respectively), but differences were not significant (P >.05 for each). Review of the individual errors revealed specific diagnostic areas in which improvement in performance might be made. Disagreement rates for interdepartmental consultation ranged from 5.9% to 10.7%, but only 33% to 79% represented errors. Additional consultation, tissue, and testing results can aid in distinguishing disagreements from errors.

  3. Error-rate prediction for programmable circuits: methodology, tools and studied cases

    NASA Astrophysics Data System (ADS)

    Velazco, Raoul

    2013-05-01

    This work presents an approach to predict the error rates due to Single Event Upsets (SEU) occurring in programmable circuits as a consequence of the impact of energetic particles present in the environment in which the circuits operate. For a chosen application, the error rate is predicted by combining the results obtained from radiation ground testing with the results of fault injection campaigns performed off-beam, during which huge numbers of SEUs are injected during the execution of the studied application. The goal of this strategy is to obtain accurate results about different applications' error rates, without using particle accelerator facilities, thus significantly reducing the cost of the sensitivity evaluation. As a case study, this methodology was applied to a complex processor, the PowerPC 7448, executing a program issued from a real space application, and to a crypto-processor application implemented in an SRAM-based FPGA and accepted to be embedded in the payload of a scientific satellite of NASA. The accuracy of predicted error rates was confirmed by comparing, for the same circuit and application, predictions with measurements from radiation ground testing performed at the Cyclone cyclotron of the Heavy Ion Facility (HIF) at Louvain-la-Neuve (Belgium).
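
    A back-of-the-envelope sketch of how such a combination is typically done: a device-level upset cross-section from beam testing is multiplied by the environment's particle flux to get a raw SEU rate, which is then derated by the fraction of injected faults that actually corrupted the application in the fault-injection campaign. All numbers below are illustrative, not the paper's measurements.

      # Rough combination used in such predictions (illustrative numbers):
      #   device_cross_section  [cm^2/device]           from radiation ground testing
      #   particle_flux         [particles/(cm^2 * s)]  for the target environment
      #   derating              fraction of injected SEUs that corrupted the
      #                         application's result, from fault injection
      device_cross_section = 2.0e-9      # cm^2/device
      particle_flux = 5.0e-3             # particles/(cm^2 * s)
      derating = 0.12                    # 12% of injected upsets led to application errors

      upset_rate = device_cross_section * particle_flux       # raw SEU rate [1/s]
      application_error_rate = upset_rate * derating          # predicted error rate [1/s]
      seconds_per_day = 86400
      print("predicted application errors per day:", application_error_rate * seconds_per_day)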

  4. Analysis of Soft Error Rates in 65- and 28-nm FD-SOI Processes Depending on BOX Region Thickness and Body Bias by Monte-Carlo Based Simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Kuiyuan; Umehara, Shigehiro; Yamaguchi, Junki; Furuta, Jun; Kobayashi, Kazutoshi

    2016-08-01

    This paper analyzes how body bias and BOX region thickness affect soft error rates in 65-nm SOTB (Silicon on Thin BOX) and 28-nm UTBB (Ultra Thin Body and BOX) FD-SOI processes. Soft errors are induced by alpha-particle and neutron irradiation, and the results are then analyzed by Monte Carlo based simulation using PHITS-TCAD. The alpha-particle-induced single event upset (SEU) cross-section and neutron-induced soft error rate (SER) obtained by simulation are consistent with measurement results. We show that SERs decrease as BOX thickness increases in SOTB, while SERs in UTBB are independent of BOX thickness. We also find that SOTB develops a higher tolerance to soft errors when reverse body bias is applied, while UTBB becomes more susceptible.

  5. Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error.

    PubMed

    Raab, Stephen S; Grzybicki, Dana Marie; Sudilovsky, Daniel; Balassanian, Ronald; Janosky, Janine E; Vrbin, Colleen M

    2006-10-01

    Our objective was to determine whether the Toyota Production System process redesign resulted in diagnostic error reduction for patients who underwent cytologic evaluation of thyroid nodules. In this longitudinal, nonconcurrent cohort study, we compared the diagnostic error frequency of a thyroid aspiration service before and after implementation of error reduction initiatives consisting of adoption of a standardized diagnostic terminology scheme and an immediate interpretation service. A total of 2,424 patients underwent aspiration. Following terminology standardization, the false-negative rate decreased from 41.8% to 19.1% (P = .006), the specimen nondiagnostic rate increased from 5.8% to 19.8% (P < .001), and the sensitivity increased from 70.2% to 90.6% (P < .001). Cases with an immediate interpretation had a lower noninterpretable specimen rate than those without immediate interpretation (P < .001). Toyota process change led to significantly fewer diagnostic errors for patients who underwent thyroid fine-needle aspiration.

  6. Continuous quantum error correction for non-Markovian decoherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oreshkov, Ognyan; Brun, Todd A.; Communication Sciences Institute, University of Southern California, Los Angeles, California 90089

    2007-08-15

    We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.
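
    In symbols, the scalings quoted in the abstract can be written schematically as follows, where lambda (Gamma) denotes the bare coupling (decoherence) scale, R the error-correction rate, and proportionality constants are omitted; this is a paraphrase of the stated result, not the paper's derivation.

      % Effective suppression of decoherence by continuous error correction
      % (schematic restatement of the scalings quoted in the abstract)
      \[
        \lambda_{\mathrm{eff}} \;\propto\; \frac{\lambda}{R^{2}}
        \quad\text{(non-Markovian, Zeno regime)},
        \qquad
        \Gamma_{\mathrm{eff}} \;\propto\; \frac{\Gamma}{R}
        \quad\text{(Markovian noise)}.
      \]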

  7. Frozen section analysis of margins for head and neck tumor resections: reduction of sampling errors with a third histologic level.

    PubMed

    Olson, Stephen M; Hussaini, Mohammad; Lewis, James S

    2011-05-01

    Frozen section analysis is an essential tool for assessing margins intra-operatively to assure complete resection. Many institutions evaluate surgical defect edge tissue provided by the surgeon after the main lesion has been removed. With the increasing use of transoral laser microsurgery, this method is becoming even more prevalent. We sought to evaluate error rates at our large academic institution and to see if sampling errors could be reduced by the simple method change of taking an additional third section on these specimens. All head and neck tumor resection cases from January 2005 through August 2008 with margins evaluated by frozen section were identified by database search. These cases were analyzed by cutting two levels during frozen section and a third permanent section later. All resection cases from August 2008 through July 2009 were identified as well. These were analyzed by cutting three levels during frozen section (the third a 'much deeper' level) and a fourth permanent section later. Error rates for both of these periods were determined. Errors were separated into sampling and interpretation types. There were 4976 total frozen section specimens from 848 patients. The overall error rate was 2.4% for all frozen sections where just two levels were evaluated and was 2.5% when three levels were evaluated (P=0.67). The sampling error rate was 1.6% for two-level sectioning and 1.2% for three-level sectioning (P=0.42). However, when considering only the frozen section cases where tumor was ultimately identified (either at the time of frozen section or on permanent sections) the sampling error rate for two-level sectioning was 15.3 versus 7.4% for three-level sectioning. This difference was statistically significant (P=0.006). Cutting a single additional 'deeper' level at the time of frozen section identifies more tumor-bearing specimens and may reduce the number of sampling errors.

  8. Global Vertical Rates from VLBI

    NASA Technical Reports Server (NTRS)

    Ma, Chopo; MacMillan, D.; Petrov, L.

    2003-01-01

    The analysis of global VLBI observations provides vertical rates for 50 sites with formal errors less than 2 mm/yr and median formal error of 0.4 mm/yr. These sites are largely in Europe and North America with a few others in east Asia, Australia, South America and South Africa. The time interval of observations is up to 20 years. The error of the velocity reference frame is less than 0.5 mm/yr, but results from several sites with observations from more than one antenna suggest that the estimated vertical rates may have temporal variations or non-geophysical components. Comparisons with GPS rates and corresponding site position time series will be discussed.

  9. Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN

    NASA Astrophysics Data System (ADS)

    Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.

    2016-12-01

    In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 in 69 pixels over the Urmia Lake basin in northwestern Iran. Different analytical approaches and indexes are used to examine PERSIANN precision in detection and estimation of rainfall rate. The residuals are decomposed into Hit, Miss and FA estimation biases, while a continuous decomposition into systematic and random error components is also analyzed seasonally and categorically. A new interpretation of estimation accuracy, named "reliability of PERSIANN estimations", is introduced, while the changing manners of existing categorical/statistical measures and error components are also seasonally analyzed over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin as a semi-arid region in the Middle East, including the following: - The analyzed contingency table indexes indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of the existence of systematic error. - A low level of reliability is observed in PERSIANN estimations at different categories, mostly associated with a high level of FA error. However, it is observed that as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase. - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall than bias due to rainfall uncertainties. The level of systematic error also considerably increases in heavier rainfalls. It is also important to note that PERSIANN error characteristics vary with season due to the conditions and rainfall patterns of that season, which shows the necessity of a seasonally different approach for the calibration of this product. Overall, we believe that the error component analyses performed in this study can substantially help further local studies for post-calibration and bias reduction of PERSIANN estimations.
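
    A hedged sketch of the Hit/Miss/FA decomposition mentioned above, applied to a satellite series against a gauge reference; the rain/no-rain threshold is an assumption of the illustration, and the three components add up to the total bias only up to amounts falling below that threshold.

      import numpy as np

      def bias_components(sat, gauge, threshold=0.1):
          """Decompose the total bias (sat minus gauge totals) into hit, miss and
          false-alarm parts.

          sat, gauge: daily precipitation series of equal length; `threshold`
          separates rain days from no-rain days.
          """
          sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
          hit = (sat >= threshold) & (gauge >= threshold)
          miss = (sat < threshold) & (gauge >= threshold)
          false_alarm = (sat >= threshold) & (gauge < threshold)
          return {
              "hit_bias": np.sum(sat[hit] - gauge[hit]),
              "miss_bias": -np.sum(gauge[miss]),        # rain the satellite missed entirely
              "false_alarm_bias": np.sum(sat[false_alarm]),
              "total_bias": np.sum(sat) - np.sum(gauge),
          }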

  10. Feedback on prescribing errors to junior doctors: exploring views, problems and preferred methods.

    PubMed

    Bertels, Jeroen; Almoudaris, Alex M; Cortoos, Pieter-Jan; Jacklin, Ann; Franklin, Bryony Dean

    2013-06-01

    Prescribing errors are common in hospital inpatients. However, the literature suggests that doctors are often unaware of their errors as they are not always informed of them. It has been suggested that providing more feedback to prescribers may reduce subsequent error rates. Only few studies have investigated the views of prescribers towards receiving such feedback, or the views of hospital pharmacists as potential feedback providers. Our aim was to explore the views of junior doctors and hospital pharmacists regarding feedback on individual doctors' prescribing errors. Objectives were to determine how feedback was currently provided and any associated problems, to explore views on other approaches to feedback, and to make recommendations for designing suitable feedback systems. A large London NHS hospital trust. To explore views on current and possible feedback mechanisms, self-administered questionnaires were given to all junior doctors and pharmacists, combining both 5-point Likert scale statements and open-ended questions. Agreement scores for statements regarding perceived prescribing error rates, opinions on feedback, barriers to feedback, and preferences for future practice. Response rates were 49% (37/75) for junior doctors and 57% (57/100) for pharmacists. In general, doctors did not feel threatened by feedback on their prescribing errors. They felt that feedback currently provided was constructive but often irregular and insufficient. Most pharmacists provided feedback in various ways; however some did not or were inconsistent. They were willing to provide more feedback, but did not feel it was always effective or feasible due to barriers such as communication problems and time constraints. Both professional groups preferred individual feedback with additional regular generic feedback on common or serious errors. Feedback on prescribing errors was valued and acceptable to both professional groups. From the results, several suggested methods of providing feedback on prescribing errors emerged. Addressing barriers such as the identification of individual prescribers would facilitate feedback in practice. Research investigating whether or not feedback reduces the subsequent error rate is now needed.

  11. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
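
    The core operation described, quantizing DCT coefficients by a matrix, can be sketched as follows. The matrix used here simply grows with spatial frequency and is a placeholder; it is not the perceptually optimized matrix of the method.

      import numpy as np
      from scipy.fft import dctn, idctn

      def quantize_block(block, qmatrix):
          """Forward 8x8 DCT, divide by the quantization matrix, and round.

          Larger qmatrix entries discard more of the corresponding frequency,
          which is how a perceptual matrix trades bit rate against visible error.
          """
          coeffs = dctn(block, norm="ortho")
          return np.round(coeffs / qmatrix)

      def dequantize_block(quantized, qmatrix):
          return idctn(quantized * qmatrix, norm="ortho")

      # Placeholder quantization matrix that grows with spatial frequency.
      i, j = np.indices((8, 8))
      qmatrix = 8.0 + 4.0 * (i + j)

      rng = np.random.default_rng(0)
      block = rng.integers(0, 256, size=(8, 8)).astype(float) - 128.0
      reconstructed = dequantize_block(quantize_block(block, qmatrix), qmatrix)
      print("max absolute reconstruction error:", np.abs(block - reconstructed).max())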

  12. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-09

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors.

  13. On the robustness of bucket brigade quantum RAM

    NASA Astrophysics Data System (ADS)

    Arunachalam, Srinivasan; Gheorghiu, Vlad; Jochym-O'Connor, Tomas; Mosca, Michele; Varshinee Srinivasan, Priyaa

    2015-12-01

    We study the robustness of the bucket brigade quantum random access memory model introduced by Giovannetti et al (2008 Phys. Rev. Lett. 100 160501). Due to a result of Regev and Schiff (ICALP ’08 733), we show that for a class of error models the error rate per gate in the bucket brigade quantum memory has to be of order o(2^(-n/2)) (where N = 2^n is the size of the memory) whenever the memory is used as an oracle for the quantum searching problem. We conjecture that this is the case for any realistic error model that will be encountered in practice, and that for algorithms with super-polynomially many oracle queries the error rate must be super-polynomially small, which further motivates the need for quantum error correction. By contrast, for algorithms such as matrix inversion Harrow et al (2009 Phys. Rev. Lett. 103 150502) or quantum machine learning Rebentrost et al (2014 Phys. Rev. Lett. 113 130503) that only require a polynomial number of queries, the error rate only needs to be polynomially small and quantum error correction may not be required. We introduce a circuit model for the quantum bucket brigade architecture and argue that quantum error correction for the circuit causes the quantum bucket brigade architecture to lose its primary advantage of a small number of ‘active’ gates, since all components have to be actively error corrected.

  14. Error-Related Psychophysiology and Negative Affect

    ERIC Educational Resources Information Center

    Hajcak, G.; McDonald, N.; Simons, R.F.

    2004-01-01

    The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…

  15. Effects of Correlated Errors on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, Andres; Jacobs, C. S.

    2011-01-01

    As thermal errors are reduced instrumental and troposphere correlated errors will increasingly become more important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.

  16. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
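
    A hedged sketch of the two ingredients named above: an exact (Clopper-Pearson) interval for the CWER from the number of codeword errors observed, and a BER interval built from the first and second sample moments of bit errors per codeword, treating codewords rather than bits as the independent units. This illustrates the general approach with a normal approximation; it is not the paper's exact construction, and the counts in the usage line are illustrative.

      import numpy as np
      from scipy import stats

      def cwer_interval(errors, trials, conf=0.95):
          """Exact Clopper-Pearson confidence interval for the codeword error rate."""
          alpha = 1.0 - conf
          lo = 0.0 if errors == 0 else stats.beta.ppf(alpha / 2, errors, trials - errors + 1)
          hi = 1.0 if errors == trials else stats.beta.ppf(1 - alpha / 2, errors + 1, trials - errors)
          return lo, hi

      def ber_interval(bit_errors_per_codeword, bits_per_codeword, conf=0.95):
          """Approximate BER interval from the mean and variance of bit errors per
          codeword, so that the dependence of bit errors within a codeword is kept."""
          x = np.asarray(bit_errors_per_codeword, dtype=float)
          n = len(x)
          mean, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
          z = stats.norm.ppf(0.5 + conf / 2)
          return (max(0.0, (mean - z * se) / bits_per_codeword),
                  (mean + z * se) / bits_per_codeword)

      # Example: 3 codeword errors observed in 100,000 simulated codewords.
      print(cwer_interval(3, 100_000))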

  17. ADEPT, a dynamic next generation sequencing data error-detection program with trimming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Shihai; Lo, Chien-Chi; Li, Po-E

    Illumina is the most widely used next generation sequencing technology and produces millions of short reads that contain errors. These sequencing errors constitute a major problem in applications such as de novo genome assembly, metagenomics analysis and single nucleotide polymorphism discovery. In this study, we present ADEPT, a dynamic error detection method based on the quality scores of each nucleotide and its neighboring nucleotides, together with their positions within the read, which are compared to the position-specific quality score distribution of all bases within the sequencing run. This method greatly improves upon other available methods in terms of the true positive rate of error discovery without affecting the false positive rate, particularly within the middle of reads. We conclude that ADEPT is the only tool to date that dynamically assesses errors within reads by comparing position-specific and neighboring base quality scores with the distribution of quality scores for the dataset being analyzed. The result is a method that is less prone to position-dependent under-prediction, which is one of the most prominent issues in error prediction. The outcome is that ADEPT improves upon prior efforts in identifying true errors, primarily within the middle of reads, while reducing the false positive rate.

  18. ADEPT, a dynamic next generation sequencing data error-detection program with trimming

    DOE PAGES

    Feng, Shihai; Lo, Chien-Chi; Li, Po-E; ...

    2016-02-29

    Illumina is the most widely used next generation sequencing technology and produces millions of short reads that contain errors. These sequencing errors constitute a major problem in applications such as de novo genome assembly, metagenomics analysis and single nucleotide polymorphism discovery. In this study, we present ADEPT, a dynamic error detection method based on the quality scores of each nucleotide and its neighboring nucleotides, together with their positions within the read, which it compares to the position-specific quality score distribution of all bases within the sequencing run. This method greatly improves upon other available methods in terms of the true positive rate of error discovery without affecting the false positive rate, particularly within the middle of reads. We conclude that ADEPT is the only tool to date that dynamically assesses errors within reads by comparing position-specific and neighboring base quality scores with the distribution of quality scores for the dataset being analyzed. The result is a method that is less prone to position-dependent under-prediction, which is one of the most prominent issues in error prediction. The outcome is that ADEPT improves upon prior efforts in identifying true errors, primarily within the middle of reads, while reducing the false positive rate.

  19. Refractive errors in children and adolescents in Bucaramanga (Colombia).

    PubMed

    Galvis, Virgilio; Tello, Alejandro; Otero, Johanna; Serrano, Andrés A; Gómez, Luz María; Castellanos, Yuly

    2017-01-01

    The aim of this study was to establish the frequency of refractive errors in children and adolescents aged between 8 and 17 years old, living in the metropolitan area of Bucaramanga (Colombia). This study was a secondary analysis of two descriptive cross-sectional studies that applied sociodemographic surveys and assessed visual acuity and refraction. Ametropias were classified as myopic errors, hyperopic errors, and mixed astigmatism. Eyes were considered emmetropic if none of these classifications were made. The data were collated using free software and analyzed with STATA/IC 11.2. One thousand two hundred twenty-eight individuals were included in this study. Girls showed a higher rate of ametropia than boys. Hyperopic refractive errors were present in 23.1% of the subjects, and myopic errors in 11.2%. Only 0.2% of the eyes had high myopia (≤-6.00 D). Mixed astigmatism and anisometropia were uncommon, and myopia frequency increased with age. There were statistically significant steeper keratometric readings in myopic compared to hyperopic eyes. The overall frequency of refractive errors we found (36.7%) is moderate compared with global data. The rates and parameters statistically differed by sex and age groups. Our findings are useful for establishing refractive error rate benchmarks in low-middle-income countries and as a baseline for following their variation by sociodemographic factors.

  20. Sleep quality, but not quantity, is associated with self-perceived minor error rates among emergency department nurses.

    PubMed

    Weaver, Amy L; Stutzman, Sonja E; Supnet, Charlene; Olson, DaiWai M

    2016-03-01

    The emergency department (ED) is demanding and high risk. Sleep quantity has been hypothesized to impact patient care. This study investigated the hypothesis that fatigue and impaired mentation, due to sleep disturbance and shortened overall sleeping hours, would lead to increased nursing errors. This is a prospective observational study of 30 ED nurses using a self-administered survey and sleep architecture measured by wrist actigraphy as predictors of self-reported error rates. An actigraphy device was worn prior to working a 12-hour shift and nurses completed the Pittsburgh Sleep Quality Index (PSQI). Error rates were reported on a visual analog scale at the end of a 12-hour shift. The PSQI responses indicated that 73.3% of subjects had poor sleep quality. Lower sleep quality measured by actigraphy (hours asleep/hours in bed) was associated with higher self-perceived minor errors. Sleep quantity (total hours slept) was not associated with minor, moderate, or severe errors. Our study found that ED nurses' sleep quality, immediately prior to working a 12-hour shift, is more predictive of error than sleep quantity. These results present evidence that a "good night's sleep" prior to working a nursing shift in the ED is beneficial for reducing minor errors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Advancing the research agenda for diagnostic error reduction.

    PubMed

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  2. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    NASA Astrophysics Data System (ADS)

    Chau, H. F.

    2002-12-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 − 0.1√5 ≈ 27.6%, thereby making it the most error-resistant scheme known to date.
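
    The quoted tolerable bit error rate follows from a short calculation, reconstructed here from the 27.6% figure given in the abstract:

    \[
      p_{\max} \;=\; \tfrac{1}{2} - \tfrac{\sqrt{5}}{10}
               \;=\; 0.5 - 0.2236\ldots
               \;\approx\; 0.276
               \;=\; 27.6\%.
    \]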

  3. Comparison of a Virtual Older Driver Assessment with an On-Road Driving Test.

    PubMed

    Eramudugolla, Ranmalee; Price, Jasmine; Chopra, Sidhant; Li, Xiaolan; Anstey, Kaarin J

    2016-12-01

    To design a low-cost simulator-based driving assessment for older adults and to compare its validity with that of an on-road driving assessment and other measures of older driver risk. Cross-sectional observational study. Canberra, Australia. Older adult drivers (N = 47; aged 65-88, mean age 75.2). Error rate on a simulated drive with environment and scoring procedure matched to those of an on-road test. Other measures included participant age, simulator sickness severity, neuropsychological measures, and driver screening measures. Outcome variables included occupational therapist (OT)-rated on-road errors, on-road safety rating, and safety category. Participants' error rate on the simulated drive was significantly correlated with their OT-rated driving safety (correlation coefficient (r) = -0.398, P = .006), even after adjustment for age and simulator sickness (P = .009). The simulator error rate was a significant predictor of categorization as unsafe on the road (P = .02, sensitivity 69.2%, specificity 100%), with 13 (27%) drivers assessed as unsafe. Simulator error was also associated with other older driver safety screening measures such as useful field of view (r = 0.341, P = .02), DriveSafe (r = -0.455, P < .01), and visual motion sensitivity (r = 0.368, P = .01) but was not associated with memory (delayed word recall) or global cognition (Mini-Mental State Examination). Drivers made twice as many errors on the simulated assessment as during the on-road assessment (P < .001), with significant differences in the rate and type of errors between the two mediums. A low-cost simulator-based assessment is valid as a screening instrument for identifying at-risk older drivers but not as an alternative to on-road evaluation when accurate data on competence or pattern of impairment is required for licensing decisions and training programs. © 2016, Copyright the Authors Journal compilation © 2016, The American Geriatrics Society.

  4. Report of the 1988 2-D Intercomparison Workshop, chapter 3

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Brasseur, Guy; Soloman, Susan; Guthrie, Paul D.; Garcia, Rolando; Yung, Yuk L.; Gray, Lesley J.; Tung, K. K.; Ko, Malcolm K. W.; Isaken, Ivar

    1989-01-01

    Several factors contribute to the errors encountered. With the exception of the line-by-line model, all of the models employ simplifying assumptions that place fundamental limits on their accuracy and range of validity. For example, all 2-D modeling groups use the diffusivity factor approximation. This approximation produces little error in tropospheric H2O and CO2 cooling rates, but can produce significant errors in CO2 and O3 cooling rates at the stratopause. All models suffer from fundamental uncertainties in shapes and strengths of spectral lines. Thermal flux algorithms being used in 2-D tracer transport models produce cooling rates that differ by as much as 40 percent for the same input model atmosphere. Disagreements of this magnitude are important since the thermal cooling rates must be subtracted from the almost-equal solar heating rates to derive the net radiative heating rates and the 2-D model diabatic circulation. For much of the annual cycle, the net radiative heating rates are comparable in magnitude to the cooling rate differences described. Many of the models underestimate the cooling rates in the middle and lower stratosphere. The consequences of these errors for the net heating rates and the diabatic circulation will depend on their meridional structure, which was not tested here. Other models underestimate the cooling near 1 mbar. Such errors pose potential problems for future interactive ozone assessment studies, since they could produce artificially high temperatures and increased O3 destruction at these levels. These concerns suggest that a great deal of work is needed to improve the performance of thermal cooling rate algorithms used in the 2-D tracer transport models.

  5. Relations between Response Trajectories on the Continuous Performance Test and Teacher-Rated Problem Behaviors in Preschoolers

    PubMed Central

    Allan, Darcey M.; Lonigan, Christopher J.

    2014-01-01

    Although both the Continuous Performance Test (CPT) and behavior rating scales are used in both practice and research to assess inattentive and hyperactive/impulsive behaviors, the correlations between performance on the CPT and teachers' ratings are typically only small-to-moderate. This study examined trajectories of performance on a low target-frequency visual CPT in a sample of preschool children and how these trajectories were associated with teacher-ratings of problem behaviors (i.e., inattention, hyperactivity/impulsivity [H/I], and oppositional/defiant behavior). Participants included 399 preschool children (Mean age = 56 months; 49.4% female; 73.7% White/Caucasian). An ADHD-rating scale was completed by teachers, and the CPT was completed by the preschoolers. Results showed that children's performance across four temporal blocks on the CPT was not stable across the duration of the task, with error rates generally increasing from initial to later blocks. The predictive relations of teacher-rated problem behaviors to performance trajectories on the CPT were examined using growth curve models. Higher rates of teacher-reported inattention and H/I were uniquely associated with higher rates of initial omission errors and initial commission errors, respectively. Higher rates of teacher-reported overall problem behaviors were associated with increasing rates of omission but not commission errors during the CPT; however, the relation was not specific to one type of problem behavior. The results of this study indicate that the pattern of errors on the CPT in preschool samples is complex and may be determined by multiple behavioral factors. These findings have implications for the interpretation of CPT performance in young children. PMID:25419645

  6. Relations between response trajectories on the continuous performance test and teacher-rated problem behaviors in preschoolers.

    PubMed

    Allan, Darcey M; Lonigan, Christopher J

    2015-06-01

    Although both the continuous performance test (CPT) and behavior rating scales are used in both practice and research to assess inattentive and hyperactive/impulsive behaviors, the correlations between performance on the CPT and teachers' ratings are typically only small-to-moderate. This study examined trajectories of performance on a low target-frequency visual CPT in a sample of preschool children and how these trajectories were associated with teacher-ratings of problem behaviors (i.e., inattention, hyperactivity/impulsivity [H/I], and oppositional/defiant behavior). Participants included 399 preschool children (mean age = 56 months; 49.4% female; 73.7% White/Caucasian). An attention deficit/hyperactivity disorder (ADHD) rating scale was completed by teachers, and the CPT was completed by the preschoolers. Results showed that children's performance across 4 temporal blocks on the CPT was not stable across the duration of the task, with error rates generally increasing from initial to later blocks. The predictive relations of teacher-rated problem behaviors to performance trajectories on the CPT were examined using growth curve models. Higher rates of teacher-reported inattention and H/I were uniquely associated with higher rates of initial omission errors and initial commission errors, respectively. Higher rates of teacher-reported overall problem behaviors were associated with increasing rates of omission but not commission errors during the CPT; however, the relation was not specific to 1 type of problem behavior. The results of this study indicate that the pattern of errors on the CPT in preschool samples is complex and may be determined by multiple behavioral factors. These findings have implications for the interpretation of CPT performance in young children. (c) 2015 APA, all rights reserved).

  7. Comparison of medication safety systems in critical access hospitals: Combined analysis of two studies.

    PubMed

    Cochran, Gary L; Barrett, Ryan S; Horn, Susan D

    2016-08-01

    The role of pharmacist transcription, onsite pharmacist dispensing, use of automated dispensing cabinets (ADCs), nurse-nurse double checks, or barcode-assisted medication administration (BCMA) in reducing medication error rates in critical access hospitals (CAHs) was evaluated. Investigators used the practice-based evidence methodology to identify predictors of medication errors in 12 Nebraska CAHs. Detailed information about each medication administered was recorded through direct observation. Errors were identified by comparing the observed medication administered with the physician's order. Chi-square analysis and Fisher's exact test were used to measure differences between groups of medication-dispensing procedures. Nurses observed 6497 medications being administered to 1374 patients. The overall error rate was 1.2%. The transcription error rates for orders transcribed by an onsite pharmacist were slightly lower than for orders transcribed by a telepharmacy service (0.10% and 0.33%, respectively). Fewer dispensing errors occurred when medications were dispensed by an onsite pharmacist versus any other method of medication acquisition (0.10% versus 0.44%, p = 0.0085). The rates of dispensing errors for medications that were retrieved from a single-cell ADC (0.19%), a multicell ADC (0.45%), or a drug closet or general supply (0.77%) did not differ significantly. BCMA was associated with a higher proportion of dispensing and administration errors intercepted before reaching the patient (66.7%) compared with either manual double checks (10%) or no BCMA or double check (30.4%) of the medication before administration (p = 0.0167). Onsite pharmacist dispensing and BCMA were associated with fewer medication errors and are important components of a medication safety strategy in CAHs. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  8. Type I and Type II error concerns in fMRI research: re-balancing the scale

    PubMed Central

    Cunningham, William A.

    2009-01-01

    Statistical thresholding (i.e. P-values) in fMRI research has become increasingly conservative over the past decade in an attempt to diminish Type I errors (i.e. false alarms) to a level traditionally allowed in behavioral science research. In this article, we examine the unintended negative consequences of this single-minded devotion to Type I errors: increased Type II errors (i.e. missing true effects), a bias toward studying large rather than small effects, a bias toward observing sensory and motor processes rather than complex cognitive and affective processes and deficient meta-analyses. Power analyses indicate that the reductions in acceptable P-values over time are producing dramatic increases in the Type II error rate. Moreover, the push for a mapwide false discovery rate (FDR) of 0.05 is based on the assumption that this is the FDR in most behavioral research; however, this is an inaccurate assessment of the conventions in actual behavioral research. We report simulations demonstrating that combined intensity and cluster size thresholds such as P < 0.005 with a 10 voxel extent produce a desirable balance between Types I and II error rates. This joint threshold produces high but acceptable Type II error rates and produces a FDR that is comparable to the effective FDR in typical behavioral science articles (while a 20 voxel extent threshold produces an actual FDR of 0.05 with relatively common imaging parameters). We recommend a greater focus on replication and meta-analysis rather than emphasizing single studies as the unit of analysis for establishing scientific truth. From this perspective, Type I errors are self-erasing because they will not replicate, thus allowing for more lenient thresholding to avoid Type II errors. PMID:20035017
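
    The joint intensity and cluster-extent criterion recommended above can be sketched in a few lines: threshold the statistic map voxelwise, label contiguous suprathreshold voxels, and keep only clusters of at least the minimum extent. This is a minimal illustration with simulated data, not the authors' analysis code, and the function name is hypothetical.

    ```python
    import numpy as np
    from scipy import ndimage, stats

    def joint_threshold(t_map, df, p_voxel=0.005, min_cluster=10):
        """Keep voxels with t > t_crit(p_voxel) that belong to clusters of at
        least `min_cluster` contiguous suprathreshold voxels."""
        t_crit = stats.t.ppf(1.0 - p_voxel, df)       # one-sided voxelwise threshold
        supra = t_map > t_crit
        labels, _ = ndimage.label(supra)              # default nearest-neighbour connectivity
        sizes = np.bincount(labels.ravel())
        valid = np.flatnonzero(sizes >= min_cluster)
        valid = valid[valid != 0]                     # label 0 is the background
        return np.isin(labels, valid)

    rng = np.random.default_rng(2)
    t_map = rng.standard_normal((40, 48, 40))
    t_map[15:20, 20:25, 18:22] += 4.0                 # embedded simulated "activation"
    print(joint_threshold(t_map, df=25).sum(), "voxels survive the joint threshold")
    ```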

  9. Accuracy assessment of high-rate GPS measurements for seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Davis, J. L.; Ekström, G.

    2007-12-01

    Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05~mm, with maximum position errors of 0.1~mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5~mm, with maximum position errors of 10~mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10~m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2--3 as the GPS constellation changes throughout the day, with an average value of 3.5~mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400~km baseline of 9~mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.

  10. The incidence and severity of errors in pharmacist-written discharge medication orders.

    PubMed

    Onatade, Raliat; Sawieres, Sara; Veck, Alexandra; Smith, Lindsay; Gore, Shivani; Al-Azeib, Sumiah

    2017-08-01

    Background Errors in discharge prescriptions are problematic. When hospital pharmacists write discharge prescriptions improvements are seen in the quality and efficiency of discharge. There is limited information on the incidence of errors in pharmacists' medication orders. Objective To investigate the extent and clinical significance of errors in pharmacist-written discharge medication orders. Setting 1000-bed teaching hospital in London, UK. Method Pharmacists in this London hospital routinely write discharge medication orders as part of the clinical pharmacy service. Convenient days, based on researcher availability, between October 2013 and January 2014 were selected. Pre-registration pharmacists reviewed all discharge medication orders written by pharmacists on these days and identified discrepancies between the medication history, inpatient chart, patient records and discharge summary. A senior clinical pharmacist confirmed the presence of an error. Each error was assigned a potential clinical significance rating (based on the NCCMERP scale) by a physician and an independent senior clinical pharmacist, working separately. Main outcome measure Incidence of errors in pharmacist-written discharge medication orders. Results 509 prescriptions, written by 51 pharmacists, containing 4258 discharge medication orders were assessed (8.4 orders per prescription). Ten prescriptions (2%), contained a total of ten erroneous orders (order error rate-0.2%). The pharmacist considered that one error had the potential to cause temporary harm (0.02% of all orders). The physician did not rate any of the errors with the potential to cause harm. Conclusion The incidence of errors in pharmacists' discharge medication orders was low. The quality, safety and policy implications of pharmacists routinely writing discharge medication orders should be further explored.

  11. Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates

    PubMed Central

    Bartroff, Jay; Song, Jinlin

    2014-01-01

    This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948
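
    For readers unfamiliar with the fixed-sample procedure the authors generalize, the sketch below implements the classical Holm (1979) step-down test that controls the type I familywise error rate. It is only the inspiration for the paper's sequential procedure, which additionally handles streaming data and type II FWER; the function name is my own.

    ```python
    import numpy as np

    def holm_rejections(p_values, alpha=0.05):
        """Classical fixed-sample Holm step-down procedure controlling the
        familywise type I error rate at `alpha`.

        Sort p-values ascending; compare the k-th smallest (0-indexed k) with
        alpha / (m - k); stop at the first failure and retain all remaining
        hypotheses."""
        p = np.asarray(p_values, dtype=float)
        m = p.size
        reject = np.zeros(m, dtype=bool)
        for k, idx in enumerate(np.argsort(p)):
            if p[idx] <= alpha / (m - k):
                reject[idx] = True
            else:
                break                                 # all remaining hypotheses retained
        return reject

    print(holm_rejections([0.001, 0.020, 0.030, 0.400]))
    # -> [ True False False False] at alpha = 0.05
    ```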

  12. An experiment in software reliability: Additional analyses using data from automated replications

    NASA Technical Reports Server (NTRS)

    Dunham, Janet R.; Lauterbach, Linda A.

    1988-01-01

    A study undertaken to collect software error data of laboratory quality for use in the development of credible methods for predicting the reliability of software used in life-critical applications is summarized. The software error data reported were acquired through automated repetitive run testing of three independent implementations of a launch interceptor condition module of a radar tracking problem. The results are based on 100 test applications to accumulate a sufficient sample size for error rate estimation. The data collected are used to confirm the results of two Boeing studies reported in NASA-CR-165836, Software Reliability: Repetitive Run Experimentation and Modeling, and NASA-CR-172378, Software Reliability: Additional Investigations into Modeling With Replicated Experiments, respectively. That is, the results confirm the log-linear pattern of software error rates and reject the hypothesis of equal error rates per individual fault. This rejection casts doubt on the assumption that the program's failure rate is a constant multiple of the number of residual bugs, an assumption which underlies some of the current models of software reliability. The data also raise new questions concerning the phenomenon of interacting faults.

  13. [The effectiveness of error reporting promoting strategy on nurse's attitude, patient safety culture, intention to report and reporting rate].

    PubMed

    Kim, Myoungsoo

    2010-04-01

    The purpose of this study was to examine the impact of strategies to promote reporting of errors on nurses' attitude to reporting errors, organizational culture related to patient safety, intention to report and reporting rate in hospital nurses. A nonequivalent control group non-synchronized design was used for this study. The program was developed and then administered to the experimental group for 12 weeks. Data were analyzed using descriptive analysis, the χ²-test, t-test, and ANCOVA with the SPSS 12.0 program. After the intervention, the experimental group showed significantly higher scores for nurses' attitude to reporting errors (experimental: 20.73 vs control: 20.52, F=5.483, p=.021) and reporting rate (experimental: 3.40 vs control: 1.33, F=1998.083, p<.001). There was no significant difference in some categories for organizational culture and intention to report. The study findings indicate that strategies that promote reporting of errors play an important role in producing positive attitudes to reporting errors and improving behavior of reporting. Further advanced strategies for reporting errors that can lead to improved patient safety should be developed and applied in a broad range of hospitals.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
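
    The error metric r referred to above is conventionally extracted by fitting the standard single-exponential RB model to the survival probabilities. The sketch below shows that routine fit on simulated data; the abstract's point is precisely that the resulting r need not equal the average gate infidelity, so this is illustrative curve-fitting only, not the paper's new theory.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def rb_decay(m, A, p, B):
        """Zeroth-order randomized benchmarking model: survival = A * p**m + B."""
        return A * p**m + B

    def fit_rb_error_rate(lengths, survival, n_qubits=1):
        """Fit the RB decay and convert the decay parameter p into the usual
        RB error rate r = (d - 1)(1 - p) / d with d = 2**n_qubits."""
        (A, p, B), _ = curve_fit(rb_decay, lengths, survival,
                                 p0=(0.5, 0.99, 0.5), maxfev=10_000)
        d = 2 ** n_qubits
        return (d - 1) * (1.0 - p) / d

    lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])
    true_p = 0.995
    survival = (0.5 * true_p**lengths + 0.5
                + np.random.default_rng(3).normal(0, 0.002, lengths.size))
    print(fit_rb_error_rate(lengths, survival))   # ~ (1 - 0.995) / 2 = 0.0025
    ```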

  15. The Relationship among Correct and Error Oral Reading Rates and Comprehension.

    ERIC Educational Resources Information Center

    Roberts, Michael; Smith, Deborah Deutsch

    1980-01-01

    Eight learning disabled boys (10 to 12 years old) who were seriously deficient in both their oral reading and comprehension performances participated in the study which investigated, through an applied behavior analysis model, the interrelationships of three reading variables--correct oral reading rates, error oral reading rates, and percentage of…

  16. Transition year labeling error characterization study. [Kansas, Minnesota, Montana, North Dakota, South Dakota, and Oklahoma

    NASA Technical Reports Server (NTRS)

    Clinton, N. J. (Principal Investigator)

    1980-01-01

    Labeling errors made in the large area crop inventory experiment transition year estimates by Earth Observation Division image analysts are identified and quantified. The analysis was made from a subset of blind sites in six U.S. Great Plains states (Oklahoma, Kansas, Montana, Minnesota, North and South Dakota). The image interpretation basically was well done, resulting in a total omission error rate of 24 percent and a commission error rate of 4 percent. The largest amount of error was caused by factors beyond the control of the analysts who were following the interpretation procedures. The odd signatures, the largest error cause group, occurred mostly in areas of moisture abnormality. Multicrop labeling was tabulated showing the distribution of labeling for all crops.

  17. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.

  18. Quantitative evaluation of patient-specific quality assurance using online dosimetry system

    NASA Astrophysics Data System (ADS)

    Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk

    2018-01-01

    In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) evaluation of error-detection capability using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis utilizing three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error, Type 2: gantry angle-dependent MLC error, and Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. In the error-detection comparison of Delta4PT and MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion when the magnitude of the introduced error exceeded 2 mm (MLC errors) and 1.5° (gantry angle errors) for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV due to error magnitude showed good agreement between the TPS calculation and the MFX measurement within 1%. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
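
    For orientation, the gamma passing rate used above combines a dose-difference and a distance-to-agreement criterion. The sketch below is a deliberately simplified 1D global gamma calculation on synthetic profiles, without interpolation; it is not the MFX or Delta4PT implementation, and the function name and criteria are illustrative.

    ```python
    import numpy as np

    def gamma_passing_rate(x, dose_ref, dose_eval, dd=0.03, dta=3.0):
        """Global 1D gamma analysis: dose-difference criterion `dd` (fraction of
        the reference maximum) and distance-to-agreement `dta` (units of `x`)."""
        d_norm = dd * dose_ref.max()
        gamma = np.empty_like(dose_ref)
        for i, (xi, di) in enumerate(zip(x, dose_ref)):
            dose_term = (dose_eval - di) / d_norm
            dist_term = (x - xi) / dta
            gamma[i] = np.sqrt(dose_term**2 + dist_term**2).min()
        return np.mean(gamma <= 1.0)                  # fraction of points passing

    x = np.linspace(-50, 50, 201)                     # position in mm
    ref = np.exp(-(x / 30.0) ** 2)                    # reference dose profile
    meas = np.exp(-((x - 1.0) / 30.0) ** 2) * 1.01    # 1 mm shift, 1% scaling
    print(gamma_passing_rate(x, ref, meas))
    ```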

  19. Simulation of rare events in quantum error correction

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey; Vargo, Alexander

    2013-12-01

    We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances where logical errors are extremely unlikely we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability PL for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d≤20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay PL˜exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.

  20. Global distortion of GPS networks associated with satellite antenna model errors

    NASA Astrophysics Data System (ADS)

    Cardellach, E.; Elósegui, P.; Davis, J. L.

    2007-07-01

    Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by ~1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these PCO errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation as new satellites are launched, age, and are decommissioned, leads to the effects of PCO errors varying with time that introduce an apparent global-scale rate change. We demonstrate here that current estimates for PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm yr⁻¹ level, which will impact high-precision crustal deformation studies.

  1. Global Distortion of GPS Networks Associated with Satellite Antenna Model Errors

    NASA Technical Reports Server (NTRS)

    Cardellach, E.; Elosequi, P.; Davis, J. L.

    2007-01-01

    Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by approximately 1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these PCO errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation as new satellites are launched, age, and are decommissioned, leads to the effects of PCO errors varying with time that introduce an apparent global-scale rate change. We demonstrate here that current estimates for PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm/yr level, which will impact high-precision crustal deformation studies.

  2. Paediatric electronic infusion calculator: An intervention to eliminate infusion errors in paediatric critical care.

    PubMed

    Venkataraman, Aishwarya; Siu, Emily; Sadasivam, Kalaimaran

    2016-11-01

    Medication errors, including infusion prescription errors, are a major public health concern, especially in paediatric patients. There is some evidence that electronic or web-based calculators could minimise these errors. To evaluate the impact of an electronic infusion calculator on the frequency of infusion errors in the Paediatric Critical Care Unit of The Royal London Hospital, London, United Kingdom. We devised an electronic infusion calculator that calculates the appropriate concentration, rate and dose for the selected medication based on the recorded weight and age of the child and then prints into a valid prescription chart. The electronic infusion calculator was implemented from April 2015 in the Paediatric Critical Care Unit. A prospective study was conducted over the five months before and five months after implementation of the electronic infusion calculator. Data on the following variables were collected onto a proforma: medication dose, infusion rate, volume, concentration, diluent, legibility, and missing or incorrect patient details. A total of 132 handwritten prescriptions were reviewed prior to electronic infusion calculator implementation and 119 electronic infusion calculator prescriptions were reviewed after implementation. Handwritten prescriptions had a higher error rate (32.6%) than electronic infusion calculator prescriptions (<1%) (p < 0.001). Electronic infusion calculator prescriptions had no errors in dose, volume or rate calculation as compared to handwritten prescriptions, hence warranting very few pharmacy interventions. Use of the electronic infusion calculator for infusion prescription significantly reduced the total number of infusion prescribing errors in the Paediatric Critical Care Unit and has enabled more efficient use of medical and pharmacy time resources.
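
    The weight-based arithmetic such a calculator automates can be illustrated with the standard continuous-infusion formula below. This is a generic, hypothetical sketch of the calculation, not the hospital's actual tool, and the example numbers are illustrative only.

    ```python
    def infusion_rate_ml_per_hr(dose_mcg_kg_min, weight_kg, concentration_mcg_ml):
        """Illustrative weight-based continuous infusion calculation:
        rate (mL/h) = dose (µg/kg/min) × weight (kg) × 60 (min/h) / concentration (µg/mL)."""
        return dose_mcg_kg_min * weight_kg * 60.0 / concentration_mcg_ml

    # Example: 0.1 µg/kg/min for a 12 kg child, syringe concentration 60 µg/mL
    print(infusion_rate_ml_per_hr(0.1, 12, 60))   # -> 1.2 mL/h
    ```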

  3. Analysis of the “naming game” with learning errors in communications

    NASA Astrophysics Data System (ADS)

    Lou, Yang; Chen, Guanrong

    2015-07-01

    Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in naming game as well as in human language development from a network science perspective.
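
    The basic dynamics described above can be reproduced with a short simulation: pairs of agents interact, a successful conversation collapses both lexicons to the spoken word, a failure makes the listener learn it, and with some probability the listener instead mislearns a corrupted word. This is a minimal sketch on a complete graph, not the authors' NGLE model with its uniform error-rate distribution or specific network topologies.

    ```python
    import random

    def naming_game(n_agents=50, error_rate=0.02, max_steps=200_000, seed=4):
        """Minimal naming game with a per-interaction chance that the listener
        mislearns (stores a corrupted copy of) the spoken word."""
        rng = random.Random(seed)
        lexicons = [set() for _ in range(n_agents)]
        next_word = 0
        for step in range(max_steps):
            speaker, listener = rng.sample(range(n_agents), 2)
            if not lexicons[speaker]:                 # invent a word if lexicon is empty
                next_word += 1
                lexicons[speaker].add(next_word)
            word = rng.choice(tuple(lexicons[speaker]))
            if rng.random() < error_rate:             # learning error: corrupted word stored
                next_word += 1
                lexicons[listener].add(next_word)
            elif word in lexicons[listener]:          # success: both collapse to the word
                lexicons[speaker] = {word}
                lexicons[listener] = {word}
            else:                                     # failure: listener learns the word
                lexicons[listener].add(word)
            if all(lex == lexicons[0] and len(lex) == 1 for lex in lexicons):
                return step + 1                       # interactions until global consensus
        return None                                   # no consensus within max_steps

    print(naming_game(error_rate=0.0), naming_game(error_rate=0.02))
    ```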

  4. Analysis of the "naming game" with learning errors in communications.

    PubMed

    Lou, Yang; Chen, Guanrong

    2015-07-16

    Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in naming game as well as in human language development from a network science perspective.

  5. Optimizing the learning rate for adaptive estimation of neural encoding models

    PubMed Central

    2018-01-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains. PMID:29813069
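
    The steady-state error versus convergence time trade-off discussed above can be seen even in a scalar constant-gain estimator: larger learning rates converge quickly but retain more observation noise at steady state, smaller rates are smoother but slower. The sketch below is only this toy illustration, not the paper's analytical calibration algorithm for adaptive Bayesian filters.

    ```python
    import numpy as np

    def track_constant_parameter(true_value, noise_std, learning_rate, n_steps, seed=5):
        """Scalar constant-gain estimator x_hat += lr * (y - x_hat) on noisy observations."""
        rng = np.random.default_rng(seed)
        x_hat = 0.0
        estimates = np.empty(n_steps)
        for t in range(n_steps):
            y = true_value + rng.normal(0.0, noise_std)   # noisy observation
            x_hat += learning_rate * (y - x_hat)          # adaptive update
            estimates[t] = x_hat
        return estimates

    for lr in (0.5, 0.05, 0.005):
        est = track_constant_parameter(1.0, noise_std=0.5, learning_rate=lr, n_steps=5_000)
        settle = np.argmax(est > 0.9)                     # crude convergence-time proxy
        print(f"lr={lr}: steady-state std={est[2000:].std():.4f}, steps to reach 0.9={settle}")
    ```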

  6. Optimizing the learning rate for adaptive estimation of neural encoding models.

    PubMed

    Hsieh, Han-Lin; Shanechi, Maryam M

    2018-05-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.

  7. Estimating error rates for firearm evidence identifications in forensic science

    PubMed Central

    Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan

    2018-01-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
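
    A highly simplified flavor of the cell-based comparison can be sketched as follows: split two pre-registered topography images into a grid of cells and count the cells whose pairwise correlation clears a similarity threshold. This omits the congruency test on registered cell locations that defines a true CMC, so it is illustrative only; all names and thresholds are assumptions.

    ```python
    import numpy as np

    def cell_correlation_count(topo_a, topo_b, cells=(8, 8), min_corr=0.6):
        """Count grid cells of two pre-registered topography images whose
        Pearson correlation exceeds `min_corr` (congruency test omitted)."""
        rows, cols = cells
        count = 0
        for a_row, b_row in zip(np.array_split(topo_a, rows, axis=0),
                                np.array_split(topo_b, rows, axis=0)):
            for a_cell, b_cell in zip(np.array_split(a_row, cols, axis=1),
                                      np.array_split(b_row, cols, axis=1)):
                r = np.corrcoef(a_cell.ravel(), b_cell.ravel())[0, 1]
                if r >= min_corr:
                    count += 1
        return count                                  # compare against a CMC-style cutoff

    rng = np.random.default_rng(6)
    mark = rng.standard_normal((256, 256))
    same_source = mark + 0.3 * rng.standard_normal((256, 256))   # correlated pair
    different = rng.standard_normal((256, 256))                  # unrelated pair
    print(cell_correlation_count(mark, same_source), cell_correlation_count(mark, different))
    ```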

  8. Syndromic surveillance for health information system failures: a feasibility study

    PubMed Central

    Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico

    2013-01-01

    Objective To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. Methods A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. Results In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65–0.85. Conclusions Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures. PMID:23184193
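
    The control-chart idea behind the monitored indices can be illustrated with a minimal Shewhart-style rule: flag any day whose record count falls outside the baseline mean plus or minus a few standard deviations. This is a simplification of the study's time-series models, with hypothetical names and simulated counts.

    ```python
    import numpy as np

    def shewhart_alerts(counts, baseline_days=30, n_sigma=3.0):
        """Flag days whose record count falls outside baseline mean ± n_sigma·std.

        A sudden drop in records created (or results filed) can indicate an HIT
        failure such as an interface outage."""
        counts = np.asarray(counts, dtype=float)
        mu = counts[:baseline_days].mean()
        sigma = counts[:baseline_days].std(ddof=1)
        lower, upper = mu - n_sigma * sigma, mu + n_sigma * sigma
        return np.flatnonzero((counts < lower) | (counts > upper))

    rng = np.random.default_rng(7)
    daily = rng.poisson(lam=1200, size=60).astype(float)   # simulated daily record counts
    daily[45] *= 0.5                                       # simulated 50% data-loss day
    print(shewhart_alerts(daily))                          # -> likely includes day 45
    ```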

  9. Estimating error rates for firearm evidence identifications in forensic science.

    PubMed

    Song, John; Vorburger, Theodore V; Chu, Wei; Yen, James; Soons, Johannes A; Ott, Daniel B; Zhang, Nien Fan

    2018-03-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. Published by Elsevier B.V.

  10. Data quality in a DRG-based information system.

    PubMed

    Colin, C; Ecochard, R; Delahaye, F; Landrivon, G; Messy, P; Morgon, E; Matillon, Y

    1994-09-01

    The aim of this study initiated in May 1990 was to evaluate the quality of the medical data collected from the main hospital of the "Hospices Civils de Lyon", Edouard Herriot Hospital. We studied a random sample of 593 discharge abstracts from 12 wards of the hospital. Quality control was performed by checking multi-hospitalized patients' personal data, checking that each discharge abstract was exhaustive, examining the quality of abstracting, studying diagnoses and medical procedures coding, and checking data entry. Assessment of personal data showed a 4.4% error rate. It was mainly accounted for by spelling mistakes in surnames and first names, and mistakes in dates of birth. The quality of a discharge abstract was estimated according to the two purposes of the medical information system: description of hospital morbidity per patient and Diagnosis Related Group's case mix. Error rates in discharge abstracts were expressed in two ways: an overall rate for errors of concordance between Discharge Abstracts and Medical Records, and a specific rate for errors modifying classification in Diagnosis Related Groups (DRG). For abstracting medical information, these error rates were 11.5% (SE +/- 2.2) and 7.5% (SE +/- 1.9) respectively. For coding diagnoses and procedures, they were 11.4% (SE +/- 1.5) and 1.3% (SE +/- 0.5) respectively. For data entry on the computerized data base, the error rate was 2% (SE +/- 0.5) and 0.2% (SE +/- 0.05). Quality control must be performed regularly because it demonstrates the degree of participation from health care teams and the coherence of the database.(ABSTRACT TRUNCATED AT 250 WORDS)

  11. Non-health care facility anticonvulsant medication errors in the United States.

    PubMed

    DeDonato, Emily A; Spiller, Henry A; Casavant, Marcel J; Chounthirath, Thitphalak; Hodges, Nichole L; Smith, Gary A

    2018-06-01

    This study provides an epidemiological description of non-health care facility medication errors involving anticonvulsant drugs. A retrospective analysis of National Poison Data System data was conducted on non-health care facility medication errors involving anticonvulsant drugs reported to US Poison Control Centers from 2000 through 2012. During the study period, 108,446 non-health care facility medication errors involving anticonvulsant pharmaceuticals were reported to US Poison Control Centers, averaging 8342 exposures annually. The annual frequency and rate of errors increased significantly over the study period, by 96.6 and 76.7%, respectively. The rate of exposures resulting in health care facility use increased by 83.3% and the rate of exposures resulting in serious medical outcomes increased by 62.3%. In 2012, newer anticonvulsants, including felbamate, gabapentin, lamotrigine, levetiracetam, other anticonvulsants (excluding barbiturates), other types of gamma aminobutyric acid, oxcarbazepine, topiramate, and zonisamide, accounted for 67.1% of all exposures. The rate of non-health care facility anticonvulsant medication errors reported to Poison Control Centers increased during 2000-2012, resulting in more frequent health care facility use and serious medical outcomes. Newer anticonvulsants, although often considered safer and more easily tolerated, were responsible for much of this trend and should still be administered with caution.

  12. Predicting and interpreting identification errors in military vehicle training using multidimensional scaling.

    PubMed

    Bohil, Corey J; Higgins, Nicholas A; Keebler, Joseph R

    2014-01-01

    We compared methods for predicting and understanding the source of confusion errors during military vehicle identification training. Participants completed training to identify main battle tanks. They also completed card-sorting and similarity-rating tasks to express their mental representation of resemblance across the set of training items. We expected participants to selectively attend to a subset of vehicle features during these tasks, and we hypothesised that we could predict identification confusion errors based on the outcomes of the card-sort and similarity-rating tasks. Based on card-sorting results, we were able to predict about 45% of observed identification confusions. Based on multidimensional scaling of the similarity-rating data, we could predict more than 80% of identification confusions. These methods also enabled us to infer the dimensions receiving significant attention from each participant. This understanding of mental representation may be crucial in creating personalised training that directs attention to features that are critical for accurate identification. Participants completed military vehicle identification training and testing, along with card-sorting and similarity-rating tasks. The data enabled us to predict up to 84% of identification confusion errors and to understand the mental representation underlying these errors. These methods have potential to improve training and reduce identification errors leading to fratricide.
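
    A minimal sketch of the general approach, assuming the scikit-learn MDS implementation and a made-up similarity matrix: embed the rated similarities in a low-dimensional space, then predict that each vehicle is most likely to be confused with its nearest neighbour in that space.

        # Sketch: MDS embedding of similarity ratings and nearest-neighbour
        # confusion prediction. Vehicle names and ratings are illustrative.
        import numpy as np
        from sklearn.manifold import MDS

        vehicles = ["M1A1", "T-72", "Leopard 2", "Challenger 2"]
        # Symmetric similarity ratings on a 0-10 scale (illustrative).
        sim = np.array([[10, 3, 7, 6],
                        [ 3, 10, 4, 2],
                        [ 7, 4, 10, 8],
                        [ 6, 2, 8, 10]], dtype=float)
        dissim = sim.max() - sim            # convert similarity to dissimilarity
        np.fill_diagonal(dissim, 0.0)

        coords = MDS(n_components=2, dissimilarity="precomputed",
                     random_state=0).fit_transform(dissim)

        for i, name in enumerate(vehicles):
            d = np.linalg.norm(coords - coords[i], axis=1)
            d[i] = np.inf                    # ignore self-distance
            print(f"{name}: most likely confused with {vehicles[int(np.argmin(d))]}")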

  13. Development of a press and drag method for hyperlink selection on smartphones.

    PubMed

    Chang, Joonho; Jung, Kihyo

    2017-11-01

    The present study developed a novel touch method for hyperlink selection on smartphones, consisting of two sequential finger interactions: a press followed by a drag. The novel method requires a user to press a target hyperlink; if a touch error occurs, he/she can immediately correct it by dragging the finger without releasing it. The method was compared with two existing methods in terms of completion time, error rate, and subjective rating. Forty college students participated in the experiments with different hyperlink sizes (4-pt, 6-pt, 8-pt, and 10-pt) on a touch-screen device. When hyperlink size was small (4-pt and 6-pt), the novel method (time: 826 msec; error: 0.6%) demonstrated better completion time and error rate than the current method (time: 1194 msec; error: 22%). In addition, the novel method (1.15, "slightly satisfied", on a 7-point bipolar scale) received significantly higher satisfaction scores than the two existing methods (0.06, "neutral"). Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Frequency and Severity of Parenteral Nutrition Medication Errors at a Large Children's Hospital After Implementation of Electronic Ordering and Compounding.

    PubMed

    MacKay, Mark; Anderson, Collin; Boehme, Sabrina; Cash, Jared; Zobell, Jeffery

    2016-04-01

    The Institute for Safe Medication Practices has stated that parenteral nutrition (PN) is considered a high-risk medication and has the potential of causing harm. Three organizations--American Society for Parenteral and Enteral Nutrition (A.S.P.E.N.), American Society of Health-System Pharmacists, and National Advisory Group--have published guidelines for ordering, transcribing, compounding and administering PN. These national organizations have published data on compliance with the guidelines and the risk of errors. The purpose of this article is to compare total compliance with ordering, transcription, compounding, and administration guidelines, and the associated error rate, at a large pediatric institution. A computerized prescriber order entry (CPOE) program was developed that incorporates dosing with soft and hard stop recommendations and simultaneously eliminates the need for paper transcription. A CPOE team prioritized and identified issues, then developed solutions and integrated innovative CPOE and automated compounding device (ACD) technologies and practice changes to minimize opportunities for medication errors in PN prescription, transcription, preparation, and administration. Thirty developmental processes were identified and integrated in the CPOE program, resulting in practices that were compliant with A.S.P.E.N. safety consensus recommendations. Data from 7 years of development and implementation were analyzed and compared with published literature on error rates, harm rates, and cost reductions to determine whether our process showed lower error rates than national outcomes. The CPOE program developed was in total compliance with the A.S.P.E.N. guidelines for PN. The frequency of PN medication errors at our hospital over the 7 years was 230 errors per 84,503 PN prescriptions, or 0.27%, compared with national data in which 74 of 4730 prescriptions (1.6%) over 1.5 years were associated with a medication error. Errors were categorized by steps in the PN process: prescribing, transcription, preparation, and administration. There were no transcription errors, and most (95%) errors occurred during administration. We conclude that the meaningful cost reduction and the lower error rate (2.7/1000 PN) than reported in the literature (15.6/1000 PN) can be ascribed to the development and implementation of practices that conform to national PN guidelines and recommendations. Electronic ordering and compounding programs eliminated all transcription and related opportunities for errors. © 2015 American Society for Parenteral and Enteral Nutrition.

  15. Estimation of pulse rate from ambulatory PPG using ensemble empirical mode decomposition and adaptive thresholding.

    PubMed

    Pittara, Melpo; Theocharides, Theocharis; Orphanidou, Christina

    2017-07-01

    A new method for deriving pulse rate from PPG obtained from ambulatory patients is presented. The method employs Ensemble Empirical Mode Decomposition to identify the pulsatile component from noise-corrupted PPG, and then uses a set of physiologically-relevant rules followed by adaptive thresholding, in order to estimate the pulse rate in the presence of noise. The method was optimized and validated using 63 hours of data obtained from ambulatory hospital patients. The F1 score obtained with respect to expertly annotated data was 0.857 and the mean absolute errors of estimated pulse rates with respect to heart rates obtained from ECG collected in parallel were 1.72 bpm for "good" quality PPG and 4.49 bpm for "bad" quality PPG. Both errors are within the clinically acceptable margin-of-error for pulse rate/heart rate measurements, showing the promise of the proposed approach for inclusion in next generation wearable sensors.
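
    A rough sketch of the second half of such a pipeline. The EEMD step is replaced here by a simple band-pass filter for brevity, and the adaptive threshold (half of a rolling amplitude estimate), sampling rate, and synthetic signal are illustrative assumptions, not the paper's tuned procedure.

        # Sketch: pulse rate from a PPG segment via peak detection with an
        # adaptive threshold; the EEMD decomposition is approximated by a
        # band-pass filter, and all parameters are illustrative.
        import numpy as np
        from scipy.signal import butter, filtfilt, find_peaks

        fs = 125.0                                   # sampling rate in Hz (assumed)
        t = np.arange(0, 30, 1 / fs)
        ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)  # synthetic

        b, a = butter(3, [0.5 / (fs / 2), 5.0 / (fs / 2)], btype="band")
        pulsatile = filtfilt(b, a, ppg)

        # Adaptive threshold: peaks must exceed half the local amplitude envelope.
        height = 0.5 * np.convolve(np.abs(pulsatile),
                                   np.ones(int(2 * fs)) / (2 * fs), mode="same")
        peaks, _ = find_peaks(pulsatile, height=height, distance=int(0.3 * fs))

        pulse_rate = 60.0 * (len(peaks) - 1) / ((peaks[-1] - peaks[0]) / fs)
        print(f"estimated pulse rate: {pulse_rate:.1f} bpm")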

  16. High speed and adaptable error correction for megabit/s rate quantum key distribution.

    PubMed

    Dixon, A R; Sato, H

    2014-12-02

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.
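
    For context, the sketch below shows the textbook way reconciliation efficiency enters an asymptotic BB84-style secure key fraction; it is not the paper's bi-directional LDPC implementation, and the efficiency and QBER values are illustrative.

        # Sketch: asymptotic secure key fraction r = 1 - f*h(e) - h(e), where f is
        # the error-correction (reconciliation) efficiency and e the QBER.
        import math

        def h(p):
            """Binary entropy in bits."""
            if p in (0.0, 1.0):
                return 0.0
            return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

        def secure_fraction(qber, efficiency):
            """efficiency f >= 1: leaked bits per sifted bit = f * h(qber)."""
            return max(0.0, 1.0 - efficiency * h(qber) - h(qber))

        for e in (0.01, 0.03, 0.05):
            print(f"QBER {e:.0%}: ideal {secure_fraction(e, 1.00):.3f}, "
                  f"f=1.10 {secure_fraction(e, 1.10):.3f}")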

  17. High speed and adaptable error correction for megabit/s rate quantum key distribution

    PubMed Central

    Dixon, A. R.; Sato, H.

    2014-01-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90–94% of the ideal secure key rate over all fibre distances from 0–80 km. PMID:25450416

  18. Type-II generalized family-wise error rate formulas with application to sample size determination.

    PubMed

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
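
    A minimal Monte Carlo sketch of the two quantities involved, the q-generalized family-wise error rate and the r-power, for a single-step Bonferroni-style procedure with independent endpoints; this only illustrates the definitions and is not the paper's closed-form power formulas or the rPowerSampleSize package.

        # Sketch: Monte Carlo q-gFWER (under the global null) and r-power (under an
        # alternative) for a single-step Bonferroni procedure; values illustrative.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        m, q, r = 5, 2, 3                 # endpoints; q for gFWER, r for r-power
        alpha, n, effect = 0.05, 100, 0.3
        crit = alpha / m                  # per-endpoint level (single-step Bonferroni)
        reps = 20000

        def reject_counts(mean_shift):
            # z-statistics for m independent endpoints, n observations per endpoint
            z = rng.normal(mean_shift * np.sqrt(n), 1.0, size=(reps, m))
            return (norm.sf(z) < crit).sum(axis=1)

        q_gfwer = np.mean(reject_counts(0.0) >= q)      # under the global null
        r_power = np.mean(reject_counts(effect) >= r)   # under the alternative
        print(f"estimated q-gFWER ~ {q_gfwer:.4f}, r-power ~ {r_power:.3f}")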

  19. Final report on the development of the geographic position locator (GPL). Volume 12. Data reduction A3FIX: subroutine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niven, W.A.

    The long-term position accuracy of an inertial navigation system depends primarily on the ability of the gyroscopes to maintain a near-perfect reference orientation. Small imperfections in the gyroscopes cause them to drift slowly away from their initial orientation, thereby producing errors in the system's calculations of position. The A3FIX is a computer program subroutine developed to estimate inertial navigation system gyro drift rates with the navigator stopped or moving slowly. It processes the navigation system's position error data to arrive at estimates of the north-south and vertical gyro drift rates. It also computes changes in the east-west gyro drift rate if the navigator is stopped and if data on the system's azimuth error changes are also available. The report describes the subroutine and its capabilities, and gives examples of gyro drift rate estimates that were computed during the testing of a high-quality inertial system under the PASSPORT program at the Lawrence Livermore Laboratory. The appendices provide mathematical derivations of the estimation equations that are used in the subroutine, a discussion of the estimation errors, and a program listing and flow diagram. The appendices also contain a derivation of closed-form solutions to the navigation equations to clarify the effects that motion and time-varying drift rates induce in the phase-plane relationships between the Schuler-filtered errors in latitude and azimuth and between the Schuler-filtered errors in latitude and longitude. (auth)

  20. Differential Effects of Incentives on Response Error, Response Rate, and Reliability of a Mailed Questionnaire.

    ERIC Educational Resources Information Center

    Brown, Darine F.; Hartman, Bruce

    1980-01-01

    Investigated issues associated with stimulating increased return rates to a mail questionnaire among school counselors. Results show that as the number of incentives received increased, the return rates increased in a linear fashion. The incentives did not introduce response error or affect the reliability of the Counselor Function Inventory.…

  1. The Sustained Influence of an Error on Future Decision-Making.

    PubMed

    Schiffler, Björn C; Bengtsson, Sara L; Lundqvist, Daniel

    2017-01-01

    Post-error slowing (PES) is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants) of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants' response threshold was sustained over several trials post-error. Contrarily, an initial decrease in evidence accumulation rate, followed by an increase on the subsequent trials, indicates a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials which further gives credence to these parameters' role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.
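
    A minimal Euler-scheme sketch of the drift diffusion mechanism discussed above, showing how a higher post-error decision threshold trades response speed for accuracy; drift rate, thresholds, and noise level are illustrative and not the parameters fitted to this dataset.

        # Sketch: drift-diffusion trials simulated with an Euler scheme; a raised
        # threshold slows responses but raises accuracy. Parameters illustrative.
        import numpy as np

        rng = np.random.default_rng(1)

        def simulate_trial(drift, threshold, dt=0.001, noise=1.0, max_t=3.0):
            x, t = 0.0, 0.0
            while abs(x) < threshold and t < max_t:
                x += drift * dt + noise * np.sqrt(dt) * rng.normal()
                t += dt
            return t, x >= threshold          # reaction time, correct response?

        def summarize(threshold, n_trials=500, drift=0.8):
            results = [simulate_trial(drift, threshold) for _ in range(n_trials)]
            rts, correct = zip(*results)
            return np.mean(rts), np.mean(correct)

        for label, a in [("pre-error threshold", 1.0), ("post-error threshold", 1.3)]:
            rt, acc = summarize(a)
            print(f"{label} a={a}: mean RT {rt:.3f} s, accuracy {acc:.2%}")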

  2. Online patient safety education programme for junior doctors: is it worthwhile?

    PubMed

    McCarthy, S E; O'Boyle, C A; O'Shaughnessy, A; Walsh, G

    2016-02-01

    Increasing demand exists for blended approaches to the development of professionalism. Trainees of the Royal College of Physicians of Ireland participated in an online patient safety programme. The study aims were: (1) to determine whether the programme improved junior doctors' knowledge, attitudes and skills relating to error reporting, open communication and care for the second victim, and (2) to establish whether the methodology facilitated participants' learning. A total of 208 junior doctors who completed the programme also completed a pre-programme online questionnaire. Measures were "patient safety knowledge and attitudes", "medical safety climate" and "experience of learning". Sixty-two completed the post-questionnaire, representing a 30% matched response rate. Participating in the programme resulted in immediate (p < 0.01) improvement in skills such as knowing when and how to complete incident forms and disclosing errors to patients, in self-rated knowledge (p < 0.01) and in attitudes towards error reporting (p < 0.01). Sixty-three per cent disagreed that doctors routinely report medical errors, and 42% disagreed that doctors routinely share information about medical errors and what caused them. Participants rated interactive features as the most positive elements of the programme. An online training programme on medical error improved self-rated knowledge, attitudes and skills in junior doctors and was deemed an effective learning tool. Perceptions of work issues, such as a poor culture of error reporting among doctors, may prevent improved attitudes from being realised in practice. Online patient safety education has a role in practice-based initiatives aimed at developing professionalism and improving safety.

  3. Decreasing patient identification band errors by standardizing processes.

    PubMed

    Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie

    2013-04-01

    Patient identification (ID) bands are an essential component in patient ID. Quality improvement methodology has been applied as a model to reduce ID band errors, although previous studies have not addressed standardization of ID bands. Our specific aim was to decrease ID band errors by 50% in a 12-month period. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012, with continued audits to June 2012 to confirm the new process was in control. After analysis, the major improvement strategy implemented was standardization of the styles of ID bands and labels. Additional interventions included educational initiatives regarding the new ID band processes and dissemination of institutional and nursing unit data. A total of 4556 ID bands were audited, with a pre-improvement average ID band error rate of 9.2%. Significant variation in the ID band process was observed, including the styles of ID bands. Interventions were focused on standardization of the ID bands and labels. The ID band error rate improved to 5.2% in 9 months (95% confidence interval: 2.5-5.5; P < .001) and was maintained for 8 months. Standardization of ID bands and labels, in conjunction with other interventions, resulted in a statistically significant decrease in ID band error rates. This decrease in ID band error rates was maintained over the subsequent 8 months.
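
    A small sketch of the kind of comparison behind such a reported improvement: a two-proportion z-test on pre- and post-intervention error rates. The counts are illustrative reconstructions from the reported percentages (9.2% and 5.2%), not the study's raw audit data, and the statsmodels function is assumed to be available.

        # Sketch: two-proportion z-test comparing pre- and post-intervention
        # ID band error rates; counts are illustrative reconstructions.
        from statsmodels.stats.proportion import proportions_ztest

        errors = [207, 104]          # errors observed pre / post (illustrative)
        audits = [2250, 2000]        # ID bands audited pre / post (illustrative)

        stat, p_value = proportions_ztest(errors, audits)
        rates = [e / n for e, n in zip(errors, audits)]
        print(f"pre {rates[0]:.1%} vs post {rates[1]:.1%}, "
              f"z = {stat:.2f}, p = {p_value:.4f}")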

  4. Analysis of limiting information characteristics of quantum-cryptography protocols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sych, D V; Grishanin, Boris A; Zadkov, Viktor N

    2005-01-31

    The problem of increasing the critical error rate of quantum-cryptography protocols by varying a set of letters in a quantum alphabet for space of a fixed dimensionality is studied. Quantum alphabets forming regular polyhedra on the Bloch sphere and the continual alphabet equally including all the quantum states are considered. It is shown that, in the absence of basis reconciliation, a protocol with the tetrahedral alphabet has the highest critical error rate among the protocols considered, while after the basis reconciliation, a protocol with the continual alphabet possesses the highest critical error rate. (quantum optics and quantum computation)

  5. History, Epidemic Evolution, and Model Burn-In for a Network of Annual Invasion: Soybean Rust.

    PubMed

    Sanatkar, M R; Scoglio, C; Natarajan, B; Isard, S A; Garrett, K A

    2015-07-01

    Ecological history may be an important driver of epidemics and disease emergence. We evaluated the role of history and two related concepts, the evolution of epidemics and the burn-in period required for fitting a model to epidemic observations, for the U.S. soybean rust epidemic (caused by Phakopsora pachyrhizi). This disease allows evaluation of replicate epidemics because the pathogen reinvades the United States each year. We used a new maximum likelihood estimation approach for fitting the network model based on observed U.S. epidemics. We evaluated the model burn-in period by comparing model fit based on each combination of other years of observation. When the miss error rates were weighted by 0.9 and false alarm error rates by 0.1, the mean error rate did decline, for most years, as more years were used to construct models. Models based on observations in years closer in time to the season being estimated gave lower miss error rates for later epidemic years. The weighted mean error rate was lower in backcasting than in forecasting, reflecting how the epidemic had evolved. Ongoing epidemic evolution, and potential model failure, can occur because of changes in climate, host resistance and spatial patterns, or pathogen evolution.
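
    The weighted error rate referred to above combines miss and false-alarm rates with weights of 0.9 and 0.1. A small sketch of that bookkeeping with made-up presence/absence predictions:

        # Sketch: weighted error rate with misses weighted 0.9 and false alarms 0.1.
        # The predicted/observed booleans are illustrative, not the soybean rust data.
        import numpy as np

        predicted = np.array([1, 0, 1, 0, 0, 1, 0, 1], dtype=bool)
        observed  = np.array([1, 1, 1, 0, 0, 0, 0, 1], dtype=bool)

        miss_rate = np.mean(~predicted[observed])          # observed but not predicted
        false_alarm_rate = np.mean(predicted[~observed])   # predicted but not observed
        weighted = 0.9 * miss_rate + 0.1 * false_alarm_rate
        print(f"miss {miss_rate:.2f}, false alarm {false_alarm_rate:.2f}, "
              f"weighted error rate {weighted:.2f}")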

  6. Making electronic prescribing alerts more effective: scenario-based experimental study in junior doctors

    PubMed Central

    Shah, Priya; Wyatt, Jeremy C; Makubate, Boikanyo; Cross, Frank W

    2011-01-01

    Objective Expert authorities recommend clinical decision support systems to reduce prescribing error rates, yet large numbers of insignificant on-screen alerts presented in modal dialog boxes persistently interrupt clinicians, limiting the effectiveness of these systems. This study compared the impact of modal and non-modal electronic (e-) prescribing alerts on prescribing error rates, to help inform the design of clinical decision support systems. Design A randomized study of 24 junior doctors each performing 30 simulated prescribing tasks in random order with a prototype e-prescribing system. Using a within-participant design, doctors were randomized to be shown one of three types of e-prescribing alert (modal, non-modal, no alert) during each prescribing task. Measurements The main outcome measure was prescribing error rate. Structured interviews were performed to elicit participants' preferences for the prescribing alerts and their views on clinical decision support systems. Results Participants exposed to modal alerts were 11.6 times less likely to make a prescribing error than those not shown an alert (OR 11.56, 95% CI 6.00 to 22.26). Those shown a non-modal alert were 3.2 times less likely to make a prescribing error (OR 3.18, 95% CI 1.91 to 5.30) than those not shown an alert. The error rate with non-modal alerts was 3.6 times higher than with modal alerts (95% CI 1.88 to 7.04). Conclusions Both kinds of e-prescribing alerts significantly reduced prescribing error rates, but modal alerts were over three times more effective than non-modal alerts. This study provides new evidence about the relative effects of modal and non-modal alerts on prescribing outcomes. PMID:21836158
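
    For reference, the sketch below shows how an odds ratio and Wald 95% confidence interval of the kind quoted above are computed from a 2x2 table; the counts are illustrative, not the study's data.

        # Sketch: odds ratio and Wald 95% CI from a 2x2 table of prescribing
        # outcomes with and without an alert; counts are illustrative only.
        import math

        # rows: no alert / modal alert; columns: error, no error (illustrative)
        a, b = 60, 180     # no alert:    errors, no errors
        c, d = 8, 232      # modal alert: errors, no errors

        or_ = (a * d) / (b * c)
        se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo = math.exp(math.log(or_) - 1.96 * se_log_or)
        hi = math.exp(math.log(or_) + 1.96 * se_log_or)
        print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")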

  7. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.

    2004-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating/drying profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and non-convective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud resolving model simulations, and from the Bayesian formulation itself. Synthetic rain rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in instantaneous rain rate estimates at 0.5 deg resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. These errors represent about 70-90% of the mean random deviation between collocated passive microwave and spaceborne radar rain rate estimates. The cumulative algorithm error in TMI estimates at monthly, 2.5 deg resolution is relatively small (less than 6% at 5 mm/day) compared to the random error due to infrequent satellite temporal sampling (8-35% at the same rain rate).
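
    A schematic sketch of the Bayesian compositing step common to database retrieval algorithms of this kind: each simulated profile is weighted by the Gaussian likelihood of the observed radiances, and the estimate is the weighted mean. The database, observation, and channel error below are synthetic stand-ins, not the algorithm's cloud-radiative model database.

        # Sketch: Bayesian compositing of a retrieval database; the rain rate
        # estimate is a likelihood-weighted mean over simulated profiles.
        import numpy as np

        rng = np.random.default_rng(2)
        n_db, n_channels = 5000, 9

        tb_db = rng.uniform(150.0, 290.0, size=(n_db, n_channels))  # simulated Tbs (K)
        rain_db = rng.gamma(shape=0.5, scale=4.0, size=n_db)        # simulated rain rates
        obs_tb = tb_db[123] + rng.normal(0.0, 2.0, n_channels)      # synthetic observation

        sigma = 2.0                                                 # channel error (K), assumed
        chi2 = np.sum((tb_db - obs_tb) ** 2, axis=1) / sigma ** 2
        weights = np.exp(-0.5 * (chi2 - chi2.min()))                # shift for stability

        rain_estimate = np.sum(weights * rain_db) / np.sum(weights)
        print(f"retrieved rain rate: {rain_estimate:.2f} mm/h")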

  8. Impact of geophysical model error for recovering temporal gravity field model

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Luo, Zhicai; Wu, Yihao; Li, Qiong; Xu, Chuang

    2016-07-01

    The impact of geophysical model error on recovered temporal gravity field models with both real and simulated GRACE observations is assessed in this paper. With real GRACE observations, we build four temporal gravity field models, i.e., HUST08a, HUST11a, HUST04 and HUST05. HUST08a and HUST11a are derived from different ocean tide models (EOT08a and EOT11a), while HUST04 and HUST05 are derived from different non-tidal models (AOD RL04 and AOD RL05). The statistical result shows that the discrepancies of the annual mass variability amplitudes in six river basins between the HUST08a and HUST11a models, and between the HUST04 and HUST05 models, are all smaller than 1 cm, which demonstrates that geophysical model error only slightly affects the current GRACE solutions. The impact of geophysical model error on future missions with more accurate satellite ranging is also assessed by simulation. The simulation results indicate that for the current mission, with a range rate accuracy of 2.5 × 10⁻⁷ m/s, observation error is the main reason for stripe error. However, when the range rate accuracy improves to 5.0 × 10⁻⁸ m/s in the future mission, geophysical model error will be the main source of stripe error, which will limit the accuracy and spatial resolution of the temporal gravity model. Therefore, observation error should be the primary error source taken into account at the current range rate accuracy level, while more attention should be paid to improving the accuracy of background geophysical models for the future mission.

  9. Estimation of attitude sensor timetag biases

    NASA Technical Reports Server (NTRS)

    Sedlak, J.

    1995-01-01

    This paper presents an extended Kalman filter for estimating attitude sensor timing errors. Spacecraft attitude is determined by finding the mean rotation from a set of reference vectors in inertial space to the corresponding observed vectors in the body frame. Any timing errors in the observations can lead to attitude errors if either the spacecraft is rotating or the reference vectors themselves vary with time. The state vector here consists of the attitude quaternion, timetag biases, and, optionally, gyro drift rate biases. The filter models the timetags as random walk processes: their expectation values propagate as constants and white noise contributes to their covariance. Thus, this filter is applicable to cases where the true timing errors are constant or slowly varying. The observability of the state vector is studied first through an examination of the algebraic observability condition and then through several examples with simulated star tracker timing errors. The examples use both simulated and actual flight data from the Extreme Ultraviolet Explorer (EUVE). The flight data come from times when EUVE had a constant rotation rate, while the simulated data feature large angle attitude maneuvers. The tests include cases with timetag errors on one or two sensors, both constant and time-varying, and with and without gyro bias errors. Due to EUVE's sensor geometry, the observability of the state vector is severely limited when the spacecraft rotation rate is constant. In the absence of attitude maneuvers, the state elements are highly correlated, and the state estimate is unreliable. The estimates are particularly sensitive to filter mistuning in this case. The EUVE geometry, though, is a degenerate case having coplanar sensors and rotation vector. Observability is much improved and the filter performs well when the rate is either varying or noncoplanar with the sensors, as during a slew. Even with bad geometry and constant rates, if gyro biases are independently known, the timetag error for a single sensor can be accurately estimated as long as its boresight is not too close to the spacecraft rotation axis.

  10. Effects of uncertainty and variability on population declines and IUCN Red List classifications.

    PubMed

    Rueda-Cediel, Pamela; Anderson, Kurt E; Regan, Tracey J; Regan, Helen M

    2018-01-22

    The International Union for Conservation of Nature (IUCN) Red List Categories and Criteria is a quantitative framework for classifying species according to extinction risk. Population models may be used to estimate extinction risk or population declines. Uncertainty and variability arise in threat classifications through measurement and process error in empirical data and uncertainty in the models used to estimate extinction risk and population declines. Furthermore, species traits are known to affect extinction risk. We investigated the effects of measurement and process error, model type, population growth rate, and age at first reproduction on the reliability of risk classifications based on projected population declines on IUCN Red List classifications. We used an age-structured population model to simulate true population trajectories with different growth rates, reproductive ages and levels of variation, and subjected them to measurement error. We evaluated the ability of scalar and matrix models parameterized with these simulated time series to accurately capture the IUCN Red List classification generated with true population declines. Under all levels of measurement error tested and low process error, classifications were reasonably accurate; scalar and matrix models yielded roughly the same rate of misclassifications, but the distribution of errors differed; matrix models led to greater overestimation of extinction risk than underestimations; process error tended to contribute to misclassifications to a greater extent than measurement error; and more misclassifications occurred for fast, rather than slow, life histories. These results indicate that classifications of highly threatened taxa (i.e., taxa with low growth rates) under criterion A are more likely to be reliable than for less threatened taxa when assessed with population models. Greater scrutiny needs to be placed on data used to parameterize population models for species with high growth rates, particularly when available evidence indicates a potential transition to higher risk categories. © 2018 Society for Conservation Biology.

  11. Comparison of Minocycline Susceptibility Testing Methods for Carbapenem-Resistant Acinetobacter baumannii.

    PubMed

    Wang, Peng; Bowler, Sarah L; Kantz, Serena F; Mettus, Roberta T; Guo, Yan; McElheny, Christi L; Doi, Yohei

    2016-12-01

    Treatment options for infections due to carbapenem-resistant Acinetobacter baumannii are extremely limited. Minocycline is a semisynthetic tetracycline derivative with activity against this pathogen. This study compared susceptibility testing methods that are used in clinical microbiology laboratories (Etest, disk diffusion, and Sensititre broth microdilution methods) for testing of minocycline, tigecycline, and doxycycline against 107 carbapenem-resistant A. baumannii clinical isolates. Susceptibility rates determined with the standard broth microdilution method using cation-adjusted Mueller-Hinton (MH) broth were 77.6% for minocycline and 29% for doxycycline, and 92.5% of isolates had tigecycline MICs of ≤2 μg/ml. Using MH agar from BD and Oxoid, susceptibility rates determined with the Etest method were 67.3% and 52.3% for minocycline, 21.5% and 18.7% for doxycycline, and 71% and 29.9% for tigecycline, respectively. With the disk diffusion method using MH agar from BD and Oxoid, susceptibility rates were 82.2% and 72.9% for minocycline and 34.6% and 34.6% for doxycycline, respectively, and rates of MICs of ≤2 μg/ml were 46.7% and 23.4% for tigecycline. In comparison with the standard broth microdilution results, very major rates were low (∼2.8%) for all three drugs across the methods, but major error rates were higher (∼5.6%), especially with the Etest method. For minocycline, minor error rates ranged from 14% to 37.4%. For tigecycline, minor error rates ranged from 6.5% to 69.2%. The majority of minor errors were due to susceptible results being reported as intermediate. For minocycline susceptibility testing of carbapenem-resistant A. baumannii strains, very major errors are rare, but major and minor errors overcalling strains as intermediate or resistant occur frequently with susceptibility testing methods that are feasible in clinical laboratories. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  12. An observational study of drug administration errors in a Malaysian hospital (study of drug administration errors).

    PubMed

    Chua, S S; Tea, M H; Rahman, M H A

    2009-04-01

    Drug administration errors were the second most frequent type of medication error, after prescribing errors, but the latter were often intercepted; hence, administration errors were more likely to reach the patients. Therefore, this study was conducted to determine the frequency and types of drug administration errors in a Malaysian hospital ward. This is a prospective study that involved direct, undisguised observations of drug administrations in a hospital ward. A researcher was stationed in the ward under study for 15 days to observe all drug administrations, which were recorded in a data collection form and then compared with the drugs prescribed for the patient. A total of 1118 opportunities for error were observed and 127 administrations had errors. This gave an error rate of 11.4% [95% confidence interval (CI) 9.5-13.3]. If incorrect time errors were excluded, the error rate was reduced to 8.7% (95% CI 7.1-10.4). The most common types of drug administration errors were incorrect time (25.2%), followed by incorrect technique of administration (16.3%) and unauthorized drug errors (14.1%). In terms of clinical significance, 10.4% of the administration errors were considered potentially life-threatening. Intravenous routes were more likely to be associated with an administration error than oral routes (21.3% vs. 7.9%, P < 0.001). The study indicates that the frequency of drug administration errors in developing countries such as Malaysia is similar to that in developed countries. Incorrect time errors were also the most common type of drug administration error. A non-punitive system of reporting medication errors should be established to encourage more information to be documented so that a risk management protocol can be developed and implemented.
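
    The headline rate and interval quoted above follow from a simple normal-approximation calculation on the reported counts (127 errors in 1118 opportunities), as the sketch below shows.

        # Sketch: error rate with a normal-approximation 95% CI, using the counts
        # reported in the abstract (127 errors out of 1118 opportunities).
        import math

        errors, opportunities = 127, 1118
        p = errors / opportunities
        half_width = 1.96 * math.sqrt(p * (1 - p) / opportunities)
        print(f"error rate {p:.1%} "
              f"(95% CI {p - half_width:.1%} to {p + half_width:.1%})")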

  13. DNA/RNA transverse current sequencing: intrinsic structural noise from neighboring bases

    PubMed Central

    Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.

    2015-01-01

    Nanopore DNA sequencing via transverse current has emerged as a promising candidate for third-generation sequencing technology. It produces long read lengths which could alleviate problems with assembly errors inherent in current technologies. However, the high error rates of nanopore sequencing have to be addressed. A very important source of the error is the intrinsic noise in the current arising from carrier dispersion along the chain of the molecule, i.e., from the influence of neighboring bases. In this work we perform calculations of the transverse current within an effective multi-orbital tight-binding model derived from first-principles calculations of the DNA/RNA molecules, to study the effect of this structural noise on the error rates in DNA/RNA sequencing via transverse current in nanopores. We demonstrate that a statistical technique, utilizing not only the currents through the nucleotides but also the correlations in the currents, can in principle reduce the error rate below any desired precision. PMID:26150827

  14. Toward a new culture in verified quantum operations

    NASA Astrophysics Data System (ADS)

    Flammia, Steve

    Measuring error rates of quantum operations has become an indispensable component in any aspiring platform for quantum computation. As the quality of controlled quantum operations increases, the demands on the accuracy and precision with which we measure these error rates also grow. However, well-meaning scientists who report these error measures are faced with a sea of non-standardized methodologies and are often asked during publication for only coarse information about how their estimates were obtained. Moreover, there are serious incentives to use methodologies and measures that will continually produce numbers that improve with time to show progress. These problems will only be exacerbated as our typical error rates go from 1 in 100 to 1 in 1000 or less. This talk will survey existing challenges presented by the current paradigm and offer some suggestions for solutions that can help us move toward fair and standardized methods for error metrology in quantum computing experiments, and toward a culture that values full disclosure of methodologies and higher standards for data analysis.

  15. Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.

    ERIC Educational Resources Information Center

    Reddon, John R.; And Others

    1985-01-01

    Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)
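
    A minimal sketch of such a Monte Carlo type-one-error study, using Bartlett's chi-square approximation for the determinant-based sphericity statistic as the reference procedure (an assumption for illustration; the paper's exact statistic and critical values may differ). Data are sampled from a spherical multivariate normal population.

        # Sketch: Monte Carlo type I error of a determinant-based sphericity test,
        # using Bartlett's chi-square approximation; n, p, reps are illustrative.
        import numpy as np
        from scipy.stats import chi2

        rng = np.random.default_rng(3)
        n, p, alpha, reps = 50, 5, 0.05, 5000
        crit = chi2.ppf(1 - alpha, p * (p - 1) / 2)

        rejections = 0
        for _ in range(reps):
            x = rng.normal(size=(n, p))                     # spherical population
            r_det = np.linalg.det(np.corrcoef(x, rowvar=False))
            statistic = -((n - 1) - (2 * p + 5) / 6) * np.log(r_det)
            rejections += statistic > crit

        print(f"empirical type I error rate: {rejections / reps:.4f} "
              f"(nominal {alpha})")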

  16. Residents' numeric inputting error in computerized physician order entry prescription.

    PubMed

    Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong

    2016-04-01

    Computerized physician order entry (CPOE) systems with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting methods used in human computer interaction (HCI), produce different error rates and types, but this has received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, and to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting method (numeric row in the main keyboard vs. numeric keypad) and urgency level (urgent situation vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were also measured in sober prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. After controlling for performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportion of transposition and intrusion error types was significantly higher than in previous research. Among the numbers 3, 8, and 9, which were the less common digits used in prescriptions, the error rate was higher, posing a considerable risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. Inputting with the numeric keypad is recommended because it produced lower error rates in urgent situations. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and decimals. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from the spatial incidence of errors found in this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. What Randomized Benchmarking Actually Measures

    DOE PAGES

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; ...

    2017-09-28

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
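
    For context, the standard RB analysis that the abstract critiques fits an exponential decay to survival probabilities and converts the decay parameter to an error rate; the sketch below uses synthetic data and the usual conversion r = (d - 1)(1 - p)/d for a single qubit.

        # Sketch: standard RB analysis; fit A*p^m + B to survival probability vs.
        # sequence length m, then convert p to r. Survival data are synthetic.
        import numpy as np
        from scipy.optimize import curve_fit

        d = 2                                              # single-qubit dimension
        lengths = np.array([2, 4, 8, 16, 32, 64, 128], dtype=float)
        true_p = 0.995
        noise = np.random.default_rng(4).normal(0, 0.005, lengths.size)
        survival = 0.5 + 0.5 * true_p ** lengths + noise

        def model(m, a, p, b):
            return a * p ** m + b

        (a, p, b), _ = curve_fit(model, lengths, survival, p0=[0.5, 0.99, 0.5])
        r = (d - 1) * (1 - p) / d
        print(f"fitted decay p = {p:.5f}, RB error rate r = {r:.2e}")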

  18. Error and Error Mitigation in Low-Coverage Genome Assemblies

    PubMed Central

    Hubisz, Melissa J.; Lin, Michael F.; Kellis, Manolis; Siepel, Adam

    2011-01-01

    The recent release of twenty-two new genome sequences has dramatically increased the data available for mammalian comparative genomics, but twenty of these new sequences are currently limited to ∼2× coverage. Here we examine the extent of sequencing error in these 2× assemblies, and its potential impact in downstream analyses. By comparing 2× assemblies with high-quality sequences from the ENCODE regions, we estimate the rate of sequencing error to be 1–4 errors per kilobase. While this error rate is fairly modest, sequencing error can still have surprising effects. For example, an apparent lineage-specific insertion in a coding region is more likely to reflect sequencing error than a true biological event, and the length distribution of coding indels is strongly distorted by error. We find that most errors are contributed by a small fraction of bases with low quality scores, in particular, by the ends of reads in regions of single-read coverage in the assembly. We explore several approaches for automatic sequencing error mitigation (SEM), making use of the localized nature of sequencing error, the fact that it is well predicted by quality scores, and information about errors that comes from comparisons across species. Our automatic methods for error mitigation cannot replace the need for additional sequencing, but they do allow substantial fractions of errors to be masked or eliminated at the cost of modest amounts of over-correction, and they can reduce the impact of error in downstream phylogenomic analyses. Our error-mitigated alignments are available for download. PMID:21340033

  19. Frame error rate for single-hop and dual-hop transmissions in 802.15.4 LoWPANs

    NASA Astrophysics Data System (ADS)

    Biswas, Sankalita; Ghosh, Biswajit; Chandra, Aniruddha; Dhar Roy, Sanjay

    2017-08-01

    IEEE 802.15.4 is a popular standard for personal area networks used in different low-rate short-range applications. This paper examines the error rate performance of 802.15.4 in a fading wireless channel. An analytical model is formulated for evaluating the frame error rate (FER); first, for direct single-hop transmission between two sensor nodes, and second, for dual-hop (DH) transmission using an in-between relay node. During modeling, the transceiver design parameters are chosen according to the specifications set for both the 2.45 GHz and 868/915 MHz bands. We have also developed a simulation test bed for evaluating FER. Some results showed expected trends, such as higher FER for larger payloads. Other observations are not as intuitive. It is interesting to note that the error rates are significantly higher for the DH case, which demands a signal-to-noise ratio (SNR) penalty of about 7 dB. Also, the FER shoots from zero to one within a very small range of SNR.
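
    A small sketch of the bookkeeping relating bit, frame, and end-to-end error rates for single- and dual-hop links, assuming independent bit errors and two equally good hops; the simple AWGN BER expression is a stand-in for the paper's fading-channel analysis, and the payload size is illustrative.

        # Sketch: FER from BER for a single hop, and end-to-end FER for a
        # decode-and-forward dual hop; BER model and payload are illustrative.
        import numpy as np
        from scipy.special import erfc

        payload_bits = 8 * 30                          # 30-byte frame (illustrative)
        snr_db = np.arange(0, 12, 2)
        snr = 10 ** (snr_db / 10)

        ber = 0.5 * erfc(np.sqrt(snr))                 # simple AWGN BER stand-in
        fer_single = 1 - (1 - ber) ** payload_bits
        fer_dual = 1 - (1 - fer_single) ** 2           # frame must survive both hops

        for s, f1, f2 in zip(snr_db, fer_single, fer_dual):
            print(f"SNR {s:2d} dB: single-hop FER {f1:.3e}, dual-hop FER {f2:.3e}")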

  20. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of the overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
