Science.gov

Sample records for acceptable error rates

  1. The Rate of Physicochemical Incompatibilities, Administration Errors. Factors Correlating with Nurses' Errors.

    PubMed

    Fahimi, Fanak; Sefidani Forough, Aida; Taghikhani, Sepideh; Saliminejad, Leila

    2015-01-01

    Medication errors are commonly encountered in the hospital setting. Intravenous medications pose particular risks because of their greater complexity and the multiple steps required in their preparation, administration, and monitoring. We aimed to determine the rate of errors during the preparation and administration phase of intravenous medications and the correlation of these errors with the demographics of the nurses involved in the process. One hundred patients who were receiving IV medications were monitored by a trained pharmacist. The researcher accompanied the nurses during the preparation and administration process of IV medications. Collected data were compared with the accepted guidelines. A checklist was filled for each IV medication. Demographic data of the nurses were collected as well. A total of 454 IV medications were recorded. Inappropriate administration rate constituted a large proportion of errors in our study (35.3%). No significant or life-threatening drug interaction was recorded during the study. Evaluating the impact of the nurses' demographic characteristics on the incidence of medication errors showed a direct correlation between nurses' employment status and the rate of medication errors, while other characteristics did not show a significant impact on the rate of administration errors. Administration errors were significantly higher in the temporary 1-year contract group than in other groups (p-value < 0.0001). Study results show that there should be more vigilance on the administration rate of IV medications, especially by pharmacists, to prevent negative consequences. Optimizing the working conditions of nurses may play a crucial role. PMID:26185509

  2. Adaptation of bit error rate by coding

    NASA Astrophysics Data System (ADS)

    Marguinaud, A.; Sorton, G.

    1984-07-01

    The use of coding in spacecraft wideband communication to reduce transmitted power, save bandwidth, and lower antenna specifications was studied. The feasibility of a coder/decoder operating at a bit rate of 10 Mb/sec with a raw bit error rate (BER) of 10⁻³ and an output BER of 10⁻⁹ is demonstrated. A single block code protection and two coding levels of protection are examined. A single-level protection BCH code with a 5-error correction capacity, 16% redundancy, and interleaving depth 4, giving a coded block of 1020 bits, is simple to implement but has a BER of 7 × 10⁻⁹. A single-level BCH code with a 7-error correction capacity and 12% redundancy meets specifications, but is more difficult to implement. Two-level protection with 9% BCH outer and 10% BCH inner codes, both levels with a 3-error correction capacity and 8% redundancy, for a coded block of 7050 bits, is the most complex, but offers performance advantages.

  3. Controlling type-1 error rates in whole effluent toxicity testing

    SciTech Connect

    Smith, R.; Johnson, S.C.

    1995-12-31

    A form of variability, called the dose × test interaction, has been found to affect the variability of the mean differences from control in the statistical tests used to evaluate Whole Effluent Toxicity Tests for compliance purposes. Since the dose × test interaction is not included in these statistical tests, the assumed type-1 and type-2 error rates can be incorrect. The accepted type-1 error rate for these tests is 5%. Analysis of over 100 Ceriodaphnia, fathead minnow, and sea urchin fertilization tests showed that when the dose × test interaction term was not included in the calculations, the type-1 error rate was inflated to as high as 20%. In a compliance setting, this problem may lead to incorrect regulatory decisions. Statistical tests are proposed that properly incorporate the dose × test interaction variance.
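
    A minimal Monte Carlo sketch of the mechanism described above, with illustrative (not study-derived) effect sizes: when replicates within a test share a test-level random effect (the dose × test interaction) but the analysis treats all replicates as independent, the nominal 5% type-1 error rate is inflated; aggregating to test-level means restores it.

      # Monte Carlo sketch: ignoring a dose-x-test interaction inflates type-1 error.
      # All effect sizes below are illustrative assumptions, not values from the study.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_tests, n_reps = 10, 5                 # tests per condition, replicates per test
      sigma_within, sigma_interact = 1.0, 0.7

      def one_trial():
          # No true dose effect, so any rejection is a type-1 error.
          ctrl = rng.normal(0, sigma_within, (n_tests, n_reps)) \
               + rng.normal(0, sigma_interact, (n_tests, 1))    # shared test effect
          dose = rng.normal(0, sigma_within, (n_tests, n_reps)) \
               + rng.normal(0, sigma_interact, (n_tests, 1))
          naive = stats.ttest_ind(ctrl.ravel(), dose.ravel()).pvalue    # ignores interaction
          proper = stats.ttest_ind(ctrl.mean(1), dose.mean(1)).pvalue   # test-level means
          return naive < 0.05, proper < 0.05

      results = np.array([one_trial() for _ in range(2000)])
      print("naive type-1 rate:  %.3f" % results[:, 0].mean())   # well above 0.05
      print("proper type-1 rate: %.3f" % results[:, 1].mean())   # near 0.05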

  4. Multicenter Assessment of Gram Stain Error Rates.

    PubMed

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. PMID:26888900
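
    The headline figures above follow from simple count arithmetic; a quick back-of-envelope check using the totals reported in the abstract (the per-category counts are rounded reconstructions, not source data):

      # Back-of-envelope check of the reported Gram stain discrepancy rates.
      total_specimens = 6115
      discrepant = round(0.05 * total_specimens)   # ~5% of stains discrepant from culture
      missed = round(0.58 * discrepant)            # nothing on stain, growth on culture
      extra = discrepant - missed                  # organisms on stain, none in culture
      reviewed, reader_errors = 263, 63            # discrepant slides available for review
      print("discrepant ~", discrepant, "missed ~", missed, "extra ~", extra)
      print("reader error fraction = %.0f%%" % (100 * reader_errors / reviewed))  # ~24%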

  5. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Content of Error Rate Reports. 98.102 Section 98... DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report—At a minimum, States, the District of Columbia and Puerto Rico shall submit an initial error...

  6. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill-based decisions require the least cognitive effort, and knowledge-based decisions require the greatest. Errors can occur at any of the cognitive levels.

  7. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public... Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart apply to the fifty States, the District of Columbia and Puerto Rico. (b) Generally—States, the...

  8. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Error Rate Report. 98.100 Section 98.100 Public... Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart apply to the fifty States, the District of Columbia and Puerto Rico. (b) Generally—States, the...

  9. Defining Error Rates and Power for Detecting Answer Copying.

    ERIC Educational Resources Information Center

    Wollack, James A.; Cohen, Allan S.; Serlin, Ronald C.

    2001-01-01

    Developed a family-wise approach for evaluating the significance of copying indices designed to hold the Type I error rate constant for each examinee. Examined the Type I error rate and power of two indices under a variety of copying situations. Results indicate the superiority of a family-wise definition of Type I error rate over a pair-wise…

  10. Logical error rate in the Pauli twirling approximation

    PubMed Central

    Katabarwa, Amara; Geller, Michael R.

    2015-01-01

    Knowledge of the performance of error correction protocols is necessary for understanding the operation of potential quantum computers, but this requires physical error models that can be simulated efficiently with classical computers. The Gottesman-Knill theorem guarantees a class of such error models. Of these, one of the simplest is the Pauli twirling approximation (PTA), which is obtained by twirling an arbitrary completely positive error channel over the Pauli basis, resulting in a Pauli channel. In this work, we test the PTA’s accuracy at predicting the logical error rate by simulating the 5-qubit code using a 9-qubit circuit with realistic decoherence and unitary gate errors. We find evidence for good agreement with exact simulation, with the PTA overestimating the logical error rate by a factor of 2 to 3. Our results suggest that the PTA is a reliable predictor of the logical error rate, at least for low-distance codes. PMID:26419417
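
    A minimal sketch of the single-qubit Pauli twirling construction the abstract refers to: expanding each Kraus operator in the Pauli basis and keeping only the diagonal weights turns an arbitrary channel into a Pauli channel. The amplitude-damping channel here is an illustrative assumption, not the 9-qubit circuit simulated in the paper.

      # Pauli twirling approximation (PTA) for a single-qubit channel:
      # p_i = sum_k |tr(P_i K_k) / 2|^2 are the resulting Pauli channel probabilities.
      import numpy as np

      I = np.eye(2)
      X = np.array([[0, 1], [1, 0]])
      Y = np.array([[0, -1j], [1j, 0]])
      Z = np.diag([1.0, -1.0])

      def pauli_twirl(kraus_ops):
          return [sum(abs(np.trace(P.conj().T @ K) / 2) ** 2 for K in kraus_ops)
                  for P in (I, X, Y, Z)]

      # Illustrative channel: amplitude damping with decay probability gamma.
      gamma = 0.1
      K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
      K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

      pI, pX, pY, pZ = pauli_twirl([K0, K1])
      print("p_I=%.4f p_X=%.4f p_Y=%.4f p_Z=%.4f" % (pI, pX, pY, pZ))
      print("sum =", round(pI + pX + pY + pZ, 6))   # trace preservation: sums to 1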

  11. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single-loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model, whose attention-shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.
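
    A schematic sketch of the urgency-function idea, with hypothetical gains rather than Northrop model parameters: the full model's urgency weights error rate as well as error, so an axis whose error is still small but growing fast can capture attention.

      # Schematic two-axis attention allocation via urgency functions.
      # Gains w_e and w_r are illustrative assumptions, not Northrop model parameters.
      def urgency(err, err_rate, w_e=1.0, w_r=0.5, use_rate=True):
          return w_e * abs(err) + (w_r * abs(err_rate) if use_rate else 0.0)

      def attended_axis(errors, error_rates, use_rate=True):
          scores = [urgency(e, r, use_rate=use_rate)
                    for e, r in zip(errors, error_rates)]
          return scores.index(max(scores))   # control attention goes to peak urgency

      # Axis 1 has a small error that is growing fast; only the model that
      # includes error rate shifts attention to it before the error is large.
      print(attended_axis([0.8, 0.2], [0.0, 2.0], use_rate=False))   # -> 0
      print(attended_axis([0.8, 0.2], [0.0, 2.0], use_rate=True))    # -> 1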

  12. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase.

    PubMed

    McInerney, Peter; Adams, Paul; Hadi, Masood Z

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high-fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10× lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high-fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition. PMID:25197572
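
    A common way such per-base error rates are normalized is per template doubling; a sketch of that arithmetic with illustrative counts (this is the generic normalization, not necessarily the exact calculation used in the study):

      # Generic normalization for PCR fidelity: errors per base per template doubling.
      # The counts below are illustrative assumptions, not data from the study.
      import math

      def pcr_error_rate(mutations, bases_sequenced, fold_amplification):
          doublings = math.log2(fold_amplification)
          return mutations / (bases_sequenced * doublings)

      # e.g. 25 mutations in 1.2 Mb of sequenced clones after 10^6-fold amplification
      rate = pcr_error_rate(mutations=25, bases_sequenced=1.2e6, fold_amplification=1e6)
      print("error rate ~ %.2e per base per doubling" % rate)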

  13. Total Dose Effects on Error Rates in Linear Bipolar Systems

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent

    2007-01-01

    The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.

  14. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
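
    Two of the quantities discussed above are straightforward to reproduce: the exact (Clopper-Pearson) confidence interval for an observed error count, and the run length an error-free simulation needs in order to certify a CWER requirement at a given confidence (the "rule of three" regime). A minimal sketch:

      # Clopper-Pearson interval for an observed codeword error count, and the run
      # length an error-free simulation needs to certify a target CWER.
      import math
      from scipy.stats import beta

      def clopper_pearson(errors, trials, conf=0.95):
          a = (1 - conf) / 2
          lo = beta.ppf(a, errors, trials - errors + 1) if errors > 0 else 0.0
          hi = beta.ppf(1 - a, errors + 1, trials - errors) if errors < trials else 1.0
          return lo, hi

      def required_error_free_trials(cwer_req, conf=0.95):
          # Smallest N with P(0 errors in N trials | CWER = cwer_req) <= 1 - conf.
          return math.ceil(math.log(1 - conf) / math.log(1 - cwer_req))

      print(clopper_pearson(errors=3, trials=10**6))   # a handful of observed errors
      print(required_error_free_trials(1e-5))          # ~3/CWER: 299573 error-free words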

  15. Improved coded optical communication error rates using joint detection receivers

    NASA Astrophysics Data System (ADS)

    Dutton, Zachary; Guha, Saikat; Chen, Jian; Habif, Jonathan; Lazarus, Richard

    2012-02-01

    It is now known that coherent state (laser light) modulation is sufficient to reach the ultimate quantum limit (the Holevo bound) for classical communication capacity. However, all current optical communication systems are fundamentally limited in capacity because they perform measurements on single symbols at a time. To reach the Holevo bound, joint quantum measurements over long symbol blocks will be required. We recently proposed and demonstrated the "conditional pulse nulling" (CPN) receiver, which acts jointly on the time slots of a pulse-position-modulation (PPM) codeword by employing pulse nulling and quantum feedforward, and demonstrated a 2.3 dB improvement in error rate over direct detection (DD). In a communication system, coded error rates are made arbitrarily small by employing an outer code, such as Reed-Solomon (RS). Here we analyze RS coding of PPM errors with both DD and CPN receivers and calculate the outer code length requirements. We find that the improved PPM error rates with the CPN receiver translate into a >10× reduction in the required outer code length at high rates. This advantage also increases the range achievable for a given coding complexity. In addition, we present results for outer-coded error rates of our recently proposed "Green Machine," which realizes a joint detection advantage for binary phase-shift-keyed (BPSK) modulation.
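
    The outer-code arithmetic behind such calculations is standard bounded-distance decoding: an RS(n, k) code corrects up to t = (n - k)/2 symbol errors, so the decoded word error rate is a binomial tail in the PPM symbol error rate. A sketch assuming independent symbol errors and hypothetical error-rate values:

      # Word error rate of an RS(n, k) outer code given PPM symbol error rate p,
      # assuming independent symbol errors and bounded-distance decoding.
      from math import comb

      def rs_word_error_rate(n, k, p):
          t = (n - k) // 2    # symbol-error correction capability
          return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1))

      # Hypothetical symbol error rates standing in for DD vs. CPN detection:
      # a lower raw symbol error rate slashes the decoded word error rate.
      for p in (1e-2, 4e-3):
          print(p, rs_word_error_rate(n=255, k=223, p=p))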

  16. Error Growth Rate in the MM5 Model

    NASA Astrophysics Data System (ADS)

    Ivanov, S.; Palamarchuk, J.

    2006-12-01

    The goal of this work is to estimate model error growth rates in simulations of the atmospheric circulation by the MM5 model, all the way from the short range to the medium range and beyond. The major topics addressed are: (i) searching for the optimal set of parameterization schemes; (ii) evaluating the spatial structure and scales of the model error for various atmospheric fields; (iii) determining geographical regions where model errors are largest; (iv) defining particular atmospheric patterns contributing to fast and significant model error growth. Results are presented for geopotential, temperature, relative humidity, and horizontal wind component fields on standard surfaces over the Atlantic-European region during winter 2002. Various combinations of parameterization schemes for cumulus, PBL, moisture, and radiation are used to identify which one yields the smallest difference between the model state and analysis. The comparison of the model fields is carried out against the ERA-40 reanalysis of the ECMWF. Results show that the rate at which the model error grows, as well as its magnitude, varies depending on the forecast range, atmospheric variable, and level. The typical spatial scale and structure of the model error also depend on the particular atmospheric variable. The distribution of the model error over the domain can be separated into two parts: steady and transient. The steady part is associated with a few high mountain regions, including Greenland, where the model error is larger. The transient model error mainly moves along with areas of high gradients in the atmospheric flow. Acknowledgement: This study has been supported by NATO Science for Peace grant #981044. The MM5 modelling system used in this study has been provided by UCAR. ERA-40 re-analysis data have been obtained from the ECMWF data server.

  17. PVUSA procurement, acceptance, and rating practices for photovoltaic power plants

    SciTech Connect

    Dows, R.N.; Gough, E.J.

    1995-09-01

    This report is one in a series of PVUSA reports on PVUSA experiences and lessons learned at the demonstration sites in Davis and Kerman, California, and from participating utility host sites. During the course of approximately 7 years (1988-1994), 10 PV systems were installed, ranging from 20 kW to 500 kW: six 20-kW emerging-module-technology arrays (five on universal project-provided structures and one turnkey concentrator) and four turnkey utility-scale systems (200 to 500 kW). PVUSA took a very proactive approach in the procurement of these systems. In the absence of established procurement documents, the project team developed a comprehensive set of technical and commercial documents, which have been updated with each successive procurement. Working closely with vendors after the award in a two-way exchange produced designs better suited for utility applications. This report discusses the PVUSA procurement process through testing, acceptance, and rating of PV turnkey systems. Special emphasis is placed on the acceptance testing and rating methodology, which completes the procurement process by verifying that PV systems meet contract requirements. Lessons learned and recommendations are provided based on PVUSA experience.

  1. Neutron-induced soft error rate measurements in semiconductor memories

    NASA Astrophysics Data System (ADS)

    Ünlü, Kenan; Narayanan, Vijaykrishnan; Çetiner, Sacit M.; Degalahal, Vijay; Irwin, Mary J.

    2007-08-01

    Soft error rate (SER) testing of devices has been performed using the neutron beam at the Radiation Science and Engineering Center at Penn State University. The soft error susceptibility for different memory chips working at different technology nodes and operating voltages is determined. The effect of ¹⁰B on SER as an in situ excess charge source is observed. The effect of higher-energy neutrons on circuit operation will be published later. The Penn State Breazeale Nuclear Reactor was used as the neutron source in the experiments. The high neutron flux allows for accelerated testing of the SER phenomenon. The experiments and analyses have been performed only on soft errors due to thermal neutrons. Various memory chips manufactured by different vendors were tested at various supply voltages and reactor power levels. The effect of the ¹⁰B reaction caused by thermal neutron absorption on SER is discussed.

  2. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval R.; Wallman, Joel J.; Sanders, Barry C.

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates.
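
    One step in this chain is pure arithmetic: converting a reported average gate fidelity into a process (entanglement) infidelity via the standard relation r = (d + 1)(1 - F_avg)/d, which already shows that a 99.9% average fidelity is not the same as a 10⁻³ error rate. A sketch of just that conversion (the paper's tighter worst-case bounds are not reproduced here):

      # Convert a reported average gate fidelity into process infidelity,
      # r = (d + 1) * (1 - F_avg) / d, for a gate on a d-dimensional system.
      def process_infidelity(f_avg, n_qubits):
          d = 2 ** n_qubits
          return (d + 1) * (1 - f_avg) / d

      print(process_infidelity(0.999, 1))   # single-qubit gate, F_avg = 99.9% -> 0.0015
      print(process_infidelity(0.99, 2))    # two-qubit gate,    F_avg = 99%   -> 0.0125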

  3. Theoretical Accuracy for ESTL Bit Error Rate Tests

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin

    1998-01-01

    "Bit error rate" [BER] for the purposes of this paper is the fraction of binary bits which are inverted by passage through a communication system. BER can be measured for a block of sample bits by comparing a received block with the transmitted block and counting the erroneous bits. Bit Error Rate [BER] tests are the most common type of test used by the ESTL for evaluating system-level performance. The resolution of the test is obvious: the measurement cannot be resolved more finely than 1/N, the number of bits tested. The tolerance is not. This paper examines the measurement accuracy of the bit error rate test. It is intended that this information will be useful in analyzing data taken in the ESTL. This paper is divided into four sections and follows a logically ordered presentation, with results developed before they are evaluated. However, first-time readers will derive the greatest benefit from this paper by skipping the lengthy section devoted to analysis, and treating it as reference material. The analysis performed in this paper is based on a Probability Density Function [PDF] which is developed with greater detail in a past paper, Theoretical Accuracy for ESTL Probability of Acquisition Tests, EV4-98-609.

  4. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford, in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time, Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  5. Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

    This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAMs (UPD4216) and 1 Kb SRAMs (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.

  6. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    SciTech Connect

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-02-15

    These results demonstrate a lack of correlation between conventional IMRT QA performance metrics (Gamma passing rates) and dose errors in anatomic regions-of-interest. The most common acceptance criteria and published action levels therefore have insufficient, or at least unproven, predictive power for per-patient IMRT QA.

  7. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    SciTech Connect

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences

  8. Acceptance rate and reasons for rejection of manuscripts submitted to Veterinary Radiology & Ultrasound during 2012.

    PubMed

    Lamb, Christopher R; Mai, Wilfried

    2015-01-01

    Better understanding of the reasons why manuscripts are rejected, and recognition of the most frequent manuscript flaws identified by reviewers, should help submitting authors to avoid these pitfalls. Of 219 manuscripts submitted to Veterinary Radiology & Ultrasound in 2012, none (0%) was accepted without revision, four (2%) were withdrawn by the authors, 99 (45%) were accepted after revision, and 116 (53%) were rejected. All manuscripts for which minor revision was requested, and 73/86 (85%) manuscripts for which major revision was requested, were ultimately accepted. Acceptance rate was greater for retrospective studies and for manuscripts submitted from countries in which English was the primary language. The prevalences of flaws in manuscripts were poor writing (62%), deficiencies in data (60%), logical or methodological errors (28%), content not suitable for Veterinary Radiology & Ultrasound (26%), and lack of new or useful knowledge (25%). Likelihood of manuscript rejection was greater for lack of new or useful knowledge and content not suitable than for other manuscript flaws. The lower acceptance rate for manuscripts from countries in which English was not the primary language was associated with content not suitable and not poor writing. Submitting authors are encouraged to do more to recognize and address manuscript flaws before submission, for example by internal review. Specifically, submitting authors should express clearly the potential added value of their study in the introduction section of their manuscript, describe completely their methods and results, and consult the Editor-in-Chief if they are uncertain whether their subject matter would be suitable for the journal. PMID:24798652

  9. Coding gains and error rates from the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of bit signal-to-noise ratio Eb/N0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 × 10⁻⁸ and a BER of 1.4 × 10⁻⁹. The (15,1/6) code to be used by the Cometary Rendezvous Asteroid Flyby (CRAF)/Cassini missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because the (15,1/4) code requires higher bandwidth than the NASA (7,1/2) code, these gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.

  10. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements, made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center, of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on bit error rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  11. Error Rates and Channel Capacities in Multipulse PPM

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Moision, Bruce

    2007-01-01

    A method of computing channel capacities and error rates in multipulse pulse-position modulation (multipulse PPM) has been developed. The method makes it possible, when designing an optical PPM communication system, to determine whether and under what conditions a given multipulse PPM scheme would be more or less advantageous, relative to other candidate modulation schemes. In conventional M-ary PPM, each symbol is transmitted in a time frame that is divided into M time slots (where M is an integer >1), defining an M-symbol alphabet. A symbol is represented by transmitting a pulse (representing 1) during one of the time slots and no pulse (representing 0) during the other M - 1 time slots. Multipulse PPM is a generalization of PPM in which pulses are transmitted during two or more of the M time slots.
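
    The combinatorics implied above: with k pulses in M slots, the alphabet grows from M to C(M, k) symbols, so each symbol carries log2 C(M, k) bits. A sketch:

      # Alphabet size and bits per symbol for conventional vs. multipulse PPM.
      from math import comb, log2

      def ppm_bits_per_symbol(m_slots, k_pulses=1):
          return log2(comb(m_slots, k_pulses))

      print(ppm_bits_per_symbol(16))      # conventional 16-ary PPM: 4 bits/symbol
      print(ppm_bits_per_symbol(16, 2))   # 2-pulse PPM: log2(120) ~ 6.9 bits/symbol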

  12. Acceptance test procedure for the 105-KW isolation barrier leak rate

    SciTech Connect

    McCracken, K.J.

    1995-05-19

    This acceptance test procedure shall be used to: first, establish a basin water loss rate prior to installation of the two isolation barriers between the main basin and the discharge chute in K-Basin West; and second, perform an acceptance test to verify an acceptable leakage rate through the barrier seals. This Acceptance Test Procedure (ATP) has been prepared in accordance with CM-6-1 EP 4.2, Standard Engineering Practices.

  13. Testing Theories of Transfer Using Error Rate Learning Curves.

    PubMed

    Koedinger, Kenneth R; Yudelson, Michael V; Pavlik, Philip I

    2016-07-01

    We analyze naturally occurring datasets from student use of educational technologies to explore a long-standing question of the scope of transfer of learning. We contrast a faculty theory of broad transfer with a component theory of more constrained transfer. To test these theories, we develop statistical models of them. These models use latent variables to represent mental functions that are changed while learning to cause a reduction in error rates for new tasks. Strong versions of these models provide a common explanation for the variance in task difficulty and transfer. Weak versions decouple difficulty and transfer explanations by describing task difficulty with parameters for each unique task. We evaluate these models in terms of both their prediction accuracy on held-out data and their power in explaining task difficulty and learning transfer. In comparisons across eight datasets, we find that the component models provide both better predictions and better explanations than the faculty models. Weak model variations tend to improve generalization across students, but hurt generalization across items and make a sacrifice to explanatory power. More generally, the approach could be used to identify malleable components of cognitive functions, such as spatial reasoning or executive functions. PMID:27230694
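
    A minimal sketch in the spirit of the component models described above: the log-odds of an error fall linearly with accumulated practice on each latent component a task exercises, so transfer happens exactly when tasks share components. Component names and parameter values are illustrative assumptions, not fitted results:

      # Minimal component model of error-rate learning curves: log-odds of an error
      # fall linearly with practice on each latent component a task exercises.
      import math

      def p_error(task_components, practice_counts, difficulty, learn_rate):
          logit = difficulty
          for c in task_components:
              logit -= learn_rate[c] * practice_counts[c]   # transfer via shared components
          return 1 / (1 + math.exp(-logit))

      learn_rate = {"fractions": 0.3, "decimals": 0.2}
      practice = {"fractions": 5, "decimals": 0}
      # A new task benefits from prior practice only if it shares a component:
      print(p_error({"decimals"}, practice, difficulty=1.0, learn_rate=learn_rate))
      print(p_error({"decimals", "fractions"}, practice, difficulty=1.0, learn_rate=learn_rate))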

  14. 49 CFR 33.33 - Acceptance and rejection of rated orders.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... received on the same day, the person must accept, based upon the earliest delivery dates, only those orders... forms of civil transportation: (1) A person shall not accept a rated order for delivery on a specific... earliest date on which delivery can be made and offer to accept the order on the basis of that...

  15. 49 CFR 33.33 - Acceptance and rejection of rated orders.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... received on the same day, the person must accept, based upon the earliest delivery dates, only those orders... forms of civil transportation: (1) A person shall not accept a rated order for delivery on a specific... earliest date on which delivery can be made and offer to accept the order on the basis of that...

  16. 49 CFR 33.33 - Acceptance and rejection of rated orders.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... received on the same day, the person must accept, based upon the earliest delivery dates, only those orders... forms of civil transportation: (1) A person shall not accept a rated order for delivery on a specific... earliest date on which delivery can be made and offer to accept the order on the basis of that...

  17. 15 CFR 700.13 - Acceptance and rejection of rated orders.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... fill all the rated orders of equal priority status received on the same day, the person must accept... person shall not accept a rated order for delivery on a specific date if unable to fill the order by that date. However, the person must inform the customer of the earliest date on which delivery can be...

  18. 15 CFR 700.13 - Acceptance and rejection of rated orders.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... fill all the rated orders of equal priority status received on the same day, the person must accept... person shall not accept a rated order for delivery on a specific date if unable to fill the order by that date. However, the person must inform the customer of the earliest date on which delivery can be...

  19. 15 CFR 700.13 - Acceptance and rejection of rated orders.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... fill all the rated orders of equal priority status received on the same day, the person must accept... person shall not accept a rated order for delivery on a specific date if unable to fill the order by that date. However, the person must inform the customer of the earliest date on which delivery can be...

  1. 10 CFR 217.33 - Acceptance and rejection of rated orders.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... fill all of the rated orders of equal priority status received on the same day, the person must accept... not accept a rated order for delivery on a specific date if unable to fill the order by that date. However, the person must inform the customer of the earliest date on which delivery can be made and...

  2. 10 CFR 217.33 - Acceptance and rejection of rated orders.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... fill all of the rated orders of equal priority status received on the same day, the person must accept... not accept a rated order for delivery on a specific date if unable to fill the order by that date. However, the person must inform the customer of the earliest date on which delivery can be made and...

  3. 15 CFR 700.13 - Acceptance and rejection of rated orders.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 2 2011-01-01 2011-01-01 false Acceptance and rejection of rated... and rejection of rated orders. (a) Mandatory acceptance. (1) Except as otherwise specified in this... for comparable unrated orders. (b) Mandatory rejection. Unless otherwise directed by Commerce: (1)...

  4. 18 CFR 300.20 - Interim acceptance and review of Bonneville Power Administration rates.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Interim acceptance and review of Bonneville Power Administration rates. 300.20 Section 300.20 Conservation of Power and Water... Review and Approval § 300.20 Interim acceptance and review of Bonneville Power Administration rates....

  5. 18 CFR 300.20 - Interim acceptance and review of Bonneville Power Administration rates.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Interim acceptance and review of Bonneville Power Administration rates. 300.20 Section 300.20 Conservation of Power and Water... Review and Approval § 300.20 Interim acceptance and review of Bonneville Power Administration rates....

  6. 18 CFR 300.20 - Interim acceptance and review of Bonneville Power Administration rates.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Interim acceptance and review of Bonneville Power Administration rates. 300.20 Section 300.20 Conservation of Power and Water... Review and Approval § 300.20 Interim acceptance and review of Bonneville Power Administration rates....

  7. The Interrelationships between Ratings of Speech and Facial Acceptability in Persons with Cleft Palate.

    ERIC Educational Resources Information Center

    Sinko, Garnet R.; Hedrick, Dona L.

    1982-01-01

    Thirty untrained young adult observers rated the speech and facial acceptability of 20 speakers with cleft palate. The observers were reliable in rating both speech and facial acceptability. Judgments of facial acceptability were generally more positive, suggesting that speech is generally judged more negatively in speakers with cleft palate.…

  8. An Examination of Negative Halo Error in Ratings.

    ERIC Educational Resources Information Center

    Lance, Charles E.; And Others

    1990-01-01

    A causal model of halo error (HE) is derived. Three hypotheses are formulated to explain findings of negative HE. It is suggested that apparent negative HE may have been misinferred from existing correlational measures of HE, and that positive HE is more prevalent than had previously been thought. (SLD)

  9. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... cases in the sample with an error compared to the total number of cases in the sample; (2) Percentage of... the sample with an improper payment compared to the total number of cases in the sample; (3... improper payments in the sample compared to the total dollar amount of payments made in the sample;...

  10. Improving bit error rate through multipath differential demodulation

    NASA Astrophysics Data System (ADS)

    Lize, Yannick Keith; Christen, Louis; Nuccio, Scott; Willner, Alan E.; Kashyap, Raman

    2007-02-01

    Differential phase shift keyed (DPSK) transmission is currently under serious consideration as a deployable data-modulation format for high-capacity optical communication systems, due mainly to its 3 dB OSNR advantage over intensity modulation. However, DPSK OSNR requirements are still 3 dB higher than those of its coherent counterpart, PSK. Some strategies have been proposed to reduce this penalty through multichip soft detection, but the improvement is limited to 0.3 dB at a BER of 10⁻³. Better performance is expected from other soft-detection schemes using feedback control, but the implementation is not straightforward. We present here an optical multipath error correction technique for differentially encoded modulation formats such as differential phase-shift keying (DPSK) and differential polarization-shift keying (DPolSK) for fiber-based and free-space communication. This multipath error correction method combines optical and electronic logic gates. The scheme can easily be implemented using commercially available interferometers and high-speed logic gates, and does not require any data overhead, so it does not affect the effective bandwidth of the transmitted data. It is not merely compatible but also complementary to error correction codes commonly used in optical transmission systems, such as forward error correction (FEC). The technique consists of separating the demodulation at the receiver into multiple paths. Each path consists of a Mach-Zehnder interferometer with an integer-bit delay, with a different delay used in each path. Some basic logical operations follow, and the three paths are compared using a simple majority vote algorithm. Receiver sensitivity is improved by 0.35 dB in simulations and 1.5 dB experimentally at a BER of 10⁻³.
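
    The majority-vote step at the end of the scheme is easy to model in isolation: if each of three demodulation paths independently misreads a bit with probability p, voting reduces the error rate to 3p²(1 - p) + p³. A Monte Carlo sketch of just that step (the optical interferometry is not modeled):

      # Majority vote over three independent bit estimates: the error rate drops
      # from p to 3p^2(1-p) + p^3. Only the voting step is modeled here.
      import numpy as np

      rng = np.random.default_rng(1)
      p, n_bits = 0.05, 10**6
      bits = rng.integers(0, 2, n_bits)

      paths = [bits ^ (rng.random(n_bits) < p) for _ in range(3)]   # three noisy copies
      voted = (paths[0] + paths[1] + paths[2]) >= 2                 # majority vote

      print("single-path BER:", np.mean(paths[0] != bits))
      print("voted BER      :", np.mean(voted != bits))
      print("theory         :", 3 * p**2 * (1 - p) + p**3)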

  11. 10 CFR 217.33 - Acceptance and rejection of rated orders.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... directed by the Department of Energy for a rated order involving all forms of energy: (1) A person shall... directed by the Department of Energy for a rated order involving all forms of energy, rated orders may be... 10 Energy 3 2014-01-01 2014-01-01 false Acceptance and rejection of rated orders. 217.33...

  12. A Simple Approximation for the Symbol Error Rate of Triangular Quadrature Amplitude Modulation

    NASA Astrophysics Data System (ADS)

    Duy, Tran Trung; Kong, Hyung Yun

    In this paper, we consider the error performance of the regular triangular quadrature amplitude modulation (TQAM). In particular, using an accurate exponential bound of the complementary error function, we derive a simple approximation for the average symbol error rate (SER) of TQAM over Additive White Gaussian Noise (AWGN) and fading channels. The accuracy of our approach is verified by some simulation results.
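
    The key device in such derivations is replacing the Gaussian tail with a sum of exponentials so that SER expressions stay closed-form. As an illustration of the kind of bound involved (the specific bound used by the authors is not reproduced here), the well-known two-term exponential approximation of the Q-function:

      # Two-term exponential approximation of the Gaussian Q-function
      # (Chiani et al.): Q(x) ~ (1/12) exp(-x^2/2) + (1/4) exp(-2x^2/3).
      # Bounds of this form keep SER expressions in closed form.
      import math

      def q_exact(x):
          return 0.5 * math.erfc(x / math.sqrt(2))

      def q_approx(x):
          return math.exp(-x * x / 2) / 12 + math.exp(-2 * x * x / 3) / 4

      for x in (1.0, 2.0, 3.0):
          print(x, q_exact(x), q_approx(x))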

  13. Error Rates in Users of Automatic Face Recognition Software

    PubMed Central

    White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated 'candidate lists' selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared the performance of student participants to trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced "facial examiners" outperformed these groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631

  14. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  15. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    ERIC Educational Resources Information Center

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  16. Conserved rates and patterns of transcription errors across bacterial growth states and lifestyles.

    PubMed

    Traverse, Charles C; Ochman, Howard

    2016-03-22

    Errors that occur during transcription have received much less attention than the mutations that occur in DNA because transcription errors are not heritable and usually result in a very limited number of altered proteins. However, transcription error rates are typically several orders of magnitude higher than the mutation rate. Also, individual transcripts can be translated multiple times, so a single error can have substantial effects on the pool of proteins. Transcription errors can also contribute to cellular noise, thereby influencing cell survival under stressful conditions, such as starvation or antibiotic stress. Implementing a method that captures transcription errors genome-wide, we measured the rates and spectra of transcription errors in Escherichia coli and in endosymbionts for which mutation and/or substitution rates are greatly elevated over those of E. coli. Under all tested conditions, across all species, and even for different categories of RNA sequences (mRNA and rRNAs), there were no significant differences in rates of transcription errors, which ranged from 2.3 × 10⁻⁵ per nucleotide in mRNA of the endosymbiont Buchnera aphidicola to 5.2 × 10⁻⁵ per nucleotide in rRNA of the endosymbiont Carsonella ruddii. The similarity of transcription error rates in these bacterial endosymbionts to that in E. coli (4.63 × 10⁻⁵ per nucleotide) is all the more surprising given that genomic erosion has resulted in the loss of transcription fidelity factors in both Buchnera and Carsonella. PMID:26884158

  17. The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency

    PubMed Central

    Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ

    2013-01-01

    This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 2nd graders and 974 3rd graders. Participants were assessed using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and the Woodcock Reading Mastery Test (WRMT) Passage Comprehension subtest. Results from this study further illuminate the significant relationships between error rate, oral reading fluency, and reading comprehension performance, and provide grade-specific guidelines for appropriate error rate levels. Low oral reading fluency and high error rates predict the level of passage comprehension performance. For second grade students below benchmark, a fall assessment error rate of 28% predicts that student comprehension performance will be below average. For third grade students below benchmark, the fall assessment cut point is 14%. Instructional implications of the findings are discussed. PMID:24319307
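
    The reported cut points amount to a simple screening rule. A minimal Python sketch (the function name and interface are illustrative, not from the study):

        def below_average_comprehension_risk(grade, fall_error_rate):
            """Fall-assessment error-rate cut points from the study:
            28% for grade 2 and 14% for grade 3 (students below benchmark)."""
            cut_points = {2: 0.28, 3: 0.14}
            return fall_error_rate >= cut_points[grade]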

  18. Design and verification of a bit error rate tester in Altera FPGA for optical link developments

    NASA Astrophysics Data System (ADS)

    Cao, T.; Chang, J.; Gong, D.; Liu, C.; Liu, T.; Xiang, A.; Ye, J.

    2010-12-01

    This paper presents a custom bit error rate (BER) tester implementation in an Altera Stratix II GX signal integrity development kit. This BER tester deploys a parallel to serial pseudo random bit sequence (PRBS) generator, a bit and link status error detector and an error logging FIFO. The auto-correlation pattern enables receiver synchronization without specifying protocol at the physical layer. The error logging FIFO records both bit error data and link operation events. The tester's BER and data acquisition functions are utilized in a proton test of a 5 Gbps serializer. Experimental and data analysis results are discussed.
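
    As a rough illustration of the tester's two core blocks, a PRBS generator and a bit-error counter, here is a minimal Python sketch. It assumes the common PRBS-7 polynomial x^7 + x^6 + 1; the paper does not state which polynomial the FPGA design implements:

        def prbs7_bits(n, seed=0x7F):
            """Generate n bits from a PRBS-7 LFSR (x^7 + x^6 + 1)."""
            state = seed & 0x7F
            out = []
            for _ in range(n):
                fb = ((state >> 6) ^ (state >> 5)) & 1  # taps at bits 7 and 6
                out.append(fb)
                state = ((state << 1) | fb) & 0x7F
            return out

        def bit_error_rate(tx, rx):
            """BER as the fraction of mismatched bit positions."""
            errors = sum(t != r for t, r in zip(tx, rx))
            return errors / len(tx)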

  19. Algorithm-supported visual error correction (AVEC) of heart rate measurements in dogs, Canis lupus familiaris.

    PubMed

    Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt

    2015-12-01

    Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, and therefore makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction (p < .001). After AVEC, the HR standard deviation and variability (HRV; i.e., RMSSD, pNN50, and SDNN) were significantly greater than after correction by the Polar tool (all p < .001). Furthermore, the HR data strings with deleted values seemed to be closer to the original data than were those with inserted means. We concluded that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data. PMID:25540125
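
    The core of step (2), the algorithm-aided outlier decision, can be sketched in Python as follows; the window size and tolerance below are hypothetical placeholders, not the authors' published parameters:

        import statistics

        def avec_filter(hr, window=5, rel_tol=0.30):
            """Delete (rather than replace) HR values that deviate from the
            local median by more than rel_tol; deletion preserves the natural
            variability that mean-insertion would flatten."""
            cleaned = []
            for i, value in enumerate(hr):
                lo, hi = max(0, i - window), min(len(hr), i + window + 1)
                local_median = statistics.median(hr[lo:hi])
                if abs(value - local_median) <= rel_tol * local_median:
                    cleaned.append(value)
            return cleaned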

  20. 18 CFR 300.20 - Interim acceptance and review of Bonneville Power Administration rates.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... review of Bonneville Power Administration rates. 300.20 Section 300.20 Conservation of Power and Water... Review and Approval § 300.20 Interim acceptance and review of Bonneville Power Administration rates. (a) Opportunity to comment. The Commission will publish in the Federal Register notice of any filing made...

  1. Optimal GSTDN/TDRSS bit error rate evaluation using limited sample sizes

    NASA Technical Reports Server (NTRS)

    Coffey, R. E.; Lawrence, G. M.; Stuart, J. R.

    1982-01-01

    Statistical studies of telemetry errors were made on data from the Solar Mesosphere Explorer (SME). Examination of frame sync words, as received at the ground station, indicated a wide spread of Bit Error Rates (BER) among stations. A study of the distribution of errors per station pass, however, showed that there was a tendency for the station software to add an even number of spurious errors to the count. A count of wild points in science data, rejecting drop-outs and other system errors, yielded an average random BER of 3.1 × 10⁻⁶ with 99% confidence limits of 2.6 × 10⁻⁶ and 3.8 × 10⁻⁶. The system errors are typically 5 to 100 times more frequent than the truly random errors.
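
    Confidence limits of the kind quoted above follow from an exact Poisson model of the error count. A minimal Python sketch using SciPy's chi-square quantiles (the standard exact interval; whether the paper used this exact construction is an assumption):

        from scipy.stats import chi2

        def ber_with_ci(errors, bits, conf=0.99):
            """BER point estimate with an exact (Poisson) two-sided interval."""
            alpha = 1.0 - conf
            low = chi2.ppf(alpha / 2, 2 * errors) / (2 * bits) if errors else 0.0
            high = chi2.ppf(1 - alpha / 2, 2 * (errors + 1)) / (2 * bits)
            return errors / bits, low, high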

  2. Effects of speaking rate on the acceptability of change in segmental duration within a phrase

    NASA Astrophysics Data System (ADS)

    Muto, Makiko; Kato, Hiroaki; Tsuzaki, Minoru; Sagisaka, Yoshinori

    2001-05-01

    To contribute to the naturalness criteria of speech synthesis, acceptability of changes in segment duration has been investigated. Previous studies showed context dependency of the acceptability evaluation such as intraphrase positional effect, where listeners were more sensitive to the phrase-initial segment duration than the phrase-final one. Such contextual effects were independent of the original durations of the segments tested [Kato et al., J. Acoust. Soc. Am. 104, 540-549 (1998)]. However, past studies used only normal-speed speech and temporal variation was limited. The current study, therefore, examined the contextual effect with a wide variety of speaking rates. The materials were three-mora phrases with either rising or falling accent that were spoken at three rates (fast, normal, and slow) with or without a carrier sentence. The duration of each vowel was either lengthened or shortened (10-50 ms) and listeners evaluated the acceptability of these changes. The results showed a clear speaking-rate effect in parallel with the intraphrase positional effect: the acceptability declined more rapidly as the speaking rate became faster. These results, along with those of Kato et al., suggest that acceptability is evaluated based on the speaking rate rather than on the original duration itself. [Work supported by TAO, Japan.]

  3. Construct and Predictive Validity of Social Acceptability: Scores From High School Teacher Ratings on the School Intervention Rating Form

    ERIC Educational Resources Information Center

    Harrison, Judith R.; State, Talida M.; Evans, Steven W.; Schamberg, Terah

    2016-01-01

    The purpose of this study was to evaluate the construct and predictive validity of scores on a measure of social acceptability of class-wide and individual student intervention, the School Intervention Rating Form (SIRF), with high school teachers. Utilizing scores from 158 teachers, exploratory factor analysis revealed a three-factor (i.e.,…

  4. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  5. Bit-Error-Rate Performance of a Gigabit Ethernet O-CDMA Technology Demonstrator (TD)

    SciTech Connect

    Hernandez, V J; Mendez, A J; Bennett, C V; Lennon, W J

    2004-07-09

    An O-CDMA TD based on 2-D (wavelength/time) codes is described, with bit-error-rate (BER) and eye-diagram measurements given for eight users. Simulations indicate that the TD can support 32 asynchronous users.

  6. Speech Rate Acceptance Ranges as a Function of Evaluative Domain, Listener Speech Rate, and Communication Context.

    ERIC Educational Resources Information Center

    Street, Richard L., Jr.; Brady, Robert M.

    1982-01-01

    Speech rate appears to be an important communicative dimension upon which people evaluate the speech of others. Findings of this study indicate that speech rates at moderate through fast levels generated more favorable impressions of competence and social attractiveness than did slow speech. (PD)

  7. 18 CFR 300.20 - Interim acceptance and review of Bonneville Power Administration rates.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Interim acceptance and review of Bonneville Power Administration rates. 300.20 Section 300.20 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER...

  8. Exploring the Vocational Rehabilitation Acceptance Rates for Hispanics Versus Non-Hispanics in the United States

    ERIC Educational Resources Information Center

    Wilson, Keith B.; Senices, Julissa

    2005-01-01

    Compared with other ethnic minorities in the United States, numbers for the Hispanic population are significantly escalating. In the study described here, a chi-square test of proportions was used to examine the vocational rehabilitation (VR) acceptance rates among Hispanics and non-Hispanics. The test statistic revealed a statistically…

  9. An Examination of Three Texas High Schools' Restructuring Strategies that Resulted in an Academically Acceptable Rating

    ERIC Educational Resources Information Center

    Massey Fields, Chamara

    2011-01-01

    This study examined three high schools in a large urban school district in Texas that achieved an academically acceptable rating after being sanctioned to reconstitute by state agencies. Texas state accountability standards are a result of the No Child Left Behind Act of 2001 (NCLB). Texas state law requires schools to design a reconstitution plan…

  10. Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors

    PubMed Central

    Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep

    2014-01-01

    Introduction: Preanalytical errors, arising anywhere in the process from test request to admission of the specimen to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for rejected samples, with regard to their rates in certain test groups in our laboratory. Materials and methods: This preliminary study was designed on the rejected samples in a one-year period, based on the rates and types of inappropriateness. Test requests and blood samples of clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed specimens, clotted specimens, insufficient volume of specimen, and total request errors. Results: A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group was significantly higher (2.28%) than that of the other test groups (P < 0.001), including an insufficient-specimen-volume error rate of 1.38%. Rejection rates due to hemolysis, clotted specimen, and insufficient specimen volume were found to be 8%, 24% and 34%, respectively. Total request errors, particularly unintelligible requests, constituted 32% of the total for inpatients. Conclusions: The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples from inpatients, and blood-drawing errors, particularly insufficient specimen volume in the coagulation test group. Further studies should be performed after corrective and preventive actions to detect a possible decrease in sample rejection. PMID:25351356

  11. Bit error rate investigation of spin-transfer-switched magnetic tunnel junctions

    NASA Astrophysics Data System (ADS)

    Wang, Zihui; Zhou, Yuchen; Zhang, Jing; Huai, Yiming

    2012-10-01

    A method is developed to enable fast bit error rate (BER) characterization of spin-transfer-torque magnetic random access memory magnetic tunnel junction (MTJ) cells without integrating them with a complementary metal-oxide-semiconductor circuit. By utilizing the reflected signal from the devices under test, the measurement setup allows fast measurement of bit error rates at >10⁶ writing events per second. It is further shown that this method provides a time-domain capability to examine the MTJ resistance states during a switching event, which can assist write error analysis in great detail. The BER of a set of spin-transfer-torque MTJ cells has been evaluated by using this method, and bit-error-free operation (down to 10⁻⁸) for optimized in-plane MTJ cells has been demonstrated.

  12. Compensatory and Noncompensatory Information Integration and Halo Error in Performance Rating Judgments.

    ERIC Educational Resources Information Center

    Kishor, Nand

    1992-01-01

    The relationship between compensatory and noncompensatory information integration and the intensity of the halo effect in performance rating was studied. Seventy University of British Columbia (Canada) students rated 27 teacher profiles. That the way performance information is mentally integrated affects the intensity of halo error was supported.…

  13. A stochastic node-failure network with individual tolerable error rate at multiple sinks

    NASA Astrophysics Data System (ADS)

    Huang, Cheng-Fu; Lin, Yi-Kuei

    2014-05-01

    Many enterprises consider several criteria during data transmission, such as availability, delay, loss, and out-of-order packets, from the service level agreement (SLA) point of view. Hence internet service providers and customers are gradually focusing on the tolerable error rate in the transmission process. The internet service provider should satisfy each customer's specific demand and keep the transmission error rate within the limit set by the SLA. This paper evaluates the system reliability, defined as the probability that the demand can be fulfilled under the tolerable error rate at all sinks, by addressing a stochastic node-failure network (SNFN) in which each component (edge or node) has several capacities and a transmission error rate. An efficient algorithm is first proposed to generate all lower boundary points, the minimal capacity vectors satisfying demand and tolerable error rate for all sinks. Then the system reliability can be computed in terms of such points by applying a recursive sum of disjoint products. A benchmark network and a practical network in the United States are demonstrated to illustrate the utility of the proposed algorithm. The computational complexity of the proposed algorithm is also analyzed.

  14. Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers

    NASA Technical Reports Server (NTRS)

    Ha, Eunho; North, Gerald R.

    1995-01-01

    Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
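
    The mechanism is easy to reproduce: when the brightness-temperature curve is nonlinear, retrieving rain rate from the footprint-averaged temperature is not the same as averaging the pointwise rain rates. A short Python Monte Carlo sketch with a lognormal field (the saturating curve and all constants below are made up for illustration, not taken from the paper):

        import numpy as np

        rng = np.random.default_rng(0)
        T0, TMAX, C = 270.0, 290.0, 0.2           # illustrative constants

        def tb(rain):                             # concave, saturating Tb(R) curve
            return TMAX - (TMAX - T0) * np.exp(-C * rain)

        def rain_from_tb(t):                      # inverse curve used in retrieval
            return -np.log((TMAX - t) / (TMAX - T0)) / C

        field = rng.lognormal(mean=1.0, sigma=1.0, size=100_000)
        retrieved = rain_from_tb(tb(field).mean())
        print(field.mean(), retrieved)            # retrieved < true mean: beam-filling bias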

  15. High speed and adaptable error correction for megabit/s rate quantum key distribution

    PubMed Central

    Dixon, A. R.; Sato, H.

    2014-01-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90–94% of the ideal secure key rate over all fibre distances from 0–80 km. PMID:25450416

  16. High speed and adaptable error correction for megabit/s rate quantum key distribution

    NASA Astrophysics Data System (ADS)

    Dixon, A. R.; Sato, H.

    2014-12-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  17. Type-II generalized family-wise error rate formulas with application to sample size determination.

    PubMed

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26914402
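
    The r-power itself is straightforward to estimate by simulation, which is a useful cross-check on analytic sample-size formulas. A Python Monte Carlo sketch under a Bonferroni single-step procedure (the normal test statistics, effect size, and counts below are illustrative; the paper and its rPowerSampleSize package treat single-step and step-wise procedures exactly):

        import numpy as np
        from scipy.stats import norm

        def r_power_mc(m=5, m1=3, effect=3.0, alpha=0.05, r=2,
                       reps=100_000, seed=1):
            """P(reject at least r of the m1 false nulls) under Bonferroni."""
            rng = np.random.default_rng(seed)
            z = rng.standard_normal((reps, m))
            z[:, :m1] += effect                   # shift the m1 false nulls
            p = 2 * norm.sf(np.abs(z))
            hits = (p[:, :m1] < alpha / m).sum(axis=1)
            return (hits >= r).mean()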

  18. Methylphenidate improves diminished error and feedback sensitivity in ADHD: An evoked heart rate analysis.

    PubMed

    Groen, Yvonne; Mulder, Lambertus J M; Wijers, Albertus A; Minderaa, Ruud B; Althaus, Monika

    2009-09-01

    Attention Deficit Hyperactivity Disorder (ADHD) is a developmental disorder that has previously been related to a decreased sensitivity to errors and feedback. Supplementary to the traditional performance measures, this study uses autonomic measures to study this decreased sensitivity in ADHD and the modulating effects of medication. Children with ADHD, on and off Methylphenidate (Mph), and typically developing (TD) children performed a selective attention task with three feedback conditions: reward, punishment and no feedback. Evoked Heart Rate (EHR) responses were computed for correct and error trials. All groups performed more efficiently with performance feedback than without. EHR analyses, however, showed that enhanced EHR decelerations on error trials seen in TD children, were absent in the medication-free ADHD group for all feedback conditions. The Mph-treated ADHD group showed 'normalised' EHR decelerations to errors and error feedback, depending on the feedback condition. This study provides further evidence for a decreased physiological responsiveness to errors and error feedback in children with ADHD and for a modulating effect of Mph. PMID:19464338

  19. High-rate error-correction codes for the optical atmospheric channel

    NASA Astrophysics Data System (ADS)

    Anguita, Jaime A.; Djordjevic, Ivan B.; Neifeld, Mark A.; Vasic, Bane V.

    2005-08-01

    We evaluate two error correction systems based on low-density parity-check (LDPC) codes for free-space optical (FSO) communication channels subject to atmospheric turbulence. We simulate the effect of turbulence on the received signal by modeling the channel with a gamma-gamma distribution. We compare the bit-error rate performance of these codes with the performance of Reed-Solomon codes of similar rate and obtain coding gains from 3 to 14 dB depending on the turbulence conditions.
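
    The gamma-gamma irradiance model used here is the product of two independent unit-mean Gamma variates, representing large- and small-scale scintillation. A minimal Python sampler (alpha and beta would be set from the turbulence strength):

        import numpy as np

        def gamma_gamma_samples(alpha, beta, n, seed=0):
            """Unit-mean gamma-gamma irradiance samples."""
            rng = np.random.default_rng(seed)
            x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)
            y = rng.gamma(shape=beta, scale=1.0 / beta, size=n)
            return x * y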

  20. Estimation of the minimum mRNA splicing error rate in vertebrates.

    PubMed

    Skandalis, A

    2016-01-01

    The majority of protein coding genes in vertebrates contain several introns that are removed by the mRNA splicing machinery. Errors during splicing can generate aberrant transcripts and degrade the transmission of genetic information thus contributing to genomic instability and disease. However, estimating the error rate of constitutive splicing is complicated by the process of alternative splicing, which can generate multiple alternative transcripts per locus and is particularly active in humans. In order to estimate the error frequency of constitutive mRNA splicing and avoid bias by alternative splicing we have characterized the frequency of splice variants at three loci (HPRT, POLB, and TRPV1) in multiple tissues of six vertebrate species. Our analysis revealed that the frequency of splice variants varied widely among loci, tissues, and species. However, the lowest observed frequency is quite constant among loci and approximately 0.1% aberrant transcripts per intron. Arguably this reflects the "irreducible" error rate of splicing, which consists primarily of the combination of replication errors by RNA polymerase II in splice consensus sequences and spliceosome errors in correctly pairing exons. PMID:26811995

  1. Acceptable bit-rates for human face identification from CCTV imagery

    NASA Astrophysics Data System (ADS)

    Tsifouti, Anastasia; Triantaphillidou, Sophie; Bilissi, Efthimia; Larabi, Mohamed-Chaker

    2013-01-01

    The objective of this investigation is to produce recommendations for acceptable bit-rates of CCTV footage of people onboard London buses. The majority of CCTV recorders on buses use a proprietary format based on the H.264/AVC video coding standard, exploiting both spatial and temporal redundancy. Low bit-rates are favored in the CCTV industry but they compromise the image usefulness of the recorded imagery. In this context usefulness is defined by the presence of enough facial information remaining in the compressed image to allow a specialist to identify a person. The investigation includes four steps: 1) Collection of representative video footage. 2) The grouping of video scenes based on content attributes. 3) Psychophysical investigations to identify key scenes, which are most affected by compression. 4) Testing of recording systems using the key scenes and further psychophysical investigations. The results are highly dependent upon scene content. For example, very dark and very bright scenes were the most challenging to compress, requiring higher bit-rates to maintain useful information. The acceptable bit-rates are also found to be dependent upon the specific CCTV system used to compress the footage, presenting challenges in drawing conclusions about universal 'average' bit-rates.

  2. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    PubMed

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed.

  3. 20 CFR 602.43 - No incentives or sanctions based on specific error rates.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false No incentives or sanctions based on specific error rates. 602.43 Section 602.43 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR QUALITY CONTROL IN THE FEDERAL-STATE UNEMPLOYMENT INSURANCE SYSTEM Quality Control...

  4. 20 CFR 602.43 - No incentives or sanctions based on specific error rates.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 3 2012-04-01 2012-04-01 false No incentives or sanctions based on specific error rates. 602.43 Section 602.43 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR QUALITY CONTROL IN THE FEDERAL-STATE UNEMPLOYMENT INSURANCE SYSTEM Quality Control...

  5. The Impact of Statistically Adjusting for Rater Effects on Conditional Standard Errors of Performance Ratings

    ERIC Educational Resources Information Center

    Raymond, Mark R.; Harik, Polina; Clauser, Brian E.

    2011-01-01

    Prior research indicates that the overall reliability of performance ratings can be improved by using ordinary least squares (OLS) regression to adjust for rater effects. The present investigation extends previous work by evaluating the impact of OLS adjustment on standard errors of measurement ("SEM") at specific score levels. In addition, a…

  6. Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.

    2001-01-01

    Submitted the Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and identify characteristics predictive of score reliability variations. Results for 67 analyses generally support the internal…

  7. Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for sub-130 nm Technologies

    NASA Technical Reports Server (NTRS)

    Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Michael M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.

    2010-01-01

    We use ray tracing software to model various levels of spacecraft shielding complexity and energy deposition pulse height analysis to study how it affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments as well as the October 1989 and July 2000 solar particle events.

  8. Advanced Communications Technology Satellite (ACTS) Fade Compensation Protocol Impact on Very Small-Aperture Terminal Bit Error Rate Performance

    NASA Technical Reports Server (NTRS)

    Cox, Christina B.; Coney, Thom A.

    1999-01-01

    The Advanced Communications Technology Satellite (ACTS) communications system operates at Ka band. ACTS uses an adaptive rain fade compensation protocol to reduce the impact of signal attenuation resulting from propagation effects. The purpose of this paper is to present the results of an analysis characterizing the improvement in VSAT performance provided by this protocol. The metric for performance is VSAT bit error rate (BER) availability. The acceptable availability defined by communication system design specifications is 99.5% for a BER of 5 × 10⁻⁷ or better. VSAT BER availabilities with and without rain fade compensation are presented. A comparison shows the improvement in BER availability realized with rain fade compensation. Results are presented for an eight-month period and for 24 months spread over a three-year period. The two time periods represent two different configurations of the fade compensation protocol. Index terms: adaptive coding, attenuation, propagation, rain, satellite communication, satellites.
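
    The availability metric reduces to the fraction of measurement intervals that meet the BER specification. A one-function Python sketch of the bookkeeping (interval definition and data collection are omitted; the function name is illustrative):

        def ber_availability(interval_bers, threshold=5e-7):
            """Fraction of intervals at or below the spec BER; the design
            target quoted above is 99.5% availability at 5e-7."""
            return sum(b <= threshold for b in interval_bers) / len(interval_bers)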

  9. The Impact of Sex of the Speaker, Sex of the Rater and Profanity Type of Language Trait Errors in Speech Evaluation: A Test of the Rating Error Paradigm.

    ERIC Educational Resources Information Center

    Bock, Douglas G.; And Others

    1984-01-01

    This study (1) demonstrates the negative impact of profanity in a public speech and (2) sheds light on the conceptualization of the term "rating error." Implications for classroom teaching are discussed. (PD)

  10. Putative extremely high rate of proteome innovation in lancelets might be explained by high rate of gene prediction errors

    PubMed Central

    Bányai, László; Patthy, László

    2016-01-01

    A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since morphologically Chinese and Florida lancelets are highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation. PMID:27476717

  11. The effects of digitizing rate and phase distortion errors on the shock response spectrum

    NASA Technical Reports Server (NTRS)

    Wise, J. H.

    1983-01-01

    Some of the methods used for acquisition and digitization of high-frequency transients in the analysis of pyrotechnic events, such as explosive bolts for spacecraft separation, are discussed with respect to the reduction of errors in the computed shock response spectrum. Equations are given for maximum error as a function of the sampling rate, phase distortion, and slew rate, and the effects of the characteristics of the filter used are analyzed. A filter exhibiting good passband amplitude response, phase response, and step response is a compromise between the flat passband of the elliptic filter and the phase response of the Bessel filter; it is suggested that such a filter be used with a sampling rate of 10f (5 percent).

  12. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    PubMed

    He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results of different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ (errors/particle/cm²), while the MTTF is approximately 110.7 h. PMID:27583533

  13. Error-Rate Bounds for Coded PPM on a Poisson Channel

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.

  14. Development and Validation of the Controller Acceptance Rating Scale (CARS): Results of Empirical Research

    NASA Technical Reports Server (NTRS)

    Lee, Katharine K.; Kerns, Karol; Bone, Randall

    2001-01-01

    The measurement of operational acceptability is important for the development, implementation, and evolution of air traffic management decision support tools. The Controller Acceptance Rating Scale was developed at NASA Ames Research Center for the development and evaluation of the Passive Final Approach Spacing Tool. CARS was modeled after a well-known pilot evaluation rating instrument, the Cooper-Harper Scale, and has since been used in the evaluation of the User Request Evaluation Tool, developed by MITRE's Center for Advanced Aviation System Development. In this paper, we provide a discussion of the development of CARS and an analysis of the empirical data collected with CARS to examine construct validity. Results of intraclass correlations indicated statistically significant reliability for the CARS. From the subjective workload data that were collected in conjunction with the CARS, it appears that the expected set of workload attributes was correlated with the CARS. As expected, the analysis also showed that CARS was a sensitive indicator of the impact of decision support tools on controller operations. Suggestions for future CARS development and its improvement are also provided.

  15. A minimum-error, energy-constrained neural code is an instantaneous-rate code.

    PubMed

    Johnson, Erik C; Jones, Douglas L; Ratnam, Rama

    2016-04-01

    Sensory neurons code information about stimuli in their sequence of action potentials (spikes). Intuitively, the spikes should represent stimuli with high fidelity. However, generating and propagating spikes is a metabolically expensive process. It is therefore likely that neural codes have been selected to balance energy expenditure against encoding error. Our recently proposed optimal, energy-constrained neural coder (Jones et al., Frontiers in Computational Neuroscience, 9, 61, 2015) postulates that neurons time spikes to minimize the trade-off between stimulus reconstruction error and expended energy by adjusting the spike threshold using a simple dynamic threshold. Here, we show that this proposed coding scheme is related to existing coding schemes, such as rate and temporal codes. We derive an instantaneous rate coder and show that the spike-rate depends on the signal and its derivative. In the limit of high spike rates the spike train maximizes fidelity given an energy constraint (average spike-rate), and the predicted interspike intervals are identical to those generated by our existing optimal coding neuron. The instantaneous rate coder is shown to closely match the spike-rates recorded from P-type primary afferents in weakly electric fish. In particular, the coder is a predictor of the peristimulus time histogram (PSTH). When tested against in vitro cortical pyramidal neuron recordings, the instantaneous spike-rate approximates DC step inputs, matching both the average spike-rate and the time-to-first-spike (a simple temporal code). Overall, the instantaneous rate coder relates optimal, energy-constrained encoding to the concepts of rate-coding and temporal-coding, suggesting a possible unifying principle of neural encoding of sensory signals. PMID:26922680
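
    The headline relation, a spike rate driven by the signal and its derivative, can be caricatured in a few lines of Python; the gains below are hypothetical, whereas the authors derive theirs from the energy-constrained reconstruction objective:

        import numpy as np

        def instantaneous_rate(signal, dt, a=1.0, b=0.5):
            """Toy instantaneous-rate code: rate tracks the stimulus and its
            derivative, rectified so rates stay non-negative."""
            deriv = np.gradient(signal, dt)
            return np.maximum(a * signal + b * deriv, 0.0)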

  16. Automatic generation control of a hydrothermal system with new area control error considering generation rate constraint

    SciTech Connect

    Das, D.; Nanda, J.; Kothari, M.L.; Kothari, D.P. )

    1990-01-01

    The paper presents an analysis of automatic generation control based on a new area control error strategy for an interconnected hydrothermal system in the discrete mode, considering generation rate constraints (GRCs). The investigations reveal that the system dynamic performances following a step load perturbation in either of the areas, with constrained and unconstrained optimum gain settings, are not much different; hence, optimum controller settings can be achieved without considering GRCs in the mathematical model.

  17. A forward error correction technique using a high-speed, high-rate single chip codec

    NASA Technical Reports Server (NTRS)

    Boyd, R. W.; Hartman, W. F.; Jones, Robert E.

    1989-01-01

    The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10⁻⁵ bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.

  18. Safety Aspects of Pulsed Dose Rate Brachytherapy: Analysis of Errors in 1,300 Treatment Sessions

    SciTech Connect

    Koedooder, Kees; Wieringen, Niek van; Grient, Hans N.B. van der; Herten, Yvonne R.J. van; Pieters, Bradley R.; Blank, Leo

    2008-03-01

    Purpose: To determine the safety of pulsed-dose-rate (PDR) brachytherapy by analyzing errors and technical failures during treatment. Methods and Materials: More than 1,300 patients underwent treatment with PDR brachytherapy, using five PDR remote afterloaders. Most patients were treated with consecutive pulse schemes, including outside regular office hours. Tumors were located in the breast, esophagus, prostate, bladder, gynecology, anus/rectum, orbit, head/neck, with a miscellaneous group of small numbers, such as the lip, nose, and bile duct. Errors and technical failures were analyzed for 1,300 treatment sessions, for which nearly 20,000 pulses were delivered. For each tumor localization, the number and type of occurring errors were determined, as well as which localizations were more error-prone than others. Results: By routinely using the built-in dummy check source, only 0.2% of all pulses showed an error during the phase of the pulse when the active source was outside the afterloader. Localizations treated using flexible catheters had greater error frequencies than those treated with straight needles or rigid applicators. Disturbed pulse frequencies were in the range of 0.6% for the anus/rectum on a classic version 1 afterloader to 14.9% for orbital tumors using a version 2 afterloader. Exceeding the planned overall treatment time by >10% was observed in only 1% of all treatments. Patients received their dose as originally planned in 98% of all treatments. Conclusions: According to the experience in our institute with 1,300 PDR treatments, we found that PDR is a safe brachytherapy treatment modality, both during and outside of office hours.

  19. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis

    NASA Technical Reports Server (NTRS)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, which meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the Manchester encoded bits specified. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) requirements of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.

  20. Error rates for nanopore discrimination among cytosine, methylcytosine, and hydroxymethylcytosine along individual DNA strands.

    PubMed

    Schreiber, Jacob; Wescoe, Zachary L; Abu-Shumays, Robin; Vivian, John T; Baatar, Baldandorj; Karplus, Kevin; Akeson, Mark

    2013-11-19

    Cytosine, 5-methylcytosine, and 5-hydroxymethylcytosine were identified during translocation of single DNA template strands through a modified Mycobacterium smegmatis porin A (M2MspA) nanopore under control of phi29 DNA polymerase. This identification was based on three consecutive ionic current states that correspond to passage of modified or unmodified CG dinucleotides and their immediate neighbors through the nanopore limiting aperture. To establish quality scores for these calls, we examined ~3,300 translocation events for 48 distinct DNA constructs. Each experiment analyzed a mixture of cytosine-, 5-methylcytosine-, and 5-hydroxymethylcytosine-bearing DNA strands that contained a marker that independently established the correct cytosine methylation status at the target CG of each molecule tested. To calculate error rates for these calls, we established decision boundaries using a variety of machine-learning methods. These error rates depended upon the identity of the bases immediately 5' and 3' of the targeted CG dinucleotide, and ranged from 1.7% to 12.2% for a single-pass read. We estimate that Q40 values (0.01% error rates) for methylation status calls could be achieved by reading single molecules 5-19 times depending upon sequence context. PMID:24167260

  1. Evaluating the Type II error rate in a sediment toxicity classification using the Reference Condition Approach.

    PubMed

    Rodriguez, Pilar; Maestre, Zuriñe; Martinez-Madrid, Maite; Reynoldson, Trefor B

    2011-01-17

    Sediments from 71 river sites in Northern Spain were tested using the oligochaete Tubifex tubifex (Annelida, Clitellata) chronic bioassay. 47 sediments were identified as reference primarily from macroinvertebrate community characteristics. The data for the toxicological endpoints were examined using non-metric MDS. Probability ellipses were constructed around the reference sites in multidimensional space to establish a classification for assessing test-sediments into one of three categories (Non Toxic, Potentially Toxic, and Toxic). The construction of such probability ellipses sets the Type I error rate. However, we also wished to include in the decision process for identifying pass-fail boundaries the degree of disturbance required to be detected, and the likelihood of being wrong in detecting that disturbance (i.e. the Type II error). Setting the ellipse size to use based on Type I error does not include any consideration of the probability of Type II error. To do this, the toxicological response observed in the reference sediments was manipulated by simulating different degrees of disturbance (simpacted sediments), and measuring the Type II error rate for each set of the simpacted sediments. From this procedure, the frequency at each probability ellipse of identifying impairment using sediments with known level of disturbance is quantified. Thirteen levels of disturbance and seven probability ellipses were tested. Based on the results the decision boundary for Non Toxic and Potentially Toxic was set at the 80% probability ellipse, and the boundary for Potentially Toxic and Toxic at the 95% probability ellipse. Using this approach, 9 test sediments were classified as Toxic, 2 as Potentially Toxic, and 13 as Non Toxic. PMID:20980065

  2. Reducing error rates in straintronic multiferroic nanomagnetic logic by pulse shaping

    NASA Astrophysics Data System (ADS)

    Munira, Kamaram; Xie, Yunkun; Nadri, Souheil; Forgues, Mark B.; Salehi Fashami, Mohammad; Atulasimha, Jayasimha; Bandyopadhyay, Supriyo; Ghosh, Avik W.

    2015-06-01

    Dipole-coupled nanomagnetic logic (NML), where nanomagnets (NMs) with bistable magnetization states act as binary switches and information is transferred between them via dipole-coupling and Bennett clocking, is a potential replacement for conventional transistor logic since magnets dissipate less energy than transistors when they switch in a logic circuit. Magnets are also ‘non-volatile’ and hence can store the results of a computation after the computation is over, thereby doubling as both logic and memory—a feat that transistors cannot achieve. However, dipole-coupled NML is much more error-prone than transistor logic at room temperature (>1%) because thermal noise can easily disrupt magnetization dynamics. Here, we study a particularly energy-efficient version of dipole-coupled NML known as straintronic multiferroic logic (SML) where magnets are clocked/switched with electrically generated mechanical strain. By appropriately ‘shaping’ the voltage pulse that generates strain, we show that the error rate in SML can be reduced to tolerable limits. We describe the error probabilities associated with various stress pulse shapes and discuss the trade-off between error rate and switching speed in SML. The lowest error probability is obtained when a ‘shaped’ high voltage pulse is applied to strain the output NM followed by a low voltage pulse. The high voltage pulse quickly rotates the output magnet’s magnetization by 90° and aligns it roughly along the minor (or hard) axis of the NM. Next, the low voltage pulse produces the critical strain to overcome the shape anisotropy energy barrier in the NM and produce a monostable potential energy profile in the presence of dipole coupling from the neighboring NM. The magnetization of the output NM then migrates to the global energy minimum in this monostable profile and completes a 180° rotation (magnetization flip) with high likelihood.

  3. High-speed communication detector characterization by bit error rate measurements

    NASA Technical Reports Server (NTRS)

    Green, S. I.

    1978-01-01

    Performance data taken on several candidate high-data-rate laser communications photodetectors are presented. Measurements of bit error rate versus signal level were made in both a 1064 nm system at 400 Mbps and a 532 nm system at 500 Mbps. RCA silicon avalanche photodiodes are superior at 1064 nm, but the Rockwell hybrid III-V avalanche photodiode preamplifiers offer potentially superior performance. Varian dynamic crossed-field photomultipliers are superior at 532 nm; however, the RCA silicon avalanche photodiode is a close contender.

  4. Bit error rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Technical Reports Server (NTRS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high-burst-rate, serial minimum-shift-keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  5. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  6. Influence of wave-front aberrations on bit error rate in inter-satellite laser communications

    NASA Astrophysics Data System (ADS)

    Yang, Yuqiang; Han, Qiqi; Tan, Liying; Ma, Jing; Yu, Siyuan; Yan, Zhibin; Yu, Jianjie; Zhao, Sheng

    2011-06-01

    We derive the bit error rate (BER) of inter-satellite laser communication (lasercom) links with on-off-keying systems in the presence of both wave-front aberrations and pointing error, but without considering detector noise. Wave-front aberrations induced by the receiver terminal have no influence on the BER, while wave-front aberrations induced by the transmitter terminal increase the BER. The BER depends on the area S which is truncated out by the threshold intensity of the detector (such as an APD) on the intensity function in the receiver plane, and changes with the root mean square (RMS) of the wave-front aberrations. Numerical results show that the BER rises with increasing RMS value. The influences of astigmatism, coma, curvature and spherical aberration on the BER are compared. This work can benefit the design of lasercom systems.

  7. Preliminary error budget for an optical ranging system: Range, range rate, and differenced range observables

    NASA Technical Reports Server (NTRS)

    Folkner, W. M.; Finger, M. H.

    1990-01-01

    Future missions to the outer solar system or human exploration of Mars may use telemetry systems based on optical rather than radio transmitters. Pulsed laser transmission can be used to deliver telemetry rates of about 100 kbits/sec with an efficiency of several bits for each detected photon. Navigational observables that can be derived from timing pulsed laser signals are discussed. Error budgets are presented based on nominal ground stations and spacecraft-transceiver designs. Assuming a pulsed optical uplink signal, two-way range accuracy may approach the few centimeter level imposed by the troposphere uncertainty. Angular information can be achieved from differenced one-way range using two ground stations with the accuracy limited by the length of the available baseline and by clock synchronization and troposphere errors. A method of synchronizing the ground station clocks using optical ranging measurements is presented. This could allow differenced range accuracy to reach the few centimeter troposphere limit.

  8. Performance monitoring following total sleep deprivation: effects of task type and error rate.

    PubMed

    Renn, Ryan P; Cote, Kimberly A

    2013-04-01

    There is a need to understand the neural basis of performance deficits that result from sleep deprivation. Performance monitoring tasks generate response-locked event-related potentials (ERPs), generated in the anterior cingulate cortex (ACC) on the medial surface of the frontal lobe, that reflect error processing. The outcome of previous research on performance monitoring during sleepiness has been mixed. The purpose of this study was to evaluate performance monitoring in a controlled study of experimental sleep deprivation using a traditional Flanker task, and to broaden this examination using a response inhibition task. Forty-nine young adults (24 male) were randomly assigned to a total sleep deprivation or rested control group. The sleep deprivation group was slower on the Flanker task and less accurate on a Go/NoGo task compared to controls. General attentional impairments were evident in stimulus-locked ERPs for the sleep-deprived group: P300 was delayed on Flanker trials and smaller to Go stimuli. Further, N2 was smaller to NoGo stimuli, and the response-locked ERN was smaller on both tasks, reflecting neurocognitive impairment during performance monitoring. In the Flanker task, a higher error rate was associated with smaller ERN amplitudes for both groups. Examination of ERN amplitude over time showed that it attenuated in the rested control group as error rate increased, but such habituation was not apparent in the sleep-deprived group. Poor-performing sleep-deprived individuals had a larger Pe response than controls, possibly indicating perseveration of errors. These data provide insight into the neural underpinnings of performance failure during sleepiness and have implications for workplace and driving safety. PMID:23384887

  9. Children's Acceptance Ratings of a Child with a Facial Scar: The Impact of Positive Scripts

    ERIC Educational Resources Information Center

    Nabors, Laura A.; Lehmkuhl, Heather D.; Warm, Joel S.

    2004-01-01

    Children with visible pediatric conditions may be at risk for low peer acceptance. More knowledge is needed about how different types of information influence children's acceptance. For this study, we examined the influence of scripts emphasizing either positive information and/or medical information on young children's acceptance of a line…

  10. Assessment of type I error rate associated with dose-group switching in a longitudinal Alzheimer trial.

    PubMed

    Habteab Ghebretinsae, Aklilu; Molenberghs, Geert; Dmitrienko, Alex; Offen, Walt; Sethuraman, Gopalan

    2014-01-01

    In clinical trials, there is always the possibility of using data-driven adaptation at the end of a study. There is concern, however, about whether such a design could inflate the trial's type I error rate, thus necessitating multiplicity adjustment. In this project, a simulation experiment was set up to assess the type I error rate inflation associated with switching dose group, as a function of the dropout rate at the end of the study, where the primary analysis is in terms of a longitudinal outcome. The simulation is inspired by a clinical trial in Alzheimer's disease. The type I error rate was assessed under a number of scenarios, in terms of differing correlations between efficacy and tolerance, different missingness mechanisms, and different probabilities of switching. A collection of parameter values was used to assess the sensitivity of the analysis. Results from ignorable likelihood analysis show that the type I error rate with and without switching was approximately the posited error rate for the various scenarios. Under last observation carried forward (LOCF), the type I error rate was substantially inflated both with and without switching. The type I error inflation is clearly connected to the criterion used for switching. While in general switching in a way related to the primary endpoint may impact the type I error, this was not the case for most scenarios in the longitudinal Alzheimer trial setting under consideration, where patients are expected to worsen over time. PMID:24697817
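
    A stripped-down version of such a simulation experiment fits in a few lines of Python: generate two arms under the null hypothesis with informative dropout, impute with LOCF, and report the empirical rejection rate. This sketch omits the paper's dose-switching rule and its likelihood-based longitudinal analysis, and every design constant (sample size, visit count, slope, dropout hazard) is an assumed placeholder.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def simulate_arm(n=100, visits=4, slope=-0.5, hazard=0.3):
            # identical decline in both arms, so the null hypothesis holds
            t = np.arange(visits)
            y = slope*t + rng.normal(0, 1, (n, 1)) + rng.normal(0, 1, (n, visits))
            for i in range(n):   # informative dropout: worse scores drop sooner
                for j in range(1, visits):
                    if rng.random() < hazard*stats.norm.cdf(slope*(j-1) - y[i, j-1]):
                        y[i, j:] = np.nan
                        break
            return y

        def locf(y):             # last observation carried forward
            y = y.copy()
            for j in range(1, y.shape[1]):
                miss = np.isnan(y[:, j])
                y[miss, j] = y[miss, j-1]
            return y

        def type1_error(nsim=2000, alpha=0.05):
            hits = 0
            for _ in range(nsim):
                a = locf(simulate_arm())[:, -1]
                b = locf(simulate_arm())[:, -1]
                hits += stats.ttest_ind(a, b).pvalue < alpha
            return hits/nsim

        print("empirical type I error rate:", type1_error())

    Replacing the LOCF step with a likelihood-based longitudinal model, and adding a data-driven switching rule, turns this skeleton into the kind of comparison the record describes.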

  11. Examining rating quality in writing assessment: rater agreement, error, and accuracy.

    PubMed

    Wind, Stefanie A; Engelhard, George

    2012-01-01

    The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments. PMID:23270978

  12. Wireless Fetal Heart Rate Monitoring in Inpatient Full-Term Pregnant Women: Testing Functionality and Acceptability

    PubMed Central

    Boatin, Adeline A.; Wylie, Blair; Goldfarb, Ilona; Azevedo, Robin; Pittel, Elena; Ng, Courtney; Haberer, Jessica

    2015-01-01

    We tested functionality and acceptability of a wireless fetal monitoring prototype technology in pregnant women in an inpatient labor unit in the United States. Women with full-term singleton pregnancies and no evidence of active labor were asked to wear the prototype technology for 30 minutes. We assessed functionality by evaluating the ability to successfully monitor the fetal heartbeat for 30 minutes, transmit these data to Cloud storage, and view the data on a web portal. Three obstetricians also rated fetal cardiotocographs on ease of readability. We assessed acceptability by administering closed and open-ended questions on perceived utility and likeability to pregnant women and clinicians interacting with the prototype technology. Thirty-two women were enrolled, 28 of whom (87.5%) successfully completed 30 minutes of fetal monitoring including transmission of cardiotocographs to the web portal. Four sessions, though completed, were not successfully uploaded to the Cloud storage. Six non-study clinicians interacted with the prototype technology. The primary technical problem observed was a delay in data transmission between the prototype and the web portal, which ranged from 2 to 209 minutes. Delays were ascribed to Wi-Fi connectivity problems. Recorded cardiotocographs received a mean score of 4.2/5 (± 1.0) on ease of readability, with an intraclass correlation of 0.81 (95% CI 0.45, 0.96). Both pregnant women and clinicians found the prototype technology likable (81.3% and 66.7%, respectively), useful (96.9% and 66.7%, respectively), and would either use it again or recommend its use to another pregnant woman (77.4% and 66.7%, respectively). In this pilot study we found that this wireless fetal monitoring prototype technology has potential for use in a United States inpatient setting but would benefit from some technology changes. We found it to be acceptable to both pregnant women and clinicians. Further research is needed to assess feasibility of using this

  14. Forward error correction and spatial diversity techniques for high-data-rate MILSATCOM over a slow-fading, nuclear-disturbed channel

    NASA Astrophysics Data System (ADS)

    Paul, Heywood I.; Meader, Charles B.; Lyons, Daniel A.; Ayers, David R.

    Forward error correction (FEC) and spatial diversity techniques are considered for improving the reliability of high-data-rate military satellite communication (MILSATCOM) over a slow-fading, nuclear-disturbed channel. Slow fading, which occurs when the channel decorrelation time is much greater than the transmitted symbol interval, is characterized by deep fades and, without special precautions, long bursts of errors over high-data-rate communication links. Using the widely accepted Defense Nuclear Agency (DNA) nuclear-scintillated channel model, the authors derive performance tradeoffs among required interleaver storage, FEC, spatial diversity, and link signal-to-noise ratio for differential binary phase shift keying (DBPSK) in the slow-fading environment. Spatial diversity is found to yield impressive gains without the large memory storage and transmission relay requirements associated with interleaving.
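
    The benefit of spatial diversity in the saturated-scintillation (Rayleigh fading) limit can be checked with a short Monte Carlo sketch. The Python code below draws independent exponentially distributed branch SNRs and applies the idealized DBPSK conditional error probability 0.5*exp(-snr); the 10 dB mean branch SNR and the ideal-combining assumption are editorial simplifications, not the DNA channel model used by the authors.

        import numpy as np

        rng = np.random.default_rng(2)

        def dbpsk_ber(mean_snr_db, branches, n=200_000):
            gbar = 10**(mean_snr_db/10)
            # Rayleigh fading: per-branch SNR is exponentially distributed
            snr = rng.exponential(gbar, size=(n, branches)).sum(axis=1)
            return np.mean(0.5*np.exp(-snr))  # idealized DBPSK error probability

        for branches in (1, 2, 4):
            print(f"{branches} branch(es): BER = {dbpsk_ber(10.0, branches):.2e}")

    Each added independent branch divides the deep-fade probability by roughly another factor of the mean SNR, which is the kind of gain the abstract calls impressive.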

  15. Symbol error rate bound of DPSK modulation system in directional wave propagation

    NASA Astrophysics Data System (ADS)

    Hua, Jingyu; Zhuang, Changfei; Zhao, Xiaomin; Li, Gang; Meng, Qingmin

    This paper presents a new approach to determining the symbol error rate (SER) bound of differential phase shift keying (DPSK) systems in a directional fading channel, where the von Mises distribution is used to model the non-isotropic angle of arrival (AOA). Our approach relies on the closed-form expression of the phase-difference probability density function (pdf) in coherent fading channels and leads to expressions of the DPSK SER bound involving a single finite-range integral which can be readily evaluated numerically. Moreover, the simulation yields results consistent with the numerical computation.

  16. Digitally modulated bit error rate measurement system for microwave component evaluation

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo W.; Budinger, James M.

    1989-01-01

    The NASA Lewis Research Center has developed a unique capability for evaluation of the microwave components of a digital communication system. This digitally modulated bit-error-rate (BER) measurement system (DMBERMS) features a continuous data digital BER test set, a data processor, a serial minimum shift keying (SMSK) modem, noise generation, and computer automation. Application of the DMBERMS has provided useful information for the evaluation of existing microwave components and of design goals for future components. The design and applications of this system for digitally modulated BER measurements are discussed.

  17. Effects of amplitude distortions and IF equalization on satellite communication system bit-error rate performance

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.

    1990-01-01

    Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Bit-error rate (BER) degradation resulting from several types of amplitude response distortions were measured. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory simulated satellite channel. The results of these experiments are presented.

  18. Design and Verification of an FPGA-based Bit Error Rate Tester

    NASA Astrophysics Data System (ADS)

    Xiang, Annie; Gong, Datao; Hou, Suen; Liu, Chonghan; Liang, Futian; Liu, Tiankuan; Su, Da-Shung; Teng, Ping-Kun; Ye, Jingbo

    Bit error rate (BER) is the principal measure of performance of a data transmission link. With the integration of high-speed transceivers inside a field programmable gate array (FPGA), BER testing can now be handled by transceiver-enabled FPGA hardware. This provides a cheaper alternative to dedicated table-top equipment and offers the flexibility of test customization and data analysis. This paper presents a BER tester implementation based on the Altera Stratix II GX and IV GT development boards. The architecture of the tester is described. Lab test results and field test data analysis are discussed. The Stratix II GX tester operates at up to 5 Gbps and the Stratix IV GT tester operates at up to 10 Gbps, both in 4 duplex channels. The tester deploys a pseudo random bit sequence (PRBS) generator and detector, a transceiver controller, and an error logger. It also includes a computer interface for data acquisition and user configuration. The tester's functionality was validated and its performance characterized in a point-to-point serial optical link setup. BER vs. optical receiver sensitivity was measured to emulate stressed link conditions. The Stratix II GX tester was also used in a proton test on a custom designed serializer chip to record and analyze radiation-induced errors.
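
    The core of such a tester, a PRBS generator plus a self-synchronizing error counter, is easy to model in software. The Python sketch below uses the common PRBS-7 recurrence (generator polynomial x^7 + x^6 + 1); the seed, the pattern length, and the injected error rate are arbitrary choices for the demonstration.

        import random

        MASK = 0x7F  # 7-bit register for PRBS-7 (x^7 + x^6 + 1)

        def prbs7(seed, n):
            state, out = seed & MASK, []
            for _ in range(n):
                bit = ((state >> 5) ^ (state >> 6)) & 1  # s[t] = s[t-6] ^ s[t-7]
                state = ((state << 1) | bit) & MASK
                out.append(bit)
            return out

        def measure_ber(rx):
            state = 0
            for b in rx[:7]:                 # self-synchronize on 7 received bits
                state = ((state << 1) | b) & MASK
            errors = 0
            for b in rx[7:]:
                pred = ((state >> 5) ^ (state >> 6)) & 1
                errors += (pred != b)
                state = ((state << 1) | b) & MASK  # keep tracking received bits
            return errors/(len(rx) - 7)

        random.seed(0)
        tx = prbs7(0x5A, 100_000)
        rx = [b ^ (random.random() < 1e-3) for b in tx]  # ~1e-3 channel error rate
        print("measured BER:", measure_ber(rx))

    Because the checker tracks the received bits, an isolated channel error is counted roughly three times (once on arrival and once as it passes each feedback tap), the same error multiplication that real self-synchronizing PRBS checkers exhibit.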

  19. Design of bit error rate tester based on a high speed bit and sequence synchronization

    NASA Astrophysics Data System (ADS)

    Wang, Xuanmin; Zhao, Xiangmo; Zhang, Lichuan; Zhang, Yinglong

    2013-03-01

    In traditional bit error rate (BER) testers, bit synchronization is achieved with a digital PLL and sequence synchronization relies on sequence correlation, which makes both synchronization steps slow. This paper proposes two new methods, a bit-edge-tracking method and an immitting-sequence method, to realize fast bit and sequence synchronization, and presents an FPGA-based BER tester built on them. Functions for inserting error bits and for removing false sequence synchronization were added. Debugging and simulation results show that bit synchronization is achieved in less than one bit width, that the tracking bit pulse lags by 1/8 of the code cycle, and that sequence synchronization requires only one M-sequence cycle. The new BER tester thus offers several advantages: short bit and sequence synchronization time, no false sequence synchronization, the ability to test the error-correcting capability of the receiving port, and simple hardware.

  20. Relationships of consumer sensory ratings, marbling score, and shear force value to consumer acceptance of beef strip loin steaks.

    PubMed

    Platter, W J; Tatum, J D; Belk, K E; Chapman, P L; Scanga, J A; Smith, G C

    2003-11-01

    Logistic regression was used to quantify and characterize the effects of changes in marbling score, Warner-Bratzler shear force (WBSF), and consumer panel sensory ratings for tenderness, juiciness, or flavor on the probability of overall consumer acceptance of strip loin steaks from beef carcasses (n = 550). Consumers (n = 489) evaluated steaks for tenderness, juiciness, and flavor using nine-point hedonic scales (1 = like extremely and 9 = dislike extremely) and for overall steak acceptance (satisfied or not satisfied). Predicted acceptance of steaks by consumers was high (> 85%) when the mean consumer sensory rating for tenderness, juiciness, or flavor for a steak was 3 or lower on the hedonic scale. Conversely, predicted consumer acceptance of steaks was low (< or = 10%) when the mean consumer rating for tenderness, juiciness, or flavor for a steak was 5 or higher on the hedonic scale. As mean consumer sensory ratings for tenderness, juiciness, or flavor worsened from 3 to 5, the probability of acceptance of steaks by consumers diminished rapidly in a linear fashion. These results suggest that small changes in consumer sensory ratings for these sensory traits have dramatic effects on the probability of acceptance of steaks by consumers. Marbling score displayed a weak (adjusted R2 = 0.053), yet significant (P < 0.01), relationship to acceptance of steaks by consumers, and the shape of the predicted probability curve for steak acceptance was approximately linear over the entire range of marbling scores (Traces 67 to Slightly Abundant 97), suggesting that the likelihood of consumer acceptance of steaks increases approximately 10% for each full marbling score increase between Slight and Slightly Abundant. The predicted probability curve for consumer acceptance of steaks was sigmoidal for the WBSF model, with a steep decline in predicted probability of acceptance as WBSF values increased from 3.0 to 5.5 kg. Changes in WBSF within the high (> 5.5 kg) or low (< 3.0 kg

  1. Influence of UAS Pilot Communication and Execution Delay on Controller's Acceptability Ratings of UAS-ATC Interactions

    NASA Technical Reports Server (NTRS)

    Vu, Kim-Phuong L.; Morales, Gregory; Chiappe, Dan; Strybel, Thomas Z.; Battiste, Vernol; Shively, Jay; Buker, Timothy J

    2013-01-01

    Successful integration of UAS in the NAS will require that UAS interactions with the air traffic management system be similar to interactions between manned aircraft and air traffic management. For example, UAS response times to air traffic controller (ATCo) clearances should be equivalent to those that are currently found to be acceptable with manned aircraft. Prior studies have examined communication delays with manned aircraft. Unfortunately, there is no analogous body of research for UAS. The goal of the present study was to determine how UAS pilot communication and execution delays affect ATCos' acceptability ratings of UAS pilot responses when the UAS is operating in the NAS. Eight radar-certified controllers managed traffic in a modified ZLA sector with one UAS flying in it. In separate scenarios, the UAS pilot verbal communication and execution delays were either short (1.5 s) or long (5 s) and either constant or variable. The ATCo acceptability of UAS pilot communication and execution delays was measured subjectively via post-trial ratings. UAS pilot verbal communication delays were rated as acceptable 92% of the time when the delay was short. This acceptability level decreased to 64% when the delay was long. UAS pilot execution delay had less of an influence on ATCo acceptability ratings in the present simulation. Implications of these findings for integration of UAS in the NAS are discussed.

  2. High rates of phasing errors in highly polymorphic species with low levels of linkage disequilibrium.

    PubMed

    Bukowicki, Marek; Franssen, Susanne U; Schlötterer, Christian

    2016-07-01

    Short read sequencing of diploid individuals does not permit the direct inference of the sequence on each of the two homologous chromosomes. Although various phasing software packages exist, they were primarily tailored for and tested on human data, which differ from other species in factors that influence phasing, such as SNP density, amounts of linkage disequilibrium (LD) and sample sizes. Although these packages are becoming increasingly popular for other species, the reliability of phasing in non-human data has not been evaluated to a sufficient extent. We scrutinized the phasing accuracy for Drosophila melanogaster, a species with high polymorphism levels and reduced LD relative to humans. We phased two D. melanogaster populations and compared the results to the known haplotypes. The performance increased with the size of the reference panel and was highest when the reference panel and phased individuals were from the same population. Full genomic SNP data and inclusion of sequence read information also improved phasing. Despite humans and Drosophila having similar switch error rates between polymorphic sites, the distances between switch errors were much shorter in Drosophila, with only fragments <300-1500 bp being correctly phased with ≥95% confidence. This suggests that the higher SNP density cannot compensate for the higher recombination rate in D. melanogaster. Furthermore, we show that populations that have gone through demographic events such as bottlenecks can be phased with higher accuracy. Our results highlight that statistically phased data are particularly error prone in species with large population sizes or populations lacking suitable reference panels. PMID:26929272

  3. The effect of retinal image error update rate on human vestibulo-ocular reflex gain adaptation.

    PubMed

    Fadaee, Shannon B; Migliaccio, Americo A

    2016-04-01

    The primary function of the angular vestibulo-ocular reflex (VOR) is to stabilise images on the retina during head movements. Retinal image movement is the likely feedback signal that drives VOR modification/adaptation for different viewing contexts. However, it is not clear whether a retinal image position or velocity error is used primarily as the feedback signal. Recent studies examining this signal are limited because they used near viewing to modify the VOR. However, it is not known whether near viewing drives VOR adaptation or is a pre-programmed contextual cue that modifies the VOR. Our study is based on analysis of the VOR evoked by horizontal head impulses during an established adaptation task. Fourteen human subjects underwent incremental unilateral VOR adaptation training and were tested using the scleral search coil technique over three separate sessions. The update rate of the laser target position (source of the retinal image error signal) used to drive VOR adaptation was different for each session [50 (once every 20 ms), 20 and 15/35 Hz]. Our results show unilateral VOR adaptation occurred at 50 and 20 Hz for both the active (23.0 ± 9.6 and 11.9 ± 9.1% increase on adapting side, respectively) and passive VOR (13.5 ± 14.9, 10.4 ± 12.2%). At 15 Hz, unilateral adaptation no longer occurred in the subject group for both the active and passive VOR, whereas individually, 4/9 subjects tested at 15 Hz had significant adaptation. Our findings suggest that 1-2 retinal image position error signals every 100 ms (i.e. target position update rate 15-20 Hz) are sufficient to drive VOR adaptation. PMID:26715411

  4. Analytical Evaluation of Bit Error Rate Performance of a Free-Space Optical Communication System with Receive Diversity Impaired by Pointing Error

    NASA Astrophysics Data System (ADS)

    Nazrul Islam, A. K. M.; Majumder, S. P.

    2015-06-01

    Analysis is carried out to evaluate the conditional bit error rate, conditioned on a given value of pointing error, for a Free Space Optical (FSO) link with multiple receivers using Equal Gain Combining (EGC). The probability density function (pdf) of the output signal-to-noise ratio (SNR) is also derived in the presence of pointing error with EGC. The average BERs of SISO and SIMO FSO links are analytically evaluated by averaging the conditional BER over the pdf of the output SNR. The BER performance results are evaluated for several values of the pointing jitter parameters and the number of IM/DD receivers. The results show that the FSO system suffers a significant power penalty due to pointing error, which can be reduced by increasing the number of receivers at a given value of pointing error. The improvement in receiver sensitivity over SISO is about 4 dB and 9 dB when the number of photodetectors is 2 and 4, respectively, at a BER of 10^-10. It is also noticed that a system with receive diversity can tolerate a higher value of pointing error at a given BER and transmit power.

  5. Soft error rate simulation and initial design considerations of neutron intercepting silicon chip (NISC)

    NASA Astrophysics Data System (ADS)

    Celik, Cihangir

    Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law (1965), which states that the number of transistors in a given space doubles every two years. Most memory architectures available today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power-supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in the micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano

  6. Finding the right coverage: the impact of coverage and sequence quality on single nucleotide polymorphism genotyping error rates.

    PubMed

    Fountain, Emily D; Pauli, Jonathan N; Reid, Brendan N; Palsbøll, Per J; Peery, M Zachariah

    2016-07-01

    Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown. Here, we estimated genotyping error rates in SNPs genotyped with double digest RAD sequencing from Mendelian incompatibilities in known mother-offspring dyads of Hoffman's two-toed sloth (Choloepus hoffmanni) across a range of coverage and sequence quality criteria, for both reference-aligned and de novo-assembled data sets. Genotyping error rates were more sensitive to coverage than sequence quality, and low coverage yielded high error rates, particularly in de novo-assembled data sets. For example, coverage ≥5 yielded median genotyping error rates of ≥0.03 and ≥0.11 in reference-aligned and de novo-assembled data sets, respectively. Genotyping error rates declined to ≤0.01 in reference-aligned data sets with a coverage ≥30, but remained ≥0.04 in the de novo-assembled data sets. We observed approximately 10- and 13-fold declines in the number of loci sampled in the reference-aligned and de novo-assembled data sets when coverage was increased from ≥5 to ≥30 at quality score ≥30, respectively. Finally, we assessed the effects of genotyping coverage on a common population genetic application, parentage assignments, and showed that the proportion of incorrectly assigned maternities was relatively high at low coverage. Overall, our results suggest that the trade-off between sample size and genotyping error rates be considered prior to building sequencing libraries, that reporting genotyping error rates become standard practice, and that the effects of genotyping errors on inference be evaluated in restriction-enzyme-based SNP studies. PMID:26946083
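
    The counting step at the heart of this kind of estimate is simple to express in code. The Python sketch below flags the mother-offspring genotype pairs at biallelic SNPs that are strictly impossible under Mendelian inheritance; the 0/1/2 alt-allele coding is an assumed convention, and because only a subset of genotyping errors produces a visible incompatibility, the raw rate understates the per-genotype error rate.

        import numpy as np

        def incompatibility_rate(mom, kid):
            # mom, kid: float arrays of alt-allele counts (0/1/2), np.nan = missing
            called = ~(np.isnan(mom) | np.isnan(kid))
            m, k = mom[called], kid[called]
            # a homozygous mother must transmit that allele, so 0 vs 2 is impossible
            bad = ((m == 0) & (k == 2)) | ((m == 2) & (k == 0))
            return bad.sum()/called.sum()

        mom = np.array([0, 1, 2, 2, 0, np.nan, 1, 0], dtype=float)
        kid = np.array([0, 2, 1, 0, 2, 1, 1, 0], dtype=float)
        print("incompatibility rate:", incompatibility_rate(mom, kid))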

  7. Impact of Treatment Efficacy and Professional Affiliation on Ratings of Treatment Acceptability.

    ERIC Educational Resources Information Center

    Spreat, Scott; Walsh, Denise E.

    1994-01-01

    Vignette methodology was used to assess acceptability of behavior modification programs by 198 members of the American Association on Mental Retardation. The strongest predictor of treatment acceptability was the respondents' own estimates of probable treatment success. Secondary predictors included the restrictiveness of the proposed procedure…

  8. Anti-saccade error rates as a measure of attentional bias in cocaine dependent subjects.

    PubMed

    Dias, Nadeeka R; Schmitz, Joy M; Rathnayaka, Nuvan; Red, Stuart D; Sereno, Anne B; Moeller, F Gerard; Lane, Scott D

    2015-10-01

    Cocaine-dependent (CD) subjects show attentional bias toward cocaine-related cues, and this form of cue-reactivity may be predictive of craving and relapse. Attentional bias has previously been assessed by models that present drug-relevant stimuli and measure physiological and behavioral reactivity (often reaction time). Studies of several CNS diseases outside of substance use disorders consistently report anti-saccade deficits, suggesting a compromise in the interplay between higher-order cortical processes in voluntary eye control (i.e., anti-saccades) and reflexive saccades driven more by involuntary midbrain perceptual input (i.e., pro-saccades). Here, we describe a novel attentional-bias task developed by using measurements of saccadic eye movements in the presence of cocaine-specific stimuli, combining previously unique research domains to capitalize on their respective experimental and conceptual strengths. CD subjects (N = 46) and healthy controls (N = 41) were tested on blocks of pro-saccade and anti-saccade trials featuring cocaine and neutral stimuli (pictures). Analyses of eye-movement data indicated (1) greater overall anti-saccade errors in the CD group; (2) greater attentional bias in CD subjects as measured by anti-saccade errors to cocaine-specific (relative to neutral) stimuli; and (3) no differences in pro-saccade error rates. Attentional bias was correlated with scores on the obsessive-compulsive cocaine scale. The results demonstrate increased saliency and differential attention to cocaine cues by the CD group. The assay provides a sensitive index of saccadic (visual inhibitory) control, a specific index of attentional bias to drug-relevant cues, and preliminary insight into the visual circuitry that may contribute to drug-specific cue reactivity. PMID:26164486

  9. Bit error rate performance of Image Processing Facility high density tape recorders

    NASA Technical Reports Server (NTRS)

    Heffner, P.

    1981-01-01

    The Image Processing Facility at the NASA/Goddard Space Flight Center uses High Density Tape Recorders (HDTR's) to transfer high volume image data and ancillary information from one system to another. For ancillary information, it is required that very low bit error rates (BER's) accompany the transfers. The facility processes about 10^11 bits of image data per day from many sensors, involving 15 independent processing systems requiring the use of HDTR's. When acquired, the 16 HDTR's offered state-of-the-art performance of 1 x 10^-6 BER as specified. The BER requirement was later upgraded in two steps: (1) incorporating data randomizing circuitry to yield a BER of 2 x 10^-7 and (2) further modifying to include a bit error correction capability to attain a BER of 2 x 10^-9. The total improvement factor was 500 to 1. Attention is given here to the background, technical approach, and final results of these modifications. Also discussed are the format of the data recorded by the HDTR, the magnetic tape format, the magnetic tape dropout characteristics as experienced in the Image Processing Facility, the head life history, and the reliability of the HDTR's.

  10. SITE project. Phase 1: Continuous data bit-error-rate testing

    NASA Astrophysics Data System (ADS)

    Fujikawa, Gene; Kerczewski, Robert J.

    1992-09-01

    The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.

  11. Evolutionary enhancement of the SLIM-MAUD method of estimating human error rates

    SciTech Connect

    Zamanali, J.H.; Hubbard, F.R.; Mosleh, A.; Waller, M.A.

    1992-01-01

    The methodology described in this paper assigns plant-specific dynamic human error rates (HERs) for individual plant examinations based on procedural difficulty, on configuration features, and on the time available to perform the action. This methodology is an evolutionary improvement of the success likelihood index methodology (SLIM-MAUD) for use in systemic scenarios. It is based on the assumption that the HER in a particular situation depends on the combined effects of a comprehensive set of performance-shaping factors (PSFs) that influence the operator's ability to perform the action successfully. The PSFs relate the details of the systemic scenario in which the action must be performed to the operator's psychological and cognitive condition.

  13. Information-Gathering Patterns Associated with Higher Rates of Diagnostic Error

    ERIC Educational Resources Information Center

    Delzell, John E., Jr.; Chumley, Heidi; Webb, Russell; Chakrabarti, Swapan; Relan, Anju

    2009-01-01

    Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine if medical students' information-gathering patterns formed clusters of similar strategies, and if so (2) to calculate the…

  14. Measuring error rates in genomic perturbation screens: gold standards for human functional genomics

    PubMed Central

    Hart, Traver; Brown, Kevin R; Sircoulomb, Fabrice; Rottapel, Robert; Moffat, Jason

    2014-01-01

    Technological advancement has opened the door to systematic genetics in mammalian cells. Genome-scale loss-of-function screens can assay fitness defects induced by partial gene knockdown, using RNA interference, or complete gene knockout, using new CRISPR techniques. These screens can reveal the basic blueprint required for cellular proliferation. Moreover, comparing healthy to cancerous tissue can uncover genes that are essential only in the tumor; these genes are targets for the development of specific anticancer therapies. Unfortunately, progress in this field has been hampered by off-target effects of perturbation reagents and poorly quantified error rates in large-scale screens. To improve the quality of information derived from these screens, and to provide a framework for understanding the capabilities and limitations of CRISPR technology, we derive gold-standard reference sets of essential and nonessential genes, and provide a Bayesian classifier of gene essentiality that outperforms current methods on both RNAi and CRISPR screens. Our results indicate that CRISPR technology is more sensitive than RNAi and that both techniques have nontrivial false discovery rates that can be mitigated by rigorous analytical methods. PMID:24987113
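
    The flavor of such a classifier can be conveyed in a few lines: fit density estimates to the fold-change distributions of the gold-standard essential and nonessential genes, then score every other gene by the summed log-likelihood ratio of its reagents. The training distributions below are synthetic stand-ins, and the sketch is written in the spirit of the paper's Bayesian classifier rather than as a reimplementation of it.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(5)
        # stand-in training data: reagents targeting essential genes deplete
        # (negative fold change), nonessential ones stay near zero (illustrative)
        ess_ref = rng.normal(-3.0, 1.0, 5000)
        non_ref = rng.normal(0.0, 0.8, 5000)
        f_ess, f_non = gaussian_kde(ess_ref), gaussian_kde(non_ref)

        def log_bayes_factor(fold_changes):
            # evidence that a gene is essential, summed over its reagents
            fc = np.asarray(fold_changes, dtype=float)
            return float(np.sum(np.log(f_ess(fc)) - np.log(f_non(fc))))

        print(log_bayes_factor([-2.8, -3.5, -2.1]))  # essential-like reagents
        print(log_bayes_factor([0.2, -0.4, 0.1]))    # nonessential-like reagents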

  15. An Examination of Three Texas High Schools' Restructuring Strategies that Resulted in an Academically Acceptable Rating

    ERIC Educational Resources Information Center

    Massey Fields, Chamara

    2011-01-01

    This study examined three high schools in a large urban school district in Texas that achieved an academically acceptable rating after being sanctioned to reconstitute by state agencies. Texas state accountability standards are a result of the No Child Left Behind Act of 2001 (NCLB). Texas state law requires schools to design a reconstitution plan…

  16. Detecting trends in raptor counts: power and type I error rates of various statistical tests

    USGS Publications Warehouse

    Hatfield, J.S.; Gould, W.R., IV; Hoover, B.A.; Fuller, M.R.; Lindquist, E.L.

    1996-01-01

    We conducted simulations that estimated power and type I error rates of statistical tests for detecting trends in raptor population count data collected from a single monitoring site. Results of the simulations were used to help analyze count data of bald eagles (Haliaeetus leucocephalus) from 7 national forests in Michigan, Minnesota, and Wisconsin during 1980-1989. Seven statistical tests were evaluated, including simple linear regression on the log scale and linear regression with a permutation test. Using 1,000 replications each, we simulated n = 10 and n = 50 years of count data and trends ranging from -5 to 5% change/year. We evaluated the tests at 3 critical levels (alpha = 0.01, 0.05, and 0.10) for both upper- and lower-tailed tests. Exponential count data were simulated by adding sampling error with a coefficient of variation of 40% from either a log-normal or autocorrelated log-normal distribution. Not surprisingly, tests performed with 50 years of data were much more powerful than tests with 10 years of data. Positive autocorrelation inflated alpha-levels upward from their nominal levels, making the tests less conservative and more likely to reject the null hypothesis of no trend. Of the tests studied, Cox and Stuart's test and Pollard's test clearly had lower power than the others. Surprisingly, the linear regression t-test, Collins' linear regression permutation test, and the nonparametric Lehmann's and Mann's tests all had similar power in our simulations. Analyses of the count data suggested that bald eagles had increasing trends on at least 2 of the 7 national forests during 1980-1989.
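
    The structure of such a simulation is compact in Python: generate exponentially trending counts with lognormal sampling error, regress log counts on year, and tally rejections. This sketch uses a two-sided test, a single baseline count of 100, and no autocorrelation, so it reproduces only the simplest of the scenarios described above.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        def rejection_rate(trend, years=10, cv=0.40, alpha=0.05, nsim=1000):
            sigma = np.sqrt(np.log(1 + cv**2))  # lognormal sigma giving a 40% CV
            t = np.arange(years)
            hits = 0
            for _ in range(nsim):
                counts = 100*(1 + trend)**t * rng.lognormal(0.0, sigma, years)
                hits += stats.linregress(t, np.log(counts)).pvalue < alpha
            return hits/nsim

        print("type I error (no trend):", rejection_rate(0.00))
        print("power (+5%/year, n=10): ", rejection_rate(0.05))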

  17. Bit error rate tester using fast parallel generation of linear recurring sequences

    DOEpatents

    Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.

    2003-05-06

    A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k)th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with a minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
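
    The decimation idea translates directly into code. The Python sketch below builds the GF(2) companion matrix of a primitive polynomial (PRBS-7, an illustrative choice), raises it to the k-th power by square-and-multiply to obtain the decimation matrix for k parallel generators that each produce one bit per cycle (the patent's general case uses the (n*k)-th power for n bits per generator), and verifies that the interleaved parallel outputs reproduce the serial sequence.

        import numpy as np

        N, TAPS = 7, [5, 6]  # x^7 + x^6 + 1: feedback = s[t-6] XOR s[t-7]

        def companion():
            M = np.zeros((N, N), dtype=np.uint8)
            M[0, TAPS] = 1                               # feedback row
            M[1:, :-1] = np.eye(N - 1, dtype=np.uint8)   # shift rows
            return M

        def matpow2(M, e):
            # square-and-multiply with all arithmetic over GF(2)
            R = np.eye(N, dtype=np.uint8)
            while e:
                if e & 1:
                    R = (R @ M) % 2
                M = (M @ M) % 2
                e >>= 1
            return R

        M, k = companion(), 4
        D = matpow2(M, k)          # decimation matrix: advance k steps at once

        s0 = np.zeros(N, dtype=np.uint8); s0[0] = 1
        s, serial = s0.copy(), []
        for _ in range(64):        # serial reference sequence (output = bit 0)
            serial.append(int(s[0]))
            s = (M @ s) % 2

        states = [s0.copy()]       # k generators seeded s0, M s0, ..., M^(k-1) s0
        for _ in range(k - 1):
            states.append((M @ states[-1]) % 2)
        parallel = []
        for _ in range(64 // k):
            parallel.extend(int(st[0]) for st in states)
            states = [(D @ st) % 2 for st in states]

        assert parallel == serial

    In hardware each row of D becomes a small XOR network, which is why the patent favors sparse decimation matrices: fewer ones per row means fewer XOR gates and a shorter propagation delay through the feedback circuit.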

  18. Exact error rate analysis of free-space optical communications with spatial diversity over Gamma-Gamma atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Ma, Jing; Li, Kangning; Tan, Liying; Yu, Siyuan; Cao, Yubin

    2016-02-01

    The error rate performances and outage probabilities of free-space optical (FSO) communications with spatial diversity are studied for Gamma-Gamma turbulent environments. Equal gain combining (EGC) and selection combining (SC) diversity are considered as practical schemes to mitigate turbulence. The exact bit-error rate (BER) expression and outage probability are derived for a direct-detection EGC multiple-aperture receiver system. BER performances and outage probabilities are analyzed and compared for different numbers of sub-apertures, each having aperture area A, with EGC and SC techniques. BER performances and outage probabilities of a single monolithic aperture and a multiple-aperture receiver system with the same total aperture area are compared under thermal-noise-limited and background-noise-limited conditions. It is shown that a multiple-aperture receiver system can greatly improve system communication performance. These analytical tools are useful for providing highly accurate error rate estimates for FSO communication systems.
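
    A quick Monte Carlo cross-check of such closed-form results is easy, because a Gamma-Gamma variate is simply the product of two independent unit-mean Gamma variates. In the sketch below the turbulence parameters (alpha, beta), the 15 dB SNR, and the Q-function OOK detection model are all illustrative assumptions, and aperture-averaging effects are ignored.

        import numpy as np
        from scipy.special import erfc

        rng = np.random.default_rng(4)

        def gamma_gamma(alpha, beta, size):
            # product of two independent unit-mean Gamma variates
            return rng.gamma(alpha, 1/alpha, size)*rng.gamma(beta, 1/beta, size)

        def avg_ber(n_aps, combiner, alpha=4.0, beta=1.9, snr_db=15, n=500_000):
            h = gamma_gamma(alpha, beta, (n, n_aps))
            # EGC averages the branches (fixed total area); SC picks the best one
            heff = h.mean(axis=1) if combiner == "EGC" else h.max(axis=1)
            arg = heff*np.sqrt(10**(snr_db/10))
            return np.mean(0.5*erfc(arg/np.sqrt(2)))  # Q(arg) for OOK detection

        for n_aps in (1, 2, 4):
            print(n_aps, "sub-aperture(s):",
                  "EGC", f"{avg_ber(n_aps, 'EGC'):.2e}",
                  " SC", f"{avg_ber(n_aps, 'SC'):.2e}")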

  19. Effect of automated drug distribution systems on medication error rates in a short-stay geriatric unit

    PubMed Central

    Cousein, Etienne; Mareville, Julie; Lerooy, Alexandre; Caillau, Antoine; Labreuche, Julien; Dambre, Delphine; Odou, Pascal; Bonte, Jean-Paul; Puisieux, François; Decaudin, Bertrand; Coupé, Patrick

    2014-01-01

    Rationale, aims and objectives To assess the impact of an automated drug distribution system on medication errors (MEs). Methods Before-after observational study in a 40-bed short stay geriatric unit within a 1800 bed general hospital in Valenciennes, France. Researchers attended nurse medication administration rounds and compared administered to prescribed drugs, before and after the drug distribution system changed from a ward stock system (WSS) to a unit dose dispensing system (UDDS), integrating a unit dose dispensing robot and automated medication dispensing cabinet (AMDC). Results A total of 615 opportunities of errors (OEs) were observed among 148 patients treated during the WSS period, and 783 OEs were observed among 166 patients treated during the UDDS period. ME [medication administration error (MAE)] rates were calculated and compared between the two periods. Secondary measures included type of errors, seriousness of errors and risk reduction for the patients. The implementation of an automated drug dispensing system resulted in a 53% reduction in MAEs. All error types were reduced in the UDDS period compared with the WSS period (P < 0.001). Wrong dose and wrong drug errors were reduced by 79.1% (2.4% versus 0.5%, P = 0.005) and 93.7% (1.9% versus 0.01%, P = 0.009), respectively. Conclusion An automated UDDS combining a unit dose dispensing robot and AMDCs could reduce discrepancies between ordered and administered drugs, thus improving medication safety among the elderly. PMID:24917185

  20. On the Power of Multiple Independent Tests when the Experimentwise Error Rate Is Controlled.

    ERIC Educational Resources Information Center

    Hsu, Louis M.

    1980-01-01

    The problem addressed is that of assessing the loss of power which results from keeping the probability that at least one Type I error will occur in a family of N statistical tests at a tolerably low level. (Author/BW)

  1. Characteristics and User Acceptance of Peer Rating in EFL Writing Classrooms

    ERIC Educational Resources Information Center

    Saito, Hidetoshi; Fujita, Tomoko

    2004-01-01

    Lack of research on the characteristics of peer assessment in EFL writing may inhibit teachers from appreciating the utility of this innovative assessment. This study addressed the following research questions: (1) How similar are peer, self- and teacher ratings of EFL writing?; (2) Do students favour peer ratings?; and (3) Does peer feedback…

  2. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    SciTech Connect

    Chau, H.F.

    2002-12-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 - 0.1√5 ≈ 27.6%, thereby making it the most error resistant scheme known to date.

  4. “Missed” Mild Cognitive Impairment: High False-Negative Error Rate Based on Conventional Diagnostic Criteria

    PubMed Central

    Edmonds, Emily C.; Delano-Wood, Lisa; Jak, Amy J.; Galasko, Douglas R.; Salmon, David P.; Bondi, Mark W.

    2016-01-01

    Mild cognitive impairment (MCI) is typically diagnosed using subjective complaints, screening measures, clinical judgment, and a single memory score. Our prior work has shown that this method is highly susceptible to false-positive diagnostic errors. We examined whether the criteria also lead to “false-negative” errors by diagnostically reclassifying 520 participants using novel actuarial neuropsychological criteria. Results revealed a false-negative error rate of 7.1%. Participants’ neuropsychological performance, cerebrospinal fluid biomarkers, and rate of decline provided evidence that an MCI diagnosis is warranted. The impact of “missed” cases of MCI has direct relevance to clinical practice, research studies, and clinical trials of prodromal Alzheimer's disease. PMID:27031477

  5. Controlling Type I Error Rate in Evaluating Differential Item Functioning for Four DIF Methods: Use of Three Procedures for Adjustment of Multiple Item Testing

    ERIC Educational Resources Information Center

    Kim, Jihye

    2010-01-01

    In DIF studies, a Type I error refers to the mistake of identifying non-DIF items as DIF items, and a Type I error rate refers to the proportion of Type I errors in a simulation study. The possibility of making a Type I error in DIF studies is always present and high possibility of making such an error can weaken the validity of the assessment.…

  6. The Effects of a Student Sampling Plan on Estimates of the Standard Errors for Student Passing Rates.

    ERIC Educational Resources Information Center

    Lee, Guemin; Fitzpatrick, Anne R.

    2003-01-01

    Studied three procedures for estimating the standard errors of school passing rates using a generalizability theory model and considered the effects of student sample size. Results show that procedures differ in terms of assumptions about the populations from which students were sampled, and student sample size was found to have a large effect on…

  7. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels

    PubMed Central

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-01-01

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. PMID:26694878

  9. Error estimation for delta VLBI angle and angle rate measurements over baselines between a ground station and a geosynchronous orbiter

    NASA Technical Reports Server (NTRS)

    Wu, S. C.

    1982-01-01

    Baselines between a ground station and a geosynchronous orbiter provide high resolution Delta VLBI data which is beyond the capability of ground-based interferometry. The effects of possible error sources on such Delta VLBI data for the determination of spacecraft angle and angle rate are investigated. For comparison, the effects on spacecraft-only VLBI are also studied.

  10. Comparison of Self-Scoring Error Rate for SDS (Self Directed Search) (1970) and the Revised SDS (1977).

    ERIC Educational Resources Information Center

    Price, Gary E.; And Others

    A comparison of self-scoring error rates for the Self Directed Search (SDS) and the revised SDS is presented. The subjects were college freshmen and sophomores who participated in career planning as a part of their orientation program, and a career workshop. Subjects (N=190 in the first study and N=84 in the second) were then randomly assigned to the SDS…

  11. Parallel Transmission Pulse Design with Explicit Control for the Specific Absorption Rate in the Presence of Radiofrequency Errors

    PubMed Central

    Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L.; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L.; Guerin, Bastien

    2016-01-01

    Purpose A new framework for the design of parallel transmit (pTx) pulses is presented, introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. Methods The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors (“worst-case SAR”) is calculated. Finally, this information is used to re-calculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Results Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that using the proposed framework, pulses can be designed with controlled “worst-case SAR” in the presence of errors of this magnitude at a minor cost in excitation profile quality. Conclusion Our worst-case SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. PMID:26147916
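
    The iterative design loop described in the Methods can be sketched as follows; `design_pulse` and `worst_case_sar` are hypothetical callables standing in for the SAR-constrained pTx optimizer and the worst-case SAR evaluation, not the authors' actual implementation.

    ```python
    def sar_safe_pulse(design_pulse, worst_case_sar, sar_limit, shrink=0.9, max_iter=20):
        """Outer loop of a worst-case-SAR-constrained design: design under a SAR
        constraint, evaluate worst-case SAR under RF-chain errors, tighten the
        constraint, and repeat until the worst case is within the safety limit."""
        constraint = sar_limit
        for _ in range(max_iter):
            pulse = design_pulse(sar_constraint=constraint)
            if worst_case_sar(pulse) <= sar_limit:
                return pulse
            constraint *= shrink      # lower the design constraint and re-solve
        raise RuntimeError("no feasible pulse within iteration budget")
    ```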

  12. Greater Heart Rate Responses to Acute Stress Are Associated with Better Post-Error Adjustment in Special Police Cadets.

    PubMed

    Yao, Zhuxi; Yuan, Yi; Buchanan, Tony W; Zhang, Kan; Zhang, Liang; Wu, Jianhui

    2016-01-01

    High-stress jobs require both appropriate physiological regulation and behavioral adjustment to meet the demands of emergencies. Here, we investigated the relationship between the autonomic stress response and behavioral adjustment after errors in special police cadets. Sixty-eight healthy male special police cadets were randomly assigned to perform a first-time walk on an aerial rope bridge to induce stress responses or a walk on a cushion on the ground serving as a control condition. Subsequently, the participants completed a Go/No-go task to assess behavioral adjustment after false alarm responses. Heart rate measurements and subjective reports confirmed that stress responses were successfully elicited by the aerial rope bridge task in the stress group. In addition, greater heart rate increases during the rope bridge task were positively correlated with post-error slowing and showed a trend toward a negative correlation with the increase in post-error miss rate in the subsequent Go/No-go task. These results suggest that stronger autonomic stress responses are related to better post-error adjustment under acute stress in this highly selected population and demonstrate that, under certain conditions, individuals with high-stress jobs might show cognitive benefits from a stronger physiological stress response. PMID:27428280

  13. Greater Heart Rate Responses to Acute Stress Are Associated with Better Post-Error Adjustment in Special Police Cadets

    PubMed Central

    Yao, Zhuxi; Yuan, Yi; Buchanan, Tony W.; Zhang, Kan; Zhang, Liang; Wu, Jianhui

    2016-01-01

    High-stress jobs require both appropriate physiological regulation and behavioral adjustment to meet the demands of emergencies. Here, we investigated the relationship between the autonomic stress response and behavioral adjustment after errors in special police cadets. Sixty-eight healthy male special police cadets were randomly assigned to perform a first-time walk on an aerial rope bridge to induce stress responses or a walk on a cushion on the ground serving as a control condition. Subsequently, the participants completed a Go/No-go task to assess behavioral adjustment after false alarm responses. Heart rate measurements and subjective reports confirmed that stress responses were successfully elicited by the aerial rope bridge task in the stress group. In addition, greater heart rate increases during the rope bridge task were positively correlated with post-error slowing and showed a trend toward a negative correlation with the increase in post-error miss rate in the subsequent Go/No-go task. These results suggest that stronger autonomic stress responses are related to better post-error adjustment under acute stress in this highly selected population and demonstrate that, under certain conditions, individuals with high-stress jobs might show cognitive benefits from a stronger physiological stress response. PMID:27428280
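
    The post-error slowing measure referred to in both copies of this abstract is commonly computed as the mean response time following error trials minus the mean response time following correct trials. A minimal sketch, assuming that definition (the study's exact metric may differ):

    ```python
    import numpy as np

    def post_error_slowing(rt, correct):
        """Mean RT following an error minus mean RT following a correct trial."""
        rt = np.asarray(rt, dtype=float)
        correct = np.asarray(correct, dtype=bool)
        after_error = rt[1:][~correct[:-1]]
        after_correct = rt[1:][correct[:-1]]
        return after_error.mean() - after_correct.mean()

    # e.g. post_error_slowing([0.41, 0.55, 0.40], [False, True, True]) contrasts
    # the RT after the trial-1 error (0.55) with the RT after the trial-2
    # correct response (0.40)
    ```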

  14. Maintaining Acceptably Low Referral Rates in TEOAE-Based Newborn Hearing Screening Programs.

    ERIC Educational Resources Information Center

    Maxon, Antonia Brancia; White, Karl R.; Culpepper, Brandt; Vohr, Betty R.

    1997-01-01

    Describes factors that can affect the referral rate for otoacoustic emission-based newborn hearing screening and discusses the screening results of 1,328 newborns screened with transient evoked otoacoustic emissions prior to hospital discharge. The youngest infants were as likely to pass as infants who were 24-27 hours old. (Author/CR)

  15. Power and Type I Error Rates for Rank-Score MANOVA Techniques.

    ERIC Educational Resources Information Center

    Pavur, Robert; Nath, Ravinder

    1989-01-01

    A Monte Carlo simulation study compared the power and Type I errors of the Wilks lambda statistic and the statistic of M. L. Puri and P. K. Sen (1971) on transformed data in a one-way multivariate analysis of variance. Preferred test procedures, based on robustness and power, are discussed. (SLD)

  16. A Comparison of Type I Error Rates of Alpha-Max with Established Multiple Comparison Procedures.

    ERIC Educational Resources Information Center

    Barnette, J. Jackson; McLean, James E.

    J. Barnette and J. McLean (1996) proposed a method of controlling Type I error in pairwise multiple comparisons after a significant omnibus F test. This procedure, called Alpha-Max, is based on a sequential cumulative probability accounting procedure in line with Bonferroni inequality. A missing element in the discussion of Alpha-Max was the…

  17. A comparison of error detection rates between the reading aloud method and the double data entry method.

    PubMed

    Kawado, Miyuki; Hinotsu, Shiro; Matsuyama, Yutaka; Yamaguchi, Takuhiro; Hashimoto, Shuji; Ohashi, Yasuo

    2003-10-01

    Data entry and its verification are important steps in the process of data management in clinical studies. In Japan, a kind of visual comparison called the reading aloud (RA) method is often used as an alternative to or in addition to the double data entry (DDE) method. In a typical RA method, one operator reads previously keyed data aloud while looking at a printed sheet or computer screen, and another operator compares the voice with the corresponding data recorded on case report forms (CRFs) to confirm whether the data are the same. We compared the efficiency of the RA method with that of the DDE method in the data management system of the Japanese Registry of Renal Transplantation. Efficiency was evaluated in terms of error detection rate and expended time. Five hundred sixty CRFs were randomly allocated to two operators for single data entry. Two types of DDE and RA methods were performed. Single data entry errors were detected in 358 of 104,720 fields (per-field error rate=0.34%). Error detection rates were 88.3% for the DDE method performed by a different operator, 69.0% for the DDE method performed by the same operator, 59.5% for the RA method performed by a different operator, and 39.9% for the RA method performed by the same operator. The differences in these rates were significant (p<0.001) between the two verification methods as well as between the types of operator (same or different). The total expended times were 74.8 hours for the DDE method and 57.9 hours for the RA method. These results suggest that in detecting errors of single data entry, the RA method is inferior to the DDE method, while its time cost is lower. PMID:14500053
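
    The headline figures in this abstract can be reproduced with a few lines of arithmetic; the per-method error counts below are back-calculated from the reported percentages and are illustrative only.

    ```python
    # arithmetic reproduced from the abstract; per-method counts are
    # back-calculations from the reported detection rates
    errors, fields = 358, 104_720
    print(f"per-field single-entry error rate: {errors / fields:.2%}")  # ~0.34%

    detection_rates = {
        ("DDE", "different operator"): 0.883,
        ("DDE", "same operator"): 0.690,
        ("RA", "different operator"): 0.595,
        ("RA", "same operator"): 0.399,
    }
    for (method, operator), rate in detection_rates.items():
        print(f"{method}, {operator}: ~{rate * errors:.0f} of {errors} errors detected")
    ```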

  18. Dual-mass vibratory rate gyroscope with suppressed translational acceleration response and quadrature-error correction capability

    NASA Technical Reports Server (NTRS)

    Clark, William A. (Inventor); Juneau, Thor N. (Inventor); Lemkin, Mark A. (Inventor); Roessig, Allen W. (Inventor)

    2001-01-01

    A microfabricated vibratory rate gyroscope to measure rotation includes two proof-masses mounted in a suspension system anchored to a substrate. The suspension has two principal modes of compliance, one of which is driven into oscillation. The driven oscillation combined with rotation of the substrate about an axis perpendicular to the substrate results in Coriolis acceleration along the other mode of compliance, the sense-mode. The sense-mode is designed to respond to Coriolis acceleration while suppressing the response to translational acceleration. This is accomplished using one or more rigid levers connecting the two proof-masses. The lever allows the proof-masses to move in opposite directions in response to Coriolis acceleration. The invention includes a means for canceling errors, termed quadrature error, due to imperfections in implementation of the sensor. Quadrature-error cancellation utilizes electrostatic forces to cancel out undesired sense-axis motion in phase with drive-mode position.
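
    The rate-sensing principle the abstract describes rests on the Coriolis coupling between the drive and sense modes; a standard formulation (not taken from the patent text) is:

    ```latex
    % For drive-mode motion x(t) = X sin(w_d t) and substrate rotation
    % Omega_z about the axis normal to the substrate, the Coriolis
    % acceleration along the sense axis is
    a_y(t) = 2\,\Omega_z\,\dot{x}(t) = 2\,\Omega_z\,\omega_d X \cos(\omega_d t)
    % The anti-phase drive of the two proof-masses makes their Coriolis
    % accelerations opposite in sign, so the lever passes rotation-induced
    % motion while rejecting common translational acceleration.
    ```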

  19. Bit-error-rate testing of high-power 30-GHz traveling wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.; Fujikawa, Gene

    1986-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30 GHz, 200 W, coupled-cavity traveling wave tubes (TWTs). The transmission effects of each TWT were investigated on a band-limited, 220 Mb/sec SMSK signal. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20 GHz technology development program. The approach taken to test the 30 GHz tubes is described and the resultant test data are discussed. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  20. Bit-error-rate testing of high-power 30-GHz traveling-wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.

    1987-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30-GHz 200-W coupled-cavity traveling-wave tubes (TWTs). The transmission effects of each TWT on a band-limited 220-Mbit/s SMSK signal were investigated. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20-GHz technology development program. This paper describes the approach taken to test the 30-GHz tubes and discusses the test data. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  1. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    NASA Technical Reports Server (NTRS)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

    A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
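
    A minimal sketch of the idea, assuming a binary decision with a threshold midway between the two signal levels and Gaussian noise; the memo's exact formulation may differ:

    ```python
    import numpy as np
    from scipy.special import erfc

    def ber_from_s21_noise(mean_s21, sigma_s21):
        """BER of a binary decision made on a noisy transmission coefficient,
        assuming Gaussian noise and a threshold at half the mean amplitude."""
        q = mean_s21 / (2.0 * sigma_s21)        # distance to threshold in sigmas
        return 0.5 * erfc(q / np.sqrt(2.0))     # Gaussian tail probability
    ```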

  2. Single Event Test Methodologies and System Error Rate Analysis for Triple Modular Redundant Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Allen, Gregory; Edmonds, Larry D.; Swift, Gary; Carmichael, Carl; Tseng, Chen Wei; Heldt, Kevin; Anderson, Scott Arlo; Coe, Michael

    2010-01-01

    We present a test methodology for estimating system error rates of Field Programmable Gate Arrays (FPGAs) mitigated with Triple Modular Redundancy (TMR). The test methodology is founded on a mathematical model, which is also presented. Accelerator data from a 90 nm Xilinx Military/Aerospace grade FPGA are shown to fit the model. Fault injection (FI) results are discussed and related to the test data. Design implementation and the corresponding impact of multiple bit upset (MBU) are also discussed.
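
    For reference, the ideal TMR failure model that underlies such system error rate estimates (a textbook result assuming a perfect voter; the paper's model additionally accounts for accelerator test conditions and MBU):

    ```python
    def tmr_system_error_rate(p):
        """Error probability of ideal triple modular redundancy with a perfect
        voter: the system fails only when two or more of the three modules fail."""
        return 3 * p**2 * (1 - p) + p**3

    # for small per-module upset probabilities this is ~3*p**2, which is why
    # TMR suppresses single event upsets but is vulnerable to multiple bit upsets
    ```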

  3. A Framework for Interpreting Type I Error Rates from a Product‐Term Model of Interaction Applied to Quantitative Traits

    PubMed Central

    Province, Michael A.

    2015-01-01

    ABSTRACT Adequate control of type I error rates will be necessary in the increasing genome‐wide search for interactive effects on complex traits. After observing unexpected variability in type I error rates from SNP‐by‐genome interaction scans, we sought to characterize this variability and test the ability of heteroskedasticity‐consistent standard errors to correct it. We performed 81 SNP‐by‐genome interaction scans using a product‐term model on quantitative traits in a sample of 1,053 unrelated European Americans from the NHLBI Family Heart Study, and additional scans on five simulated datasets. We found that the interaction‐term genomic inflation factor (lambda) showed inflation and deflation that varied with sample size and allele frequency; that similar lambda variation occurred in the absence of population substructure; and that lambda was strongly related to heteroskedasticity but not to minor non‐normality of phenotypes. Heteroskedasticity‐consistent standard errors narrowed the range of lambda, with HC3 outperforming HC0, but in individual scans tended to create new P‐value outliers related to sparse two‐locus genotype classes. We explain the lambda variation as a result of non‐independence of test statistics coupled with stochastic biases in test statistics due to a failure of the test to reach asymptotic properties. We propose that one way to interpret lambda is by comparison to an empirical distribution generated from data simulated under the null hypothesis and without population substructure. We further conclude that the interaction‐term lambda should not be used to adjust test statistics and that heteroskedasticity‐consistent standard errors come with limitations that may outweigh their benefits in this setting. PMID:26659945
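
    The interaction-term genomic inflation factor discussed here is conventionally computed as the median observed 1-df chi-square statistic divided by its null median; a minimal sketch:

    ```python
    import numpy as np
    from scipy.stats import chi2

    def genomic_lambda(chi2_stats):
        """Genomic inflation factor: median observed 1-df chi-square statistic
        divided by the null median (~0.4549). Lambda ~ 1 indicates no inflation."""
        return np.median(chi2_stats) / chi2.ppf(0.5, df=1)
    ```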

  4. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Technical Reports Server (NTRS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  5. The Effect of Administrative Boundaries and Geocoding Error on Cancer Rates in California

    PubMed Central

    Goldberg, Daniel W.; Cockburn, Myles G.

    2012-01-01

    Geocoding is often used to produce maps of disease rates from the diagnosis addresses of incident cases to assist with disease surveillance, prevention, and control. In this process, diagnosis addresses are converted into latitude/longitude pairs which are then aggregated to produce rates at varying geographic scales such as Census tracts, neighborhoods, cities, counties, and states. The specific techniques used within geocoding systems have an impact on where the output geocode is located and can therefore have an effect on the derivation of disease rates at different geographic aggregations. This paper investigates how county-level cancer rates are affected by the choice of interpolation method when case data are geocoded to the ZIP code level. Four commonly used areal unit interpolation techniques are applied and the output of each is used to compute crude county-level five-year incidence rates of all cancers in California. We found that the rates observed for 44 out of the 58 counties in California vary based on which interpolation method is used, with rates in some counties increasing by nearly 400% between interpolation methods. PMID:22469490
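
    Of the four interpolation techniques compared, simple areal weighting is the easiest to sketch: ZIP-level counts are split among counties in proportion to area overlap. The function below illustrates that one method only and is not the paper's code:

    ```python
    def areal_weighted_counts(zip_counts, overlap_fraction):
        """Split ZIP-level case counts among counties in proportion to the area
        of each ZIP lying inside each county (simple areal weighting).

        zip_counts: {zip_code: case_count}
        overlap_fraction: {zip_code: {county: fraction_of_zip_area}}
        """
        county_counts = {}
        for zip_code, n_cases in zip_counts.items():
            for county, frac in overlap_fraction[zip_code].items():
                county_counts[county] = county_counts.get(county, 0.0) + n_cases * frac
        return county_counts

    # e.g. a ZIP with 10 cases split 70/30 across two counties:
    # areal_weighted_counts({"90210": 10}, {"90210": {"A": 0.7, "B": 0.3}})
    ```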

  6. Assessing XCTD Fall Rate Errors using Concurrent XCTD and CTD Profiles in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Millar, J.; Gille, S. T.; Sprintall, J.; Frants, M.

    2010-12-01

    Refinements in the fall rate equation for XCTDs are not as well understood as those for XBTs, due in part to the paucity of concurrent and collocated XCTD and CTD profiles. During February and March 2010, the Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES) conducted 31 collocated 1000-meter XCTD and CTD casts in the Drake Passage. These XCTD/CTD profile pairs are closely matched in space and time, with a mean distance between casts of 1.19 km and a mean lag time of 39 minutes. The profile pairs are well suited to address the XCTD fall rate problem specifically in higher latitude waters, where existing fall rate corrections have rarely been assessed. Many of these XCTD/CTD profile pairs reveal an observable depth offset in measurements of both temperature and conductivity. Here, the nature and extent of this depth offset is evaluated.
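
    XCTD depth is inferred from elapsed time through a quadratic fall-rate equation, so biased coefficients show up as depth offsets of the kind described above. A sketch, with illustrative coefficients of roughly the published XCTD magnitude rather than the DIMES-derived values:

    ```python
    def xctd_depth(t, a=3.43, b=4.7e-4):
        """Depth (m) from elapsed time (s) via the quadratic fall-rate form
        z = a*t - b*t**2. Coefficients are illustrative placeholders, NOT the
        values derived from the DIMES profile pairs."""
        return a * t - b * t * t

    # a 1% error in the linear term alone shifts depth by ~10 m near 1000 m:
    # xctd_depth(300.0, a=3.43 * 1.01) - xctd_depth(300.0)
    ```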

  7. Compensating inherent linear move water application errors using a variable rate irrigation system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Continuous move irrigation systems such as linear move and center pivot irrigate unevenly when applying conventional uniform water rates due to the towers/motors stop/advance pattern. The effect of the cart movement pattern on linear move water application is larger on the first two spans which intr...

  8. An approach for reducing the error rate in automated lung segmentation.

    PubMed

    Gill, Gurman; Beichel, Reinhard R

    2016-09-01

    Robust lung segmentation is challenging, especially when tens of thousands of lung CT scans need to be processed, as required by large multi-center studies. The goal of this work was to develop and assess a method for the fusion of segmentation results from two different methods to generate lung segmentations that have a lower failure rate than individual input segmentations. As basis for the fusion approach, lung segmentations generated with a region growing and model-based approach were utilized. The fusion result was generated by comparing input segmentations and selectively combining them using a trained classification system. The method was evaluated on a diverse set of 204 CT scans of normal and diseased lungs. The fusion approach resulted in a Dice coefficient of 0.9855±0.0106 and showed a statistically significant improvement compared to both input segmentation methods. In addition, the failure rate at different segmentation accuracy levels was assessed. For example, when requiring that lung segmentations must have a Dice coefficient of better than 0.97, the fusion approach had a failure rate of 6.13%. In contrast, the failure rate for region growing and model-based methods was 18.14% and 15.69%, respectively. Therefore, the proposed method improves the quality of the lung segmentations, which is important for subsequent quantitative analysis of lungs. Also, to enable a comparison with other methods, results on the LOLA11 challenge test set are reported. PMID:27447897
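
    Both the headline accuracy metric and the failure-rate criterion used in this evaluation are simple to compute; a sketch, assuming boolean segmentation masks:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice overlap between two boolean segmentation masks."""
        a = np.asarray(a, dtype=bool)
        b = np.asarray(b, dtype=bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def failure_rate(dice_scores, threshold=0.97):
        """Fraction of cases below an accuracy threshold, as in the 6.13% figure."""
        return float(np.mean(np.asarray(dice_scores) < threshold))
    ```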

  9. Peat Accumulation in the Everglades (USA) during the Past 4000 Years: Rates, Drivers, and Sources of Error

    NASA Astrophysics Data System (ADS)

    Glaser, P. H.; Volin, J. C.; Givnish, T. J.; Hansen, B. C.; Stricker, C. A.

    2012-12-01

    Tropical and sub-tropical wetlands are considered to be globally important sources for greenhouse gases but their capacity to store carbon is presumably limited by warm soil temperatures and high rates of decomposition. Unfortunately, these assumptions can be difficult to test across long timescales because the chronology, cumulative mass, and completeness of a sedimentary profile are often difficult to establish. We therefore made a detailed analysis of a core from the principal drainage outlet of the Everglades of South Florida, to assess these problems and determine the factors that could govern carbon accumulation in this large sub-tropical wetland. AMS-14C dating provided direct evidence for both hard-water and open-system sources of dating errors, whereas cumulative mass varied depending upon the type of method used. Radiocarbon dates of gastropod shells, nevertheless, seemed to provide a reliable chronology for this core once the hard-water error was quantified and subtracted. Long-term accumulation rates were then calculated to be 12.1 g m-2 yr-1 for carbon, which is less than half the average rate reported for northern and tropical peatlands. Moreover, accumulation rates remained slow and relatively steady for both organic and inorganic strata, and the slow rate of sediment accretion (0.2 mm yr-1) tracked the correspondingly slow rise in sea level (0.35 mm yr-1) reported for South Florida over the past 4000 years. These results suggest that sea level and the local geologic setting may impose long-term constraints on rates of sediment and carbon accumulation in the Everglades and other wetlands.

  10. Carbon and sediment accumulation in the Everglades (USA) during the past 4000 years: Rates, drivers, and sources of error

    NASA Astrophysics Data System (ADS)

    Glaser, Paul H.; Volin, John C.; Givnish, Thomas J.; Hansen, Barbara C. S.; Stricker, Craig A.

    2012-09-01

    Tropical and subtropical wetlands are considered to be globally important sources of greenhouse gases, but their capacity to store carbon is presumably limited by warm soil temperatures and high rates of decomposition. Unfortunately, these assumptions can be difficult to test across long timescales because the chronology, cumulative mass, and completeness of a sedimentary profile are often difficult to establish. We therefore made a detailed analysis of a core from the principal drainage outlet of the Everglades of South Florida in order to assess these problems and determine the factors that could govern carbon accumulation in this large subtropical wetland. Accelerator mass spectroscopy dating provided direct evidence for both hard-water and open-system sources of dating errors, whereas cumulative mass varied depending upon the type of method used. Radiocarbon dates of gastropod shells, nevertheless, seemed to provide a reliable chronology for this core once the hard-water error was quantified and subtracted. Long-term accumulation rates were then calculated to be 12.1 g m-2 yr-1 for carbon, which is less than half the average rate reported for northern and tropical peatlands. Moreover, accumulation rates remained slow and relatively steady for both organic and inorganic strata, and the slow rate of sediment accretion (0.2 mm yr-1) tracked the correspondingly slow rise in sea level (0.35 mm yr-1) reported for South Florida over the past 4000 years. These results suggest that sea level and the local geologic setting may impose long-term constraints on rates of sediment and carbon accumulation in the Everglades and other wetlands.

  11. Carbon and sediment accumulation in the Everglades (USA) during the past 4000 years: rates, drivers, and sources of error

    USGS Publications Warehouse

    Glaser, Paul H.; Volin, John C.; Givnish, Thomas J.; Hansen, Barbara C. S.; Stricker, Craig A.

    2012-01-01

    Tropical and sub-tropical wetlands are considered to be globally important sources for greenhouse gases but their capacity to store carbon is presumably limited by warm soil temperatures and high rates of decomposition. Unfortunately, these assumptions can be difficult to test across long timescales because the chronology, cumulative mass, and completeness of a sedimentary profile are often difficult to establish. We therefore made a detailed analysis of a core from the principal drainage outlet of the Everglades of South Florida, to assess these problems and determine the factors that could govern carbon accumulation in this large sub-tropical wetland. Accelerator mass spectroscopy dating provided direct evidence for both hard-water and open-system sources of dating errors, whereas cumulative mass varied depending upon the type of method used. Radiocarbon dates of gastropod shells, nevertheless, seemed to provide a reliable chronology for this core once the hard-water error was quantified and subtracted. Long-term accumulation rates were then calculated to be 12.1 g m-2 yr-1 for carbon, which is less than half the average rate reported for northern and tropical peatlands. Moreover, accumulation rates remained slow and relatively steady for both organic and inorganic strata, and the slow rate of sediment accretion (0.2 mm yr-1) tracked the correspondingly slow rise in sea level (0.35 mm yr-1) reported for South Florida over the past 4000 years. These results suggest that sea level and the local geologic setting may impose long-term constraints on rates of sediment and carbon accumulation in the Everglades and other wetlands.

  12. Trends and weekly and seasonal cycles in the rate of errors in the clinical management of hospitalized patients.

    PubMed

    Buckley, David; Bulger, David

    2012-08-01

    Studies on the rate of adverse events in hospitalized patients seldom examine temporal patterns. This study presents evidence of both weekly and annual cycles. The study is based on a large and diverse data set, with nearly 5 yrs of data from a voluntary staff-incident reporting system of a large public health care provider in rural southeastern Australia. Data from 63 health care facilities were included, ranging from large non-metropolitan hospitals to small community and aged health care facilities. Poisson regression incorporating an observation-driven autoregressive effect using the GLARMA framework was used to explain daily error counts with respect to long-term trend and weekly and annual effects, with procedural volume as an offset. The annual pattern was modeled using a first-order sinusoidal effect. The rate of errors reported demonstrated an increasing annual trend of 13.4% (95% confidence interval [CI] 10.6% to 16.3%); however, this trend was only significant for errors of minor or no harm to the patient. A strong "weekend effect" was observed. The incident rate ratio for the weekend versus weekdays was 2.74 (95% CI 2.55 to 2.93). The weekly pattern was consistent for incidents of all levels of severity, but it was more pronounced for less severe incidents. There was an annual cycle in the rate of incidents, the number of incidents peaking in October, on the 282nd day of the year (spring in Australia), with an incident rate ratio of 1.09 (95% CI 1.05 to 1.14) compared to the annual mean. There was no so-called "killing season" or "July effect," as the peak in incident rate was not related to the commencement of work by new medical school graduates. The major finding of this study is the rate of adverse events is greater on weekends and during spring. The annual pattern appears to be unrelated to the commencement of new graduates and potentially results from seasonal variation in the case mix of patients or the health of the medical workforce that alters
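
    The model structure described, a long-term trend, a weekend indicator, and a first-order annual sinusoid with procedural volume as an offset, can be sketched as a plain Poisson GLM with statsmodels. The GLARMA autoregressive component is omitted in this sketch, and `dates`, `daily_error_counts`, and `daily_volume` are assumed inputs:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def harmonic_design(dates: pd.DatetimeIndex) -> np.ndarray:
        """Long-term trend + weekend indicator + first-order annual sinusoid,
        mirroring the model structure described in the abstract."""
        t_years = (dates - dates[0]).days.to_numpy() / 365.25
        doy = dates.dayofyear.to_numpy()
        X = np.column_stack([
            t_years,                                   # long-term trend
            (dates.dayofweek >= 5).astype(float),      # weekend indicator
            np.sin(2 * np.pi * doy / 365.25),          # annual cycle, sine term
            np.cos(2 * np.pi * doy / 365.25),          # annual cycle, cosine term
        ])
        return sm.add_constant(X)

    # fit = sm.GLM(daily_error_counts, harmonic_design(dates),
    #              family=sm.families.Poisson(),
    #              offset=np.log(daily_volume)).fit()
    ```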

  13. A survey of computational methods and error rate estimation procedures for peptide and protein identification in shotgun proteomics

    PubMed Central

    Nesvizhskii, Alexey I.

    2010-01-01

    This manuscript provides a comprehensive review of the peptide and protein identification process using tandem mass spectrometry (MS/MS) data generated in shotgun proteomic experiments. The commonly used methods for assigning peptide sequences to MS/MS spectra are critically discussed and compared, from basic strategies to advanced multi-stage approaches. Particular attention is paid to the problem of false-positive identifications. Existing statistical approaches for assessing the significance of peptide to spectrum matches are surveyed, ranging from single-spectrum approaches such as expectation values to global error rate estimation procedures such as false discovery rates and posterior probabilities. The importance of using auxiliary discriminant information (mass accuracy, peptide separation coordinates, digestion properties, etc.) is discussed, and advanced computational approaches for joint modeling of multiple sources of information are presented. This review also includes a detailed analysis of the issues affecting the interpretation of data at the protein level, including the amplification of error rates when going from peptide to protein level, and the ambiguities in inferring the identities of sample proteins in the presence of shared peptides. Commonly used methods for computing protein-level confidence scores are discussed in detail. The review concludes with a discussion of several outstanding computational issues. PMID:20816881
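
    Among the global error rate procedures surveyed, target-decoy false discovery rate estimation is the simplest to illustrate; a minimal sketch, assuming a 1:1 target-decoy search:

    ```python
    import numpy as np

    def target_decoy_fdr(scores, is_decoy, cutoff):
        """FDR estimate at a score cutoff: decoy matches passing the cutoff
        approximate the number of false target matches (1:1 decoy database)."""
        scores = np.asarray(scores, dtype=float)
        is_decoy = np.asarray(is_decoy, dtype=bool)
        passing = scores >= cutoff
        n_decoy = int(np.sum(passing & is_decoy))
        n_target = int(np.sum(passing & ~is_decoy))
        return n_decoy / max(n_target, 1)
    ```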

  14. The safety of electronic prescribing: manifestations, mechanisms, and rates of system-related errors associated with two commercial systems in hospitals

    PubMed Central

    Westbrook, Johanna I; Baysari, Melissa T; Li, Ling; Burke, Rosemary; Richardson, Katrina L; Day, Richard O

    2013-01-01

    Objectives To compare the manifestations, mechanisms, and rates of system-related errors associated with two electronic prescribing systems (e-PS). To determine if the rate of system-related prescribing errors is greater than the rate of errors prevented. Methods Audit of 629 inpatient admissions at two hospitals in Sydney, Australia using the CSC MedChart and Cerner Millennium e-PS. System related errors were classified by manifestation (eg, wrong dose), mechanism, and severity. A mechanism typology comprised errors made: selecting items from drop-down menus; constructing orders; editing orders; or failing to complete new e-PS tasks. Proportions and rates of errors by manifestation, mechanism, and e-PS were calculated. Results 42.4% (n=493) of 1164 prescribing errors were system-related (78/100 admissions). This result did not differ by e-PS (MedChart 42.6% (95% CI 39.1 to 46.1); Cerner 41.9% (37.1 to 46.8)). For 13.4% (n=66) of system-related errors there was evidence that the error was detected prior to study audit. 27.4% (n=135) of system-related errors manifested as timing errors and 22.5% (n=111) wrong drug strength errors. Selection errors accounted for 43.4% (34.2/100 admissions), editing errors 21.1% (16.5/100 admissions), and failure to complete new e-PS tasks 32.0% (32.0/100 admissions). MedChart generated more selection errors (OR=4.17; p=0.00002) but fewer new task failures (OR=0.37; p=0.003) relative to the Cerner e-PS. The two systems prevented significantly more errors than they generated (220/100 admissions (95% CI 180 to 261) vs 78 (95% CI 66 to 91)). Conclusions System-related errors are frequent, yet few are detected. e-PS require new tasks of prescribers, creating additional cognitive load and error opportunities. Dual classification, by manifestation and mechanism, allowed identification of design features which increase risk and potential solutions. e-PS designs with fewer drop-down menu selections may reduce error risk. PMID:23721982

  15. Electron-accepting potential of solvents determines photolysis rates of polycyclic aromatic hydrocarbons: experimental and density functional theory study.

    PubMed

    Shao, Jianping; Chen, Jingwen; Xie, Qing; Wang, Ying; Li, Xuehua; Hao, Ce

    2010-07-15

    Photochemical behaviour of polycyclic aromatic hydrocarbons (PAHs) is strongly dependent on the physical and chemical nature of the media in/on which they exist. To understand the media effects, the photolysis of phenanthrene (PHE) and benzo[a]pyrene (BaP) in several solvents was investigated. Distinct photolysis rate constants for PHE and BaP in the different solvents were observed. Some theoretical parameters reflecting the solvent properties were computed and employed to explain the solvent effects. Acetone competitively absorbed light with PHE and BaP, and the excited acetone molecules played different roles for the photodegradation of PHE and BaP. The photolysis rate constants of PHE and BaP in hexane, isopropanol, ethanol, methanol, acetonitrile and dichloromethane were observed to correlate with the electron-accepting potential of the solvent molecules. Absolute electronegativity of the solvents linearly correlated with the photolytic activity (log k) of the PAHs significantly. The results are important for better understanding the photodegradation mechanism of PAHs in different media. PMID:20303660

  16. Slow-growing cells within isogenic populations have increased RNA polymerase error rates and DNA damage.

    PubMed

    van Dijk, David; Dhar, Riddhiman; Missarova, Alsu M; Espinar, Lorena; Blevins, William R; Lehner, Ben; Carey, Lucas B

    2015-01-01

    Isogenic cells show a large degree of variability in growth rate, even when cultured in the same environment. Such cell-to-cell variability in growth can alter sensitivity to antibiotics, chemotherapy and environmental stress. To characterize transcriptional differences associated with this variability, we have developed a method--FitFlow--that enables the sorting of subpopulations by growth rate. The slow-growing subpopulation shows a transcriptional stress response, but, more surprisingly, these cells have reduced RNA polymerase fidelity and exhibit a DNA damage response. As DNA damage is often caused by oxidative stress, we test the addition of an antioxidant, and find that it reduces the size of the slow-growing population. More generally, we find a significantly altered transcriptome in the slow-growing subpopulation that only partially resembles that of cells growing slowly due to environmental and culture conditions. Slow-growing cells upregulate transposons and express more chromosomal, viral and plasmid-borne transcripts, and thus explore a larger genotypic--and so phenotypic--space. PMID:26268986

  17. Slow-growing cells within isogenic populations have increased RNA polymerase error rates and DNA damage

    PubMed Central

    van Dijk, David; Dhar, Riddhiman; Missarova, Alsu M.; Espinar, Lorena; Blevins, William R.; Lehner, Ben; Carey, Lucas B.

    2015-01-01

    Isogenic cells show a large degree of variability in growth rate, even when cultured in the same environment. Such cell-to-cell variability in growth can alter sensitivity to antibiotics, chemotherapy and environmental stress. To characterize transcriptional differences associated with this variability, we have developed a method—FitFlow—that enables the sorting of subpopulations by growth rate. The slow-growing subpopulation shows a transcriptional stress response, but, more surprisingly, these cells have reduced RNA polymerase fidelity and exhibit a DNA damage response. As DNA damage is often caused by oxidative stress, we test the addition of an antioxidant, and find that it reduces the size of the slow-growing population. More generally, we find a significantly altered transcriptome in the slow-growing subpopulation that only partially resembles that of cells growing slowly due to environmental and culture conditions. Slow-growing cells upregulate transposons and express more chromosomal, viral and plasmid-borne transcripts, and thus explore a larger genotypic—and so phenotypic—space. PMID:26268986

  18. Time-resolved in vivo luminescence dosimetry for online error detection in pulsed dose-rate brachytherapy

    SciTech Connect

    Andersen, Claus E.; Nielsen, Soeren Kynde; Lindegaard, Jacob Christian; Tanderup, Kari

    2009-11-15

    Purpose: The purpose of this study is to present and evaluate a dose-verification protocol for pulsed dose-rate (PDR) brachytherapy based on in vivo time-resolved (1 s time resolution) fiber-coupled luminescence dosimetry. Methods: Five cervix cancer patients undergoing PDR brachytherapy (Varian GammaMed Plus with ¹⁹²Ir) were monitored. The treatments comprised from 10 to 50 pulses (1 pulse/h) delivered by intracavitary/interstitial applicators (tandem-ring systems and/or needles). For each patient, one or two dosimetry probes were placed directly in or close to the tumor region using stainless steel or titanium needles. Each dosimeter probe consisted of a small aluminum oxide crystal attached to an optical fiber cable (1 mm outer diameter) that could guide radioluminescence (RL) and optically stimulated luminescence (OSL) from the crystal to special readout instrumentation. Positioning uncertainty and hypothetical dose-delivery errors (interchanged guide tubes or applicator movements from ±5 to ±15 mm) were simulated in software in order to assess the ability of the system to detect errors. Results: For three of the patients, the authors found no significant differences (P>0.01) for comparisons between in vivo measurements and calculated reference values at the level of dose per dwell position, dose per applicator, or total dose per pulse. The standard deviations of the dose per pulse were less than 3%, indicating a stable dose delivery and a highly stable geometry of applicators and dosimeter probes during the treatments. For the two other patients, the authors noted significant deviations for three individual pulses and for one dosimeter probe. These deviations could have been due to applicator movement during the treatment and one incorrectly positioned dosimeter probe, respectively. Computer simulations showed that the likelihood of detecting a pair of interchanged guide tubes increased by a factor of 10 or more for the considered patients when

  19. Investigation on the bit error rate performance of 40Gb/s space optical communication system based on BPSK scheme

    NASA Astrophysics Data System (ADS)

    Li, Mi; Li, Bowen; Zhang, Xuping; Song, Yuejiang; Liu, Jia; Tu, Guojie

    2015-08-01

    Space optical communication is attracting increasing attention because it offers advantages over microwave communication, such as higher security and better link quality. Current systems have already achieved data rates at the Gb/s level, and the next generation of space optical systems targets 40 Gb/s, a rate that traditional system designs cannot support. This paper introduces a ground-based optical communication system operating at 40 Gb/s as a step toward space optical communication at high data rates. At 40 Gb/s, a waveguide modulator must be used to modulate the optical signal, which is then amplified by a laser amplifier; a more sensitive avalanche photodiode (APD) serves as the detector to improve communication quality. Based on this system, we analyze the communication quality of the downlink of a space optical communication system at 40 Gb/s. The bit error rate (BER) performance, an important measure of communication quality, is discussed as a function of several parameter ratios. The results show that there is an optimum ratio of gain factor to divergence angle that yields the best BER performance, and that increasing the ratio of receiving diameter to divergence angle also improves communication quality. These results help characterize optical communication systems at high data rates and can contribute to system design.

  20. Unacceptably High Error Rates in Vitek 2 Testing of Cefepime Susceptibility in Extended-Spectrum-β-Lactamase-Producing Escherichia coli

    PubMed Central

    Rhodes, Nathaniel J.; Richardson, Chad L.; Heraty, Ryan; Liu, Jiajun; Malczynski, Michael; Qi, Chao

    2014-01-01

    While a lack of concordance is known between gold standard MIC determinations and Vitek 2, the magnitude of the discrepancy and its impact on treatment decisions for extended-spectrum-β-lactamase (ESBL)-producing Escherichia coli are not. Clinical isolates of ESBL-producing E. coli were collected from blood, tissue, and body fluid samples from January 2003 to July 2009. Resistance genotypes were identified by PCR. Primary analyses evaluated the discordance between Vitek 2 and gold standard methods using cefepime susceptibility breakpoint cutoff values of 8, 4, and 2 μg/ml. The discrepancies in MICs between the methods were classified per convention as very major, major, and minor errors. Sensitivity, specificity, and positive and negative predictive values for susceptibility classifications were calculated. A total of 304 isolates were identified; 59% (179) of the isolates carried blaCTX-M, 47% (143) carried blaTEM, and 4% (12) carried blaSHV. At a breakpoint MIC of 8 μg/ml, Vitek 2 produced a categorical agreement of 66.8% and exhibited very major, major, and minor error rates of 23% (20/87 isolates), 5.1% (8/157 isolates), and 24% (73/304), respectively. The sensitivity, specificity, and positive and negative predictive values for a susceptibility breakpoint of 8 μg/ml were 94.9%, 61.2%, 72.3%, and 91.8%, respectively. The sensitivity, specificity, and positive and negative predictive values for a susceptibility breakpoint of 2 μg/ml were 83.8%, 65.3%, 41%, and 93.3%, respectively. Vitek 2 results in unacceptably high error rates for cefepime compared to those of agar dilution for ESBL-producing E. coli. Clinicians should be wary of making treatment decisions on the basis of Vitek 2 susceptibility results for ESBL-producing E. coli. PMID:24752253
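
    The sensitivity, specificity, and predictive values reported here follow directly from the 2x2 agreement table between Vitek 2 calls and the gold standard; a minimal helper (here "positive" means a susceptible call, matching the abstract's usage):

    ```python
    def diagnostic_performance(tp, fn, fp, tn):
        """Sensitivity, specificity, PPV, and NPV from a 2x2 agreement table;
        'positive' denotes a susceptible call against the gold standard."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }
    ```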

  1. Movement error rate for evaluation of machine learning methods for sEMG-based hand movement classification.

    PubMed

    Gijsberts, Arjan; Atzori, Manfredo; Castellini, Claudio; Muller, Henning; Caputo, Barbara

    2014-07-01

    There has been increasing interest in applying learning algorithms to improve the dexterity of myoelectric prostheses. In this work, we present a large-scale benchmark evaluation on the second iteration of the publicly released NinaPro database, which contains surface electromyography data for 6 DOF force activations as well as for 40 discrete hand movements. The evaluation involves a modern kernel method and compares performance of three feature representations and three kernel functions. Both the force regression and movement classification problems can be learned successfully when using a nonlinear kernel function, while the exp- χ(2) kernel outperforms the more popular radial basis function kernel in all cases. Furthermore, combining surface electromyography and accelerometry in a multimodal classifier results in significant increases in accuracy as compared to when either modality is used individually. Since window-based classification accuracy should not be considered in isolation to estimate prosthetic controllability, we also provide results in terms of classification mistakes and prediction delay. To this extent, we propose the movement error rate as an alternative to the standard window-based accuracy. This error rate is insensitive to prediction delays and it allows us therefore to quantify mistakes and delays as independent performance characteristics. This type of analysis confirms that the inclusion of accelerometry is superior, as it results in fewer mistakes while at the same time reducing prediction delay. PMID:24760932
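
    One way to realize a delay-insensitive movement error rate, in the spirit described above though not necessarily the authors' exact definition, is to collapse consecutive windows into movement segments and score each segment by its majority prediction:

    ```python
    import numpy as np

    def movement_error_rate(y_true, y_pred):
        """Collapse consecutive windows into movement segments (runs of the same
        true label, assumed to be small non-negative ints) and count segments
        whose majority-vote prediction is wrong."""
        y_true = np.asarray(y_true)
        y_pred = np.asarray(y_pred)
        boundaries = np.flatnonzero(np.diff(y_true)) + 1
        segments = np.split(np.arange(y_true.size), boundaries)
        errors = sum(
            np.bincount(y_pred[seg]).argmax() != y_true[seg[0]] for seg in segments
        )
        return errors / len(segments)
    ```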

  2. Evaluation of soft error rates using nuclear probes in bulk and SOI SRAMs with a technology node of 90 nm

    NASA Astrophysics Data System (ADS)

    Abo, Satoshi; Masuda, Naoyuki; Wakaya, Fujio; Onoda, Shinobu; Hirao, Toshio; Ohshima, Takeshi; Iwamatsu, Toshiaki; Takai, Mikio

    2010-06-01

    The difference in soft error rates (SERs) between conventional bulk Si and silicon-on-insulator (SOI) static random access memories (SRAMs) with a technology node of 90 nm has been investigated using helium ion probes with energies ranging from 0.8 to 6.0 MeV and a dose of 75 ions/μm2. The SERs in the SOI SRAM were also investigated using oxygen ion probes with energies ranging from 9.0 to 18.0 MeV and doses of 0.14-0.76 ions/μm2. Soft errors occurred in the bulk and SOI SRAMs under helium ion irradiation at energies of 1.95 and 2.10 MeV and above, respectively. The SER in the bulk SRAM saturated at ion energies of 2.5 MeV and above. The SER in the SOI SRAM peaked under helium ion irradiation at 2.5 MeV and decreased drastically with increasing ion energy above 2.5 MeV; helium ions in this energy range generate the maximum amount of excess charge carriers in the SOI body. The soft errors caused by helium ions were induced by a floating-body effect due to excess charge carriers generated in the channel regions. In the SOI SRAM, soft errors occurred under oxygen ion irradiation at energies of 10.5 MeV and above. The SER in the SOI SRAM gradually increased with energies from 10.5 to 13.5 MeV and saturated at 18 MeV, as the amount of charge carriers induced by oxygen ions in this energy range gradually increased. Computer calculation indicated that oxygen ions with energies above 13.0 MeV generated more excess charge carriers than the critical charge of the 90 nm node SOI SRAM with the designed over-layer thickness. The soft errors caused by oxygen ions at energies of 12.5 MeV and below were induced by a floating-body effect due to the excess charge carriers generated in the channel regions, and those at energies of 13.0 MeV and above were induced by both the floating-body effect and the generated excess carriers. The difference in the threshold energy of the oxygen ions between the experiment and the computer calculation might

  3. Effects of Two Commercial Electronic Prescribing Systems on Prescribing Error Rates in Hospital In-Patients: A Before and After Study

    PubMed Central

    Westbrook, Johanna I.; Reckmann, Margaret; Li, Ling; Runciman, William B.; Burke, Rosemary; Lo, Connie; Baysari, Melissa T.; Braithwaite, Jeffrey; Day, Richard O.

    2012-01-01

    Background Considerable investments are being made in commercial electronic prescribing systems (e-prescribing) in many countries. Few studies have measured or evaluated their effectiveness at reducing prescribing error rates, and interactions between system design and errors are not well understood, despite increasing concerns regarding new errors associated with system use. This study evaluated the effectiveness of two commercial e-prescribing systems in reducing prescribing error rates and their propensities for introducing new types of error. Methods and Results We conducted a before and after study involving medication chart audit of 3,291 admissions (1,923 at baseline and 1,368 post e-prescribing system) at two Australian teaching hospitals. In Hospital A, the Cerner Millennium e-prescribing system was implemented on one ward, and three wards, which did not receive the e-prescribing system, acted as controls. In Hospital B, the iSoft MedChart system was implemented on two wards and we compared before and after error rates. Procedural (e.g., unclear and incomplete prescribing orders) and clinical (e.g., wrong dose, wrong drug) errors were identified. Prescribing error rates per admission and per 100 patient days; rates of serious errors (5-point severity scale, those ≥3 were categorised as serious) by hospital and study period; and rates and categories of postintervention “system-related” errors (where system functionality or design contributed to the error) were calculated. Use of an e-prescribing system was associated with a statistically significant reduction in error rates in all three intervention wards (respectively reductions of 66.1% [95% CI 53.9%–78.3%]; 57.5% [33.8%–81.2%]; and 60.5% [48.5%–72.4%]). The use of the system resulted in a decline in errors at Hospital A from 6.25 per admission (95% CI 5.23–7.28) to 2.12 (95% CI 1.71–2.54; p<0.0001) and at Hospital B from 3.62 (95% CI 3.30–3.93) to 1.46 (95% CI 1.20–1.73; p<0

  4. Assessment of error rates in acoustic monitoring with the R package monitoR

    USGS Publications Warehouse

    Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese

    2016-01-01

    Detecting population-scale reactions to climate change and land-use change may require monitoring many sites for many years, a process that is suited for an automated system. We developed and tested monitoR, an R package for long-term, multi-taxa acoustic monitoring programs. We tested monitoR with two northeastern songbird species: black-throated green warbler (Setophaga virens) and ovenbird (Seiurus aurocapilla). We compared detection results from monitoR in 52 10-minute surveys recorded at 10 sites in Vermont and New York, USA to a subset of songs identified by a human that were of a single song type and had visually identifiable spectrograms (e.g. a signal:noise ratio of at least 10 dB: 166 out of 439 total songs for black-throated green warbler, 502 out of 990 total songs for ovenbird). monitoR’s automated detection process uses a ‘score cutoff’, which is the minimum match needed for an unknown event to be considered a detection and results in a true positive, true negative, false positive or false negative detection. At the chosen score cut-offs, monitoR correctly identified presence for black-throated green warbler and ovenbird in 64% and 72% of the 52 surveys using binary point matching, respectively, and 73% and 72% of the 52 surveys using spectrogram cross-correlation, respectively. Of individual songs, 72% of black-throated green warbler songs and 62% of ovenbird songs were identified by binary point matching. Spectrogram cross-correlation identified 83% of black-throated green warbler songs and 66% of ovenbird songs. False positive rates were  for song event detection.
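
    The score-cutoff logic described above amounts to sweeping a threshold over template-match scores and tallying the four detection outcomes; a sketch in Python (monitoR itself is an R package, so this is illustrative only):

    ```python
    import numpy as np

    def detection_tradeoff(scores, is_true_song, cutoffs):
        """True- and false-positive rates of template detections as the score
        cutoff is swept over candidate values."""
        scores = np.asarray(scores, dtype=float)
        truth = np.asarray(is_true_song, dtype=bool)
        rows = []
        for c in cutoffs:
            hit = scores >= c
            tpr = (hit & truth).sum() / max(truth.sum(), 1)
            fpr = (hit & ~truth).sum() / max((~truth).sum(), 1)
            rows.append((c, tpr, fpr))
        return rows
    ```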

  5. Social Acceptance; A Possible Mediator in the Association between Socio-Economic Deprivation and Under-18 Pregnancy Rates?

    ERIC Educational Resources Information Center

    Smith, Debbie Michelle; Roberts, Ron

    2009-01-01

    This study examines the social acceptance of young (under-18) pregnancy by assessing people's acceptance of young pregnancy and abortion in relation to deprivation. A cross-sectional survey design was conducted in two relatively affluent and two relatively deprived local authorities in London (n=570). Contrary to previous findings, participants…

  6. The Effect of Exposure to High Noise Levels on the Performance and Rate of Error in Manual Activities

    PubMed Central

    Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra

    2016-01-01

    Introduction Sound is among the significant environmental factors affecting people’s health; it plays an important role in both physical and psychological injury and also affects individuals’ performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on the performance and rate of error in manual activities. Methods This was an interventional study of 50 students at Shiraz University of Medical Sciences (25 males and 25 females) in which each person served as his or her own control. The effect of noise on performance was assessed at sound levels of 70, 90, and 110 dB, varying the physical features and conditions of the sound source, with performance measured by the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated measurements were used to compare the duration of performance as well as the errors measured in the test. Results We found a direct and significant association between the sound level and the duration of performance. Moreover, participants’ performance was significantly different for different sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). Conclusion A sound level of 110 dB had a marked effect on individuals’ performance, i.e., performance decreased. PMID:27123216

  7. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors. PMID:26649954

  8. Bit Error Rate Analysis for MC-CDMA Systems in Nakagami-m Fading Channels

    NASA Astrophysics Data System (ADS)

    Li, Zexian; Latva-aho, Matti

    2004-12-01

    Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Gaussian Q-function, the characteristic function, and a Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite range integral and an integrand composed of tabulated functions which can be easily computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.
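
    The "single finite range integral" form the abstract refers to comes from the MGF method built on Craig's representation of the Q-function. A minimal numeric sketch for the simpler case of BPSK with MRC over independent Nakagami-m branches (the paper itself treats multiuser MC-CDMA with MRC or EGC):

    ```python
    import numpy as np
    from scipy.integrate import quad

    def bpsk_ber_mrc_nakagami(branch_snrs, m=1.0):
        """Average BPSK BER with MRC over independent Nakagami-m branches,
        computed as a single finite-range integral of the product of branch
        MGFs (Craig's form of the Q-function). m=1.0 gives Rayleigh fading."""
        def integrand(theta):
            s = np.sin(theta) ** 2
            prod = 1.0
            for snr in branch_snrs:
                prod *= (1.0 + snr / (m * s)) ** (-m)
            return prod / np.pi
        value, _ = quad(integrand, 0.0, np.pi / 2.0)
        return value

    # e.g. two equal 10 dB branches under Rayleigh fading:
    # bpsk_ber_mrc_nakagami([10.0, 10.0], m=1.0)
    ```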

  9. Effect of media property variations on shingled magnetic recording channel bit error rate and signal to noise ratio performance

    NASA Astrophysics Data System (ADS)

    Lin, Maria Yu; Teo, Kim Keng; Chan, Kheong Sann

    2015-05-01

    Shingled Magnetic Recording (SMR) is an upcoming technology intended to tide the hard disk drive industry over until heat-assisted magnetic recording or another technology matures. In this work, we study the impact of variations in media parameters on the raw channel bit error rate (BER) through micromagnetic simulations and the grain flipping probability channel model in the SMR situation. This study aims to provide feedback to media designers on how media property variations influence the SMR channel performance. In particular, we analyse the effect of variations in the anisotropy constant (Ku), saturation magnetization (Ms), easy axis (ez), grain size (gs), and exchange coupling (Ax) on the written micromagnetic output and the ensuing hysteresis loop. We also compare these analyses with the channel performance on signal to noise ratio (SNR) and the raw channel BER.

  10. Outage Performance and Average Symbol Error Rate of M-QAM for Maximum Ratio Combining with Multiple Interferers

    NASA Astrophysics Data System (ADS)

    Ahn, Kyung Seung

    In this paper, we investigate the performance of maximum ratio combining (MRC) in the presence of multiple cochannel interferers over a flat Rayleigh fading channel. Closed-form expressions of signal-to-interference-plus-noise ratio (SINR), outage probability, and average symbol error rate (SER) of quadrature amplitude modulation (QAM) with M-ary signaling are obtained for unequal-power interference-to-noise ratio (INR). We also provide an upper bound for the average SER using the moment generating function (MGF) of the SINR. Moreover, we quantify the array gain loss between pure MRC (an MRC system in the absence of CCI) and an MRC system in the presence of CCI. Finally, we verify our analytical results by numerical simulations.
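
    For context, the conditional SER expression that underlies such analyses is the standard square M-QAM approximation in AWGN, which the paper then averages over the fading SINR distribution (a sketch, not the paper's exact expressions):

    ```latex
    % Conditional symbol-error rate of square M-QAM at symbol SNR \gamma_s
    % (standard nearest-neighbour approximation, tight at moderate-to-high SNR);
    % the average SER follows by integrating over the SINR distribution of the
    % MRC output, e.g. via its moment generating function.
    P_s(\gamma_s) \approx 4\left(1 - \frac{1}{\sqrt{M}}\right)
                  Q\!\left(\sqrt{\frac{3\,\gamma_s}{M-1}}\right),
    \qquad Q(x) = \frac{1}{\pi}\int_0^{\pi/2}
                  \exp\!\left(-\frac{x^2}{2\sin^2\theta}\right)\mathrm{d}\theta .
    ```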

  11. Improvement of Bit Error Rate in Holographic Data Storage Using the Extended High-Frequency Enhancement Filter

    NASA Astrophysics Data System (ADS)

    Kim, Do-Hyung; Cho, Janghyun; Moon, Hyungbae; Jeon, Sungbin; Park, No-Cheol; Yang, Hyunseok; Park, Kyoung-Su; Park, Young-Pil

    2013-09-01

    Optimized image restoration is suggested in angular-multiplexing-page-based holographic data storage. To improve the bit error rate (BER), an extended high-frequency enhancement filter is recalculated from the point spread function (PSF) and Gaussian mask as the image restoration filter. Using the extended image restoration filter, the proposed system reduces the number of processing steps compared with the image upscaling method and provides better performance in BER and SNR. Numerical simulations and experiments were performed to verify the proposed method. The proposed system exhibited a marked improvement in BER from 0.02 to 0.002 for a Nyquist factor of 1.1, and from 0.006 to 0 for a Nyquist factor of 1.2. Moreover, the calculation was more than 3 times faster than image restoration with PSF upscaling, owing to the reduced number of processing steps and the lower computational load.

  12. Bit-error-rate performance of non-line-of-sight UV transmission with spatial diversity reception.

    PubMed

    Xiao, Houfei; Zuo, Yong; Wu, Jian; Li, Yan; Lin, Jintong

    2012-10-01

    In non-line-of-sight (NLOS) UV communication links using intensity modulation with direct detection, atmospheric turbulence-induced intensity fluctuations can significantly impair link performance. To mitigate turbulence-induced fading and, therefore, to improve the bit error rate (BER) performance, spatial diversity reception can be used over NLOS UV links, which involves the deployment of multiple receivers. The maximum-likelihood (ML) spatial diversity scheme is derived for spatially correlated NLOS UV links, and the influence of various fading correlations at different receivers on the BER performance is investigated. For the dual-receiver case, ML diversity detection is compared with equal gain combining and optimal combining schemes under different turbulence intensity conditions. PMID:23027306
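
    The benefit of multi-receiver diversity under intensity fluctuations can be sketched with a small Monte Carlo experiment. The model below is a common simplification (independent unit-mean log-normal irradiance and conditional BER Q(sqrt(SNR)·h) for on-off keying), not the paper's correlated ML derivation; the turbulence strength and SNR are assumed values.

```python
# Monte Carlo sketch: BER of IM/DD on-off keying with and without dual-receiver
# equal gain combining under independent log-normal turbulence (unit-mean
# irradiance). A simplified stand-in for the paper's correlated ML analysis.
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

rng = np.random.default_rng(7)
sigma_x, snr_db, trials = 0.3, 10.0, 200_000
snr = 10.0 ** (snr_db / 10.0)

# Unit-mean log-normal irradiance: h = exp(2X), X ~ N(-sigma_x^2, sigma_x^2).
x = rng.normal(-sigma_x**2, sigma_x, size=(trials, 2))
h = np.exp(2.0 * x)

ber_single = np.mean(qfunc(np.sqrt(snr) * h[:, 0]))
ber_egc = np.mean(qfunc(np.sqrt(snr) * h.mean(axis=1)))
print(ber_single, ber_egc)   # combining averages out the deep fades
```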

  13. The effect of narrow-band digital processing and bit error rate on the intelligibility of ICAO spelling alphabet words

    NASA Astrophysics Data System (ADS)

    Schmidt-Nielsen, Astrid

    1987-08-01

    The recognition of ICAO spelling alphabet words (ALFA, BRAVO, CHARLIE, etc.) is compared with diagnostic rhyme test (DRT) scores for the same conditions. The voice conditions include unprocessed speech; speech processed through the DOD standard linear-predictive-coding algorithm operating at 2400 bit/s with random error rates of 0, 2, 5, 8, and 12 percent; and speech processed through an 800-bit/s pattern-matching algorithm. The results suggest that, with distinctive vocabularies, word intelligibility can be expected to remain high even when DRT scores fall into the poor range. However, once the DRT scores fall below 75 percent, the intelligibility can be expected to fall off rapidly; at DRT scores below 50, the recognition of a distinctive vocabulary should also fall below 50 percent.

  14. Analysis of 454 sequencing error rate, error sources, and artifact recombination for detection of Low-frequency drug resistance mutations in HIV-1 DNA

    PubMed Central

    2013-01-01

    Background. 454 sequencing technology is a promising approach for characterizing HIV-1 populations and for identifying low-frequency mutations. The utility of 454 technology for determining allele frequencies and linkage associations in HIV infected individuals has not been extensively investigated. We evaluated the performance of 454 sequencing for characterizing HIV populations with defined allele frequencies. Results. We constructed two HIV-1 RT clones. Clone A was a wild-type sequence. Clone B was identical to clone A except that it contained 13 introduced drug-resistance mutations. The clones were mixed at ratios ranging from 1% to 50% and were amplified by standard PCR conditions and by PCR conditions aimed at reducing PCR-based recombination. The products were sequenced using 454 pyrosequencing. Sequence analysis from standard PCR amplification revealed that 14% of all sequencing reads from a sample with a 50:50 mixture of wild type and mutant DNA were recombinants. The majority of the recombinants were the result of a single crossover event, which can happen during PCR when the DNA polymerase terminates synthesis prematurely. The incompletely extended template then competes for primer sites in subsequent rounds of PCR. Less often, a spectrum of other distinct crossover patterns was also detected. In addition, we observed point mutation errors ranging from 0.01% to 1.0% per base as well as indel (insertion and deletion) errors ranging from 0.02% to nearly 50%. The point errors (single nucleotide substitution errors) were mainly introduced during PCR while indels were the result of pyrosequencing. We then used new PCR conditions designed to reduce PCR-based recombination. Using these new conditions, the frequency of recombination was reduced 27-fold. The new conditions had no effect on point mutation errors. We found that 454 pyrosequencing was capable of identifying minority HIV-1 mutations at frequencies down to 0.1% at some nucleotide positions. Conclusion

  15. Detecting Glaucoma Progression From Localized Rates of Retinal Changes in Parametric and Nonparametric Statistical Framework With Type I Error Control

    PubMed Central

    Balasubramanian, Madhusudhanan; Arias-Castro, Ery; Medeiros, Felipe A.; Kriegman, David J.; Bowd, Christopher; Weinreb, Robert N.; Holst, Michael; Sample, Pamela A.; Zangwill, Linda M.

    2014-01-01

    Purpose. We evaluated three new pixelwise rates of retinal height changes (PixR) strategies to reduce false-positive errors while detecting glaucomatous progression. Methods. Diagnostic accuracy of the nonparametric PixR-NP cluster test (CT), the PixR-NP single threshold test (STT), and the parametric PixR-P STT were compared to statistic image mapping (SIM) using the Heidelberg Retina Tomograph. We included 36 progressing eyes, 210 nonprogressing patient eyes, and 21 longitudinal normal eyes from the University of California, San Diego (UCSD) Diagnostic Innovations in Glaucoma Study. The multiple comparison problem due to simultaneous testing of retinal locations was addressed in PixR-NP CT by controlling the family-wise error rate (FWER) and in STT methods by Lehmann-Romano's k-FWER. For STT methods, progression was defined as an observed progression rate (ratio of number of pixels with significant rate of decrease, i.e., red pixels, to disk size) > 2.5%. The progression criterion for CT and SIM methods was the presence of one or more significant (P < 1%) red-pixel clusters within the disk. Results. Specificity in normals: CT = 81% (90%), PixR-NP STT = 90%, PixR-P STT = 90%, SIM = 90%. Sensitivity in progressing eyes: CT = 86% (86%), PixR-NP STT = 75%, PixR-P STT = 81%, SIM = 39%. Specificity in nonprogressing patient eyes: CT = 49% (55%), PixR-NP STT = 56%, PixR-P STT = 50%, SIM = 79%. Progression detected by PixR in nonprogressing patient eyes was associated with early signs of visual field change that did not yet meet our definition of glaucomatous progression. Conclusions. The PixR provided higher sensitivity in progressing eyes than SIM, with similar specificity in normals, suggesting that PixR strategies can improve our ability to detect glaucomatous progression. Longer follow-up is necessary to determine whether nonprogressing eyes identified as progressing by these methods will develop glaucomatous progression. (ClinicalTrials.gov number, NCT00221897.) PMID:24519427

  16. Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty

    PubMed Central

    Kellman, Philip J.; Mnookin, Jennifer L.; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E.

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  17. Ultra-Short-Term Heart Rate Variability Indexes at Rest and Post-Exercise in Athletes: Evaluating the Agreement with Accepted Recommendations

    PubMed Central

    Esco, Michael R.; Flatt, Andrew A.

    2014-01-01

    The purpose of this study was to evaluate the agreement of the vagal-related heart rate variability index, log-transformed root mean square of successive R-R intervals (lnRMSSD), measured under ultra-short-term conditions (< 60 seconds) with conventional longer term recordings of 5 minutes in collegiate athletes under resting and post-exercise conditions. Electrocardiographic readings were collected from twenty-three athletes within 5-minute segments at rest and at 25-30 minutes of supine recovery following a maximal exercise test. From each 5-minute segment, lnRMSSD was recorded as the criterion measure. Within each 5-minute segment, lnRMSSD was also determined from randomly selected ultra-short-term segments of 10-, 30-, and 60-seconds in length, which were compared to the criterion. When compared to the criterion measures, the intraclass correlation declined (from 0.98 to 0.81, all significant at p < 0.05) and the typical error increased (from 0.11 to 0.34) as the ultra-short-term measurement duration decreased (i.e., from 60 seconds to 10 seconds). In addition, the limits of agreement (Bias ± 1.98 SD) widened as ultra-short-term lnRMSSD duration decreased as follows: 0.00 ± 0.22 ms, -0.07 ± 0.41 ms, -0.20 ± 0.94 ms for the 60-, 30-, and 10-second pre-exercise segments, respectively, and -0.15 ± 0.39 ms, -0.14 ± 0.53 ms, -0.12 ± 0.76 ms for the 60-, 30-, and 10-second post-exercise segments, respectively. This study demonstrated that as ultra-short-term measurement duration decreased from 60 seconds to 10 seconds, the agreement with the criterion decreased. Therefore, 60 seconds appears to be an acceptable recording time for lnRMSSD data collection in collegiate athletes. Key Points The log-transformed root mean square of successive R-R intervals (lnRMSSD) is a vagal-related heart rate variability index that has become a promising method for monitoring individual adaptation to training when measured during resting or post-exercise conditions. This study demonstrated that ln
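
    For concreteness, the criterion and ultra-short-term lnRMSSD computations described above can be sketched in a few lines; the synthetic R-R series below is an assumption used purely for illustration.

```python
# Sketch of the lnRMSSD index: natural log of the root mean square of
# successive R-R interval differences, over a 5-minute criterion window and
# a random 60 s ultra-short sub-segment. The R-R series here is synthetic.
import numpy as np

def ln_rmssd(rr_ms):
    """lnRMSSD from R-R intervals in milliseconds."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.log(np.sqrt(np.mean(diffs ** 2))))

rng = np.random.default_rng(0)
rr = 900.0 + rng.normal(0.0, 40.0, size=350)   # ~5 min of beats at ~67 bpm

t = np.cumsum(rr) / 1000.0                     # beat times in seconds
start = rng.uniform(0.0, t[-1] - 60.0)         # random 60 s window
short = rr[(t >= start) & (t < start + 60.0)]

print(ln_rmssd(rr), ln_rmssd(short))           # criterion vs ultra-short value
```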

  18. THE INFLUENCE OF SEASON AND VOLATILE COMPOUNDS ON ACCEPTANCE RATES OF INTRODUCED EUROPEAN HONEY BEE (APIS MELLIFERA L.) QUEENS INTO EUROPEAN AND AFRICANIZED COLONIES

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We introduced mated European honey bee (Apis mellifera L.) queens into Africanized and European colonies during three different seasons to determine if there were differences in queen acceptance rates. We also sampled volatile compounds emitted by the queens prior to their introduction to determine...

  19. The Differences in Error Rate and Type between IELTS Writing Bands and Their Impact on Academic Workload

    ERIC Educational Resources Information Center

    Müller, Amanda

    2015-01-01

    This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided from the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers can make a minimum of 206 errors, 96 errors and 35 errors per 1000 words. The following section…

  20. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    PubMed

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PMID:24773354
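
    The core finding lends itself to a small simulation: under a true null hypothesis, removing |Z| > 2 observations from skewed (sum-score-like) samples before an independent samples t test inflates the rejection rate, while a Mann-Whitney U test on the unaltered data stays near the nominal level. The sample size, the chi-square population, and the replication count below are illustrative assumptions.

```python
# Minimal simulation in the spirit of the study: draw two samples from the
# same skewed population (H0 true), remove |Z| > 2 "outliers", run a t test,
# and estimate the Type I error rate; a Mann-Whitney U test on the unaltered
# data is shown for comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps, alpha = 25, 20_000, 0.05
rej_t, rej_mw = 0, 0

def drop_outliers(x):
    z = (x - x.mean()) / x.std(ddof=1)
    return x[np.abs(z) <= 2]

for _ in range(reps):
    a, b = rng.chisquare(3, n), rng.chisquare(3, n)
    if stats.ttest_ind(drop_outliers(a), drop_outliers(b)).pvalue < alpha:
        rej_t += 1
    if stats.mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
        rej_mw += 1

print(rej_t / reps, rej_mw / reps)   # t test after removal exceeds nominal 5%
```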

  1. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media

    NASA Astrophysics Data System (ADS)

    Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.

    2016-06-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of K_hard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media.

  2. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media.

    PubMed

    Suess, D; Fuger, M; Abert, C; Bruckner, F; Vogler, C

    2016-01-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of K_hard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287

  3. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media

    PubMed Central

    Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.

    2016-01-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of K_hard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287

  4. Bit error rate performance of pi/4-DQPSK in a frequency-selective fast Rayleigh fading channel

    NASA Technical Reports Server (NTRS)

    Liu, Chia-Liang; Feher, Kamilo

    1991-01-01

    The bit error rate (BER) performance of pi/4-differential quadrature phase shift keying (DQPSK) modems in cellular mobile communication systems is derived and analyzed. The system is modeled as a frequency-selective fast Rayleigh fading channel corrupted by additive white Gaussian noise (AWGN) and co-channel interference (CCI). The probability density function of the phase difference between two consecutive symbols of M-ary differential phase shift keying (DPSK) signals is first derived. In M-ary DPSK systems, the information is completely contained in this phase difference. For pi/4-DQPSK, the BER is derived in a closed form and calculated directly. Numerical results show that for the 24 kBd (48 kb/s) pi/4-DQPSK operated at a carrier frequency of 850 MHz and C/I less than 20 dB, the BER will be dominated by CCI if the vehicular speed is below 100 mi/h. In this derivation, frequency-selective fading is modeled by two independent Rayleigh signal paths. Only one co-channel is assumed in this derivation. The results obtained are also shown to be valid for discriminator detection of M-ary DPSK signals.

  5. Improvement of bit error rate and page alignment in the holographic data storage system by using the structural similarity method.

    PubMed

    Chen, Yu-Ta; Ou-Yang, Mang; Lee, Cheng-Chung

    2012-06-01

    Although widely recognized as a promising candidate for the next generation of data storage devices, holographic data storage systems (HDSS) incur adverse effects such as noise, misalignment, and aberration. Therefore, based on the structural similarity (SSIM) concept, this work presents a more accurate locating approach than the gray level weighting method (GLWM). Three case studies demonstrate the effectiveness of the proposed approach. Case 1 focuses on achieving a high performance of a Fourier lens in HDSS, Cases 2 and 3 replace the Fourier lens with a normal lens to decrease the quality of the HDSS, and Case 3 demonstrates the feasibility of a defocus system in the worst-case scenario. Moreover, the bit error rate (BER) is evaluated in several average matrices extended from the located position. Experimental results demonstrate that the proposed SSIM method renders more accurate centering and a lower BER: about 2 dB lower than that of the GLWM in Cases 1 and 2, and 1.5 dB lower in Case 3. PMID:22695607
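
    A sketch of the SSIM-based locating idea: slide the known reference page over the captured image and take the integer offset that maximizes the structural similarity score. It assumes scikit-image's structural_similarity and hypothetical page/capture arrays; it illustrates the concept, not the paper's implementation.

```python
# Sketch: locate a holographic data page by exhaustive search over integer
# shifts, scoring each candidate window against the reference page with SSIM.
# `capture` and `page` are hypothetical float arrays in [0, 1]; `capture` must
# exceed `page` by 2 * search pixels on each axis (and page >= 7x7 for the
# default SSIM window).
import numpy as np
from skimage.metrics import structural_similarity as ssim

def locate_page(capture, page, search=5):
    h, w = page.shape
    best_score, best_offset = -1.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            window = capture[search + dy : search + dy + h,
                             search + dx : search + dx + w]
            score = ssim(window, page, data_range=1.0)
            if score > best_score:
                best_score, best_offset = score, (dy, dx)
    return best_offset, best_score
```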

  6. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    prohibitively expensive, as it would require manufacturing numerous amplifiers, in addition to acquiring the required digital hardware. As an alternative, the time-domain TWT interaction model developed here provides the capability to establish a computational test bench where ISI or bit error rate can be simulated as a function of TWT operating parameters and component geometries. Intermodulation products, harmonic generation, and backward waves can also be monitored with the model for similar correlations. The advancements in computational capabilities and corresponding potential improvements in TWT performance may prove to be the enabling technologies for realizing unprecedented data rates for near real time transmission of the increasingly larger volumes of data demanded by planned commercial and Government satellite communications applications. This work is in support of the Cross Enterprise Technology Development Program in Headquarters' Advanced Technology & Mission Studies Division and the Air Force Office of Scientific Research Small Business Technology Transfer programs.

  7. Automated measurement of the bit-error rate as a function of signal-to-noise ratio for microwave communications systems

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Daugherty, Elaine S.; Kramarchuk, Ihor

    1987-01-01

    The performance of microwave systems and components for digital data transmission can be characterized by a plot of the bit-error rate as a function of the signal-to-noise ratio (or E_b/N_0). Methods for the efficient automated measurement of bit-error rates and signal-to-noise ratios, developed at NASA Lewis Research Center, are described. Noise measurement considerations and time requirements for measurement accuracy, as well as computer control and data processing methods, are discussed.
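
    The measurement loop has a direct software analogue: sweep E_b/N_0, push random bits through a simulated channel, count errors, and compare against theory. The sketch below uses BPSK over AWGN, for which BER = Q(sqrt(2 E_b/N_0)); it illustrates the automation pattern rather than the NASA Lewis hardware procedure.

```python
# Sketch: automated BER-vs-Eb/N0 sweep for BPSK over AWGN, comparing the
# measured error rate against the theoretical Q(sqrt(2 Eb/N0)) curve.
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(2)
for ebno_db in range(0, 9, 2):
    ebno = 10.0 ** (ebno_db / 10.0)
    bits = rng.integers(0, 2, 200_000)
    tx = 2.0 * bits - 1.0                          # BPSK mapping, Eb = 1
    rx = tx + rng.normal(0.0, np.sqrt(1.0 / (2.0 * ebno)), bits.size)
    measured = np.mean((rx > 0).astype(int) != bits)
    theory = 0.5 * erfc(np.sqrt(ebno))             # Q(sqrt(2 Eb/N0))
    print(ebno_db, measured, theory)
```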

  8. Sensory evaluation ratings and melting characteristics show that okra gum is an acceptable milk-fat ingredient substitute in chocolate frozen dairy dessert.

    PubMed

    Romanchik-Cerpovicz, Joelle E; Costantino, Amanda C; Gunn, Laura H

    2006-04-01

    Reducing dietary fat intake may lower the risk of developing coronary heart disease. This study examined the feasibility of substituting okra gum for 25%, 50%, 75%, or 100% milk fat in frozen chocolate dairy dessert. Fifty-six consumers evaluated the frozen dairy desserts using a hedonic scale. Consumers rated color, smell, texture, flavor, aftertaste, and overall acceptability characteristics of all products as acceptable. All ratings were similar among the products except for the aftertaste rating, which was significantly lower for chocolate frozen dairy dessert containing 100% milk-fat replacement with okra gum compared with the control (0% milk-fat replacement) (P<0.05). Whereas melting points of all products were similar, melting rates slowed significantly as milk-fat replacement with okra gum increased, suggesting that okra gum may increase the stability of frozen dairy desserts (P<0.05). Overall, this study shows that okra gum is an acceptable milk-fat ingredient substitute in chocolate frozen dairy dessert. PMID:16567157

  9. Estimation of hominoid ancestral population sizes under bayesian coalescent models incorporating mutation rate variation and sequencing errors.

    PubMed

    Burgess, Ralph; Yang, Ziheng

    2008-09-01

    Estimation of population parameters for the common ancestors of humans and the great apes is important in understanding our evolutionary history. In particular, inference of population size for the human-chimpanzee common ancestor may shed light on the process by which the 2 species separated and on whether the human population experienced a severe size reduction in its early evolutionary history. In this study, the Bayesian method of ancestral inference of Rannala and Yang (2003. Bayes estimation of species divergence times and ancestral population sizes using DNA sequences from multiple loci. Genetics. 164:1645-1656) was extended to accommodate variable mutation rates among loci and random species-specific sequencing errors. The model was applied to analyze a genome-wide data set of approximately 15,000 neutral loci (7.4 Mb) aligned for human, chimpanzee, gorilla, orangutan, and macaque. We obtained robust and precise estimates for effective population sizes along the hominoid lineage extending back approximately 30 Myr to the cercopithecoid divergence. The results showed that ancestral populations were 5-10 times larger than modern humans along the entire hominoid lineage. The estimates were robust to the priors used and to model assumptions about recombination. The unusually low X chromosome divergence between human and chimpanzee could not be explained by variation in the male mutation bias or by current models of hybridization and introgression. Instead, our parameter estimates were consistent with a simple instantaneous process for human-chimpanzee speciation but showed a major reduction in X chromosome effective population size peculiar to the human-chimpanzee common ancestor, possibly due to selective sweeps on the X prior to separation of the 2 species. PMID:18603620

  10. Primer ID Validates Template Sampling Depth and Greatly Reduces the Error Rate of Next-Generation Sequencing of HIV-1 Genomic RNA Populations

    PubMed Central

    Zhou, Shuntai; Jones, Corbin; Mieczkowski, Piotr

    2015-01-01

    Validating the sampling depth and reducing sequencing errors are critical for studies of viral populations using next-generation sequencing (NGS). We previously described the use of Primer ID to tag each viral RNA template with a block of degenerate nucleotides in the cDNA primer. We now show that low-abundance Primer IDs (offspring Primer IDs) are generated due to PCR/sequencing errors. These artifactual Primer IDs can be removed using a cutoff model for the number of reads required to make a template consensus sequence. We have modeled the fraction of sequences lost due to Primer ID resampling. For a typical sequencing run, less than 10% of the raw reads are lost to offspring Primer ID filtering and resampling. The remaining raw reads are used to correct for PCR resampling and sequencing errors. We also demonstrate that Primer ID reveals bias intrinsic to PCR, especially at low template input or utilization. cDNA synthesis and PCR convert ca. 20% of RNA templates into recoverable sequences, and 30-fold sequence coverage recovers most of these template sequences. We have directly measured the residual error rate to be around 1 in 10,000 nucleotides. We use this error rate and the Poisson distribution to define the cutoff to identify preexisting drug resistance mutations at low abundance in an HIV-infected subject. Collectively, these studies show that >90% of the raw sequence reads can be used to validate template sampling depth and to dramatically reduce the error rate in assessing a genetically diverse viral population using NGS. IMPORTANCE. Although next-generation sequencing (NGS) has revolutionized sequencing strategies, it suffers from serious limitations in defining sequence heterogeneity in a genetically diverse population, such as HIV-1, due to PCR resampling and PCR/sequencing errors. The Primer ID approach reveals the true sampling depth and greatly reduces errors. Knowing the sampling depth allows the construction of a model of how to maximize
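
    The final step described above, using the measured residual error rate with the Poisson distribution to define a detection cutoff, can be sketched directly. The template count below is an assumed example; the 1-in-10,000 residual error rate comes from the abstract.

```python
# Sketch: smallest per-site mutation count unlikely (P < 0.001) to arise from
# residual errors alone, under X ~ Poisson(error_rate * templates).
from scipy.stats import poisson

error_rate = 1e-4          # residual errors per nucleotide (from the abstract)
templates = 5000           # assumed number of template consensus sequences
expected_errors = error_rate * templates

k = 1
while poisson.sf(k - 1, expected_errors) >= 1e-3:   # P(X >= k)
    k += 1
print(k)   # counts at or above k are called as real minority mutations
```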

  11. Performance analysis of content-addressable search and bit-error rate characteristics of a defocused volume holographic data storage system.

    PubMed

    Das, Bhargab; Joseph, Joby; Singh, Kehar

    2007-08-01

    One of the methods for smoothing the high intensity dc peak in the Fourier spectrum for reducing the reconstruction error in a Fourier transform volume holographic data storage system is to record holograms some distance away from or in front of the Fourier plane. We present the results of our investigation on the performance of such a defocused holographic data storage system in terms of bit-error rate and content search capability. We have evaluated the relevant recording geometry through numerical simulation, by obtaining the intensity distribution at the output detector plane. This has been done by studying the bit-error rate and the content search capability as a function of the aperture size and position of the recording material away from the Fourier plane. PMID:17676163

  12. Admission Rates of Student-Athletes and General Students: A Comparison of Acceptance Rates of the Student-Athlete and the General Student at DePauw University, Greencastle, Indiana State University.

    ERIC Educational Resources Information Center

    Jaworski, Brian; Gilman, David A.

    This study examined admission rates of student-athletes and students in general at DePauw University (Indiana) over a three-year period. Data on admissions from 1994 through 1996 were reviewed, and it was found that 47 percent of applicants identified themselves as student-athletes and that 82 percent of all applicants were accepted. The results…

  13. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case

    NASA Astrophysics Data System (ADS)

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-07-01

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol in which one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied to other quantum information processing.

  14. Heritability and molecular genetic basis of antisaccade eye tracking error rate: a genome-wide association study.

    PubMed

    Vaidyanathan, Uma; Malone, Stephen M; Donnelly, Jennifer M; Hammer, Micah A; Miller, Michael B; McGue, Matt; Iacono, William G

    2014-12-01

    Antisaccade deficits reflect abnormalities in executive function linked to various disorders including schizophrenia, externalizing psychopathology, and neurological conditions. We examined the genetic bases of antisaccade error in a sample of community-based twins and parents (N = 4,469). Biometric models showed that about half of the variance in the antisaccade response was due to genetic factors and half due to nonshared environmental factors. Molecular genetic analyses supported these results, showing that the heritability accounted for by common molecular genetic variants approximated biometric estimates. Genome-wide analyses revealed several SNPs, as well as two genes, B3GNT7 and NCL, on Chromosome 2, associated with antisaccade error. SNPs and genes hypothesized to be associated with antisaccade error based on prior work, although generating some suggestive findings for MIR137, GRM8, and CACNG2, could not be confirmed. PMID:25387707

  15. Resident Physicians' Clinical Training and Error Rate: The Roles of Autonomy, Consultation, and Familiarity with the Literature

    ERIC Educational Resources Information Center

    Naveh, Eitan; Katz-Navon, Tal; Stern, Zvi

    2015-01-01

    Resident physicians' clinical training poses unique challenges for the delivery of safe patient care. Residents face special risks of involvement in medical errors since they have tremendous responsibility for patient care, yet they are novice practitioners in the process of learning and mastering their profession. The present study explores…

  16. American Recovery and Reinvestment Act of 2009. Interim Report on Customer Acceptance, Retention, and Response to Time-Based Rates from the Consumer Behavior Studies

    SciTech Connect

    Cappers, Peter; Hans, Liesel; Scheer, Richard

    2015-06-01

    Time-based rate programs, enabled by utility investments in advanced metering infrastructure (AMI), are increasingly being considered by utilities as tools to reduce peak demand and enable customers to better manage consumption and costs. There are several customer systems that are relatively new to the marketplace and have the potential for improving the effectiveness of these programs, including in-home displays (IHDs), programmable communicating thermostats (PCTs), and web portals. Policy and decision makers are interested in more information about customer acceptance, retention, and response before moving forward with expanded deployments of AMI-enabled new rates and technologies. Under the Smart Grid Investment Grant Program (SGIG), the U.S. Department of Energy (DOE) partnered with several utilities to conduct consumer behavior studies (CBS). The goals involved applying randomized and controlled experimental designs for estimating customer responses more precisely and credibly to advance understanding of time-based rates and customer systems, and provide new information for improving program designs, implementation strategies, and evaluations. The intent was to produce more robust and credible analysis of impacts, costs, benefits, and lessons learned and assist utility and regulatory decision makers in evaluating investment opportunities involving time-based rates. To help achieve these goals, DOE developed technical guidelines to help the CBS utilities estimate customer acceptance, retention, and response more precisely.

  17. Arbuscular mycorrhizal symbiosis increases host plant acceptance and population growth rates of the two-spotted spider mite Tetranychus urticae.

    PubMed

    Hoffmann, Daniela; Vierheilig, Horst; Riegler, Petra; Schausberger, Peter

    2009-01-01

    Most terrestrial plants live in symbiosis with arbuscular mycorrhizal (AM) fungi. Studies on the direct interaction between plants and mycorrhizal fungi are numerous whereas studies on the indirect interaction between such fungi and herbivores feeding on aboveground plant parts are scarce. We studied the impact of AM symbiosis on host plant choice and life history of an acarine surface piercing-sucking herbivore, the polyphagous two-spotted spider mite Tetranychus urticae. Experiments were performed on detached leaflets taken from common bean plants (Phaseolus vulgaris) colonized or not colonized by the AM fungus Glomus mosseae. T. urticae females were subjected to choice tests between leaves from mycorrhizal and non-mycorrhizal plants. Juvenile survival and development, adult female survival, oviposition rate and offspring sex ratio were measured in order to estimate the population growth parameters of T. urticae on either substrate. Moreover, we analyzed the macro- and micronutrient concentration of the aboveground plant parts. Adult T. urticae females preferentially resided and oviposited on mycorrhizal versus non-mycorrhizal leaflets. AM symbiosis significantly decreased embryonic development time and increased the overall oviposition rate as well as the proportion of female offspring produced during peak oviposition. Altogether, the improved life history parameters resulted in significant changes in net reproductive rate, intrinsic rate of increase, doubling time and finite rate of increase. Aboveground parts of colonized plants showed higher concentrations of P and K whereas Mn and Zn were both found at lower levels. This is the first study documenting the effect of AM symbiosis on the population growth rates of a herbivore, tracking the changes in life history characteristics throughout the life cycle. We discuss the AM-plant-herbivore interaction in relation to plant quality, herbivore feeding type and site and the evolutionary implications in a multi

  18. Tradeoff between no-call reduction in genotyping error rate and loss of sample size for genetic case/control association studies.

    PubMed

    Kang, S J; Gordon, D; Brown, A M; Ott, J; Finch, S J

    2004-01-01

    Single nucleotide polymorphisms (SNP) may be genotyped for use in case-control designs to test for association between a SNP marker and a disease using a 2 x 3 chi-squared test of independence. Genotyping is often based on underlying continuous measurements, which are classified into genotypes. A "no-call" procedure is sometimes used in which borderline observations are not classified. This procedure has the simultaneous effect of reducing the genotype error rate and the expected number of genotypes observed. Both quantities affect the power of the statistic. We develop methods for calculating the genotype error rate, the expected number of genotypes observed, and the expected power of the resulting test as a function of the no-call procedure. We examine the statistical properties of the chi-squared test using a no-call procedure when the underlying continuous measure of genotype classification is a three-component mixture of univariate normal distributions under a range of parameter specifications. The genotype error rate decreases as the no-call region is increased. The expected number of observations genotyped also decreases. Our key finding is that the expected power of the chi-squared test is not sensitive to the no-call procedure. That is, the benefits of reduced genotype error rate are almost exactly balanced by the losses due to reduced genotype observations. For an underlying univariate normal mixture of genotype classification to be analyzed with a 2 x 3 chi-squared test, there is little, if any, increase in power using a no-call procedure. PMID:14992497
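
    A small simulation makes the tradeoff concrete: drawing the underlying measurement from a three-component normal mixture and widening the no-call region lowers the genotype error rate while shrinking the number of calls. The component means, standard deviation, and genotype frequencies below are illustrative assumptions.

```python
# Sketch of the no-call tradeoff: classify genotypes from a univariate
# measurement drawn from a three-component normal mixture, leaving
# observations within `margin` of a classification boundary uncalled.
import numpy as np

rng = np.random.default_rng(3)
means, sd, probs = np.array([-1.0, 0.0, 1.0]), 0.25, np.array([0.25, 0.5, 0.25])
bounds = np.array([-0.5, 0.5])            # midpoints between component means

true = rng.choice(3, size=100_000, p=probs)
x = rng.normal(means[true], sd)

for margin in (0.0, 0.05, 0.10, 0.15):
    near = np.min(np.abs(x[:, None] - bounds[None, :]), axis=1) < margin
    called = ~near
    assigned = np.digitize(x, bounds)     # genotype calls 0, 1, 2
    err = np.mean(assigned[called] != true[called])
    print(margin, called.mean(), err)     # wider no-call: fewer calls, fewer errors
```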

  19. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case.

    PubMed

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-01-01

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol in which one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied to other quantum information processing. PMID:27452275

  20. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case

    PubMed Central

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-01-01

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol in which one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied to other quantum information processing. PMID:27452275

  1. Simulation of rare events in quantum error correction

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey; Vargo, Alexander

    2013-12-01

    We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances where logical errors are extremely unlikely, we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability P_L for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay P_L ∼ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
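
    The quoted exponential decay invites a quick illustration: given splitting-method estimates of P_L at several code distances, the decay rate α(p) falls out of a linear fit to log P_L versus d. The data points below are synthetic placeholders, not results from the paper.

```python
# Sketch: recover alpha(p) from P_L ~ A * exp(-alpha * d) by fitting log(P_L)
# against code distance d. The P_L values are fabricated stand-ins generated
# around alpha = 0.45 purely to demonstrate the fit.
import numpy as np

rng = np.random.default_rng(4)
d = np.array([8, 10, 12, 14, 16, 18, 20])
p_l = 1e-2 * np.exp(-0.45 * d) * (1 + 0.05 * rng.normal(size=d.size))

slope, intercept = np.polyfit(d, np.log(p_l), 1)
print(-slope)   # estimate of alpha(p); ~0.45 for this synthetic data
```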

  2. A software solution to estimate the SEU-induced soft error rate for systems implemented on SRAM-based FPGAs

    NASA Astrophysics Data System (ADS)

    Zhongming, Wang; Zhibin, Yao; Hongxia, Guo; Min, Lu

    2011-05-01

    SRAM-based FPGAs are very susceptible to radiation-induced single-event upsets (SEUs) in space applications. The failure mechanisms in an FPGA's configuration memory differ from those in traditional memory devices. As a result, there is a growing demand for methodologies that can quantitatively evaluate the impact of this effect. Fault injection appears to meet this requirement. In this paper, we propose a new methodology to analyze soft errors in SRAM-based FPGAs. The method is based on an in-depth understanding of the device architecture and of the failure mechanisms induced by configuration upsets. The developed programs read in the placed-and-routed netlist, search for critical logic nodes and paths that may destroy the circuit's topological structure, and then query a database storing the decoded relationship between the configurable resources and the corresponding control bits to identify the sensitive bits. Accelerator irradiation tests and fault-injection experiments were carried out to validate this approach.

  3. Analysis of calibration data for the uranium active neutron coincidence counting collar with attention to errors in the measured neutron coincidence rate

    NASA Astrophysics Data System (ADS)

    Croft, Stephen; Burr, Tom; Favalli, Andrea; Nicholson, Andrew

    2016-03-01

    The declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar - Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative approaches (linear) to the same experimental and corresponding simulated representative datasets. We find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
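
    To make the two fitting routes concrete, the sketch below uses a commonly assumed Padé-type calibration form R = aρ/(1 + bρ) relating the measured coincidence rate R to the 235U linear density ρ, fitting it both nonlinearly and after the reciprocal transformation that linearizes it. All numbers are synthetic, and the exact Padé form used in UNCL calibrations may differ.

```python
# Sketch of the two calibration routes: a direct nonlinear fit of an assumed
# Pade form R = a*rho / (1 + b*rho), versus the linearized fit of
# 1/R = (1/a)*(1/rho) + b/a, which re-weights the measurement errors.
import numpy as np
from scipy.optimize import curve_fit

def pade(rho, a, b):
    return a * rho / (1.0 + b * rho)

rng = np.random.default_rng(5)
rho = np.linspace(5.0, 50.0, 10)                 # assumed 235U linear densities
r = pade(rho, 12.0, 0.03) * (1 + 0.02 * rng.normal(size=rho.size))

(a_nl, b_nl), _ = curve_fit(pade, rho, r, p0=(10.0, 0.01))

slope, icept = np.polyfit(1.0 / rho, 1.0 / r, 1)  # linearized route
a_lin, b_lin = 1.0 / slope, icept / slope

print(a_nl, b_nl, a_lin, b_lin)   # the routes diverge as errors in R grow
```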

  4. Prediction of error rates in dose-imprinted memories on board CRRES by two different methods. [Combined Release and Radiation Effects Satellite

    NASA Technical Reports Server (NTRS)

    Brucker, G. J.; Stassinopoulos, E. G.

    1991-01-01

    An analysis of the expected space radiation effects on the single event upset (SEU) properties of CMOS/bulk memories onboard the Combined Release and Radiation Effects Satellite (CRRES) is presented. Dose-imprint data from ground test irradiations of identical devices are applied to the predictions of cosmic-ray-induced space upset rates in the memories onboard the spacecraft. The calculations take into account the effect of total dose on the SEU sensitivity of the devices as the dose accumulates in orbit. Estimates of error rates, which involved an arbitrary selection of a single pair of threshold linear energy transfer (LET) and asymptotic cross-section values, were compared to the results of an integration over the cross-section curves versus LET. The integration gave lower upset rates than the use of the selected values of the SEU parameters. Since the integration approach is more accurate and eliminates the need for an arbitrary definition of threshold LET and asymptotic cross section, it is recommended for all error rate predictions where experimental sigma-versus-LET curves are available.

  5. Reliability of perceived neighborhood conditions and the effects of measurement error on self-rated health across urban and rural neighborhoods

    PubMed Central

    Pruitt, Sandi L.; Jeffe, Donna B.; Yan, Yan; Schootman, Mario

    2011-01-01

    Background. Limited psychometric research has examined the reliability of self-reported measures of neighborhood conditions, the effect of measurement error on associations between neighborhood conditions and health, and potential differences in the reliabilities between neighborhood strata (urban vs. rural and low vs. high poverty). We assessed overall and stratified reliability of self-reported perceived neighborhood conditions using 5 scales (Social and Physical Disorder, Social Control, Social Cohesion, Fear) and 4 single items (Multidimensional Neighboring). We also assessed measurement error-corrected associations of these conditions with self-rated health. Methods. Using random-digit dialing, 367 women without breast cancer (matched controls from a larger study) were interviewed twice, 2–3 weeks apart. We assessed test-retest (intraclass correlation coefficients [ICC]/weighted kappa [k]) and internal consistency reliability (Cronbach's α). Differences in reliability across neighborhood strata were tested using bootstrap methods. Regression calibration corrected estimates for measurement error. Results. All measures demonstrated satisfactory internal consistency (α ≥ .70) and either moderate (ICC/k = .41–.60) or substantial (ICC/k = .61–.80) test-retest reliability in the full sample. Internal consistency did not differ by neighborhood strata. Test-retest reliability was significantly lower among rural (vs. urban) residents for 2 scales (Social Control, Physical Disorder) and 2 Multidimensional Neighboring items; test-retest reliability was higher for Physical Disorder and lower for 1 Multidimensional Neighboring item among the high (vs. low) poverty strata. After measurement error correction, the magnitudes of associations between neighborhood conditions and self-rated health were larger, particularly in the rural population. Conclusion. Research is needed to develop and test reliable measures of perceived neighborhood conditions relevant to the health

  6. Estimates of rates and errors for measurements of direct-γ and direct-γ + jet production by polarized protons at RHIC

    SciTech Connect

    Beddo, M.E.; Spinka, H.; Underwood, D.G.

    1992-08-14

    Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.

  7. Average bit error rate performance analysis of subcarrier intensity modulated MRC and EGC FSO systems with dual branches over M distribution turbulence channels

    NASA Astrophysics Data System (ADS)

    Wang, Ran-ran; Wang, Ping; Cao, Tian; Guo, Li-xin; Yang, Yintang

    2015-07-01

    Based on space diversity reception, a binary phase-shift keying (BPSK) modulated free space optical (FSO) system over Málaga (M) fading channels is investigated in detail. For both independently and identically distributed and independently and non-identically distributed dual branches, analytical average bit error rate (ABER) expressions in terms of the Fox H-function are derived for the maximal ratio combining (MRC) and equal gain combining (EGC) diversity techniques, respectively, by transforming the modified Bessel function of the second kind into the integral form of the Meijer G-function. Monte Carlo (MC) simulation is also provided to verify the accuracy of the presented models.

  8. Choice of Reference Sequence and Assembler for Alignment of Listeria monocytogenes Short-Read Sequence Data Greatly Influences Rates of Error in SNP Analyses

    PubMed Central

    Pightling, Arthur W.; Petronella, Nicholas; Pagotto, Franco

    2014-01-01

    The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: i) depth of sequencing coverage, ii) choice of reference-guided short-read sequence assembler, iii) choice of reference genome, and iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers should

  9. Choice of reference sequence and assembler for alignment of Listeria monocytogenes short-read sequence data greatly influences rates of error in SNP analyses.

    PubMed

    Pightling, Arthur W; Petronella, Nicholas; Pagotto, Franco

    2014-01-01

    The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: i) depth of sequencing coverage, ii) choice of reference-guided short-read sequence assembler, iii) choice of reference genome, and iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers should

  10. Acceptance speech.

    PubMed

    Yusuf, C K

    1994-01-01

    I am proud and honored to accept this award on behalf of the Government of Bangladesh, and the millions of Bangladeshi children saved by oral rehydration solution. The Government of Bangladesh is grateful for this recognition of its commitment to international health and population research and cost-effective health care for all. The Government of Bangladesh has already made remarkable strides forward in the health and population sector, and this was recognized in UNICEF's 1993 "State of the World's Children". The national contraceptive prevalence rate, at 40%, is higher than that of many developed countries. It is appropriate that Bangladesh, where ORS was discovered, has the largest ORS production capacity in the world. It was remarkable that after the devastating cyclone in 1991, the country was able to produce enough ORS to meet the needs and remain self-sufficient. Similarly, Bangladesh has one of the most effective, flexible and efficient control of diarrheal disease and epidemic response programs in the world. Throughout the country, doctors have been trained in diarrheal disease management, and stores of ORS are maintained ready for any outbreak. Despite grim predictions after the 1991 cyclone and the 1993 floods, relatively few people died from diarrheal disease. This is indicative of the strength of the national program. I want to take this opportunity to acknowledge the contribution of ICDDR, B and the important role it plays in supporting the Government's efforts in the health and population sector. The partnership between the Government of Bangladesh and ICDDR, B has already borne great fruit, and I hope and believe that it will continue to do so for many years in the future. Thank you. PMID:12345479

  11. Bit error rate optimization of an acousto-optic tracking system for free-space laser communications

    NASA Astrophysics Data System (ADS)

    Sofka, J.; Nikulin, V.

    2006-02-01

    Optical communications systems have been gaining momentum with the increasing demand for transmission bandwidth in the last several years. Optical-cable-based solutions have become an attractive alternative to copper-based systems in the most bandwidth-demanding applications, owing to their increased bandwidth and longer inter-repeater distances. The promise of similar benefits over radio communications systems is driving research into free-space laser communications. Along with increased communications bandwidth, a free-space laser communications system offers lower power consumption and the possibility of covert data links, owing to the concentration of the laser's energy into a narrow beam. A narrow beam, however, requires much more accurate and agile steering, so that a data link can be maintained between communication platforms in relative motion or in the presence of vibrations. This paper presents a laser beam tracking system employing an acousto-optic cell capable of deflecting a laser beam at a very high rate (on the order of tens of kHz). The tracking system is subjected to vibrations to simulate a realistic implementation, resulting in an increase in BER. The performance of the system can be significantly improved through digital control. A constant-gain controller is complemented by a Kalman filter whose parameters are optimized to achieve the lowest possible BER for a given vibration spectrum.
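
    The control idea reduces to a compact sketch: a scalar Kalman filter tracks the slowly wandering beam error from noisy position measurements, and the acousto-optic deflection is commanded to cancel the estimate. The random-walk vibration model and the noise variances below are assumed placeholders, not measured spectra.

```python
# Sketch: scalar Kalman filter tracking a random-walk beam-position error from
# noisy measurements; the commanded deflection cancels the current estimate.
# Noise variances and the vibration model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
q, r = 1e-4, 1e-2                 # process / measurement noise variances
x_hat, p_var = 0.0, 1.0           # state estimate and its variance

beam = np.cumsum(rng.normal(0.0, np.sqrt(q), 500))   # random-walk vibration
residuals = []
for position in beam:
    z = position + rng.normal(0.0, np.sqrt(r))       # noisy position reading
    p_var += q                                       # predict (random walk)
    k = p_var / (p_var + r)                          # Kalman gain
    x_hat += k * (z - x_hat)                         # measurement update
    p_var *= 1.0 - k
    residuals.append(position - x_hat)               # error after cancellation

print(np.std(residuals), np.std(beam))   # residual jitter vs raw beam wander
```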

  12. Effects of box size, frequency of lifting, and height of lift on maximum acceptable weight of lift and heart rate for male university students in Iran

    PubMed Central

    Abadi, Ali Salehi Sahl; Mazlomi, Adel; Saraji, Gebraeil Nasl; Zeraati, Hojjat; Hadian, Mohammad Reza; Jafari, Amir Homayoun

    2015-01-01

    Introduction In spite of the widespread use of automation in industry, manual material handling (MMH) is still performed in many occupational settings. The emphasis on ergonomics in MMH tasks is due to the potential risks of workplace accidents and injuries. This study aimed to assess the effect of box size, frequency of lift, and height of lift on the maximum acceptable weight of lift (MAWL) and on the heart rates of male university students in Iran. Methods This experimental study was conducted in 2015 with 15 male students recruited from Tehran University of Medical Sciences. Each participant performed 18 different lifting tasks that involved three lifting frequencies (1 lift/min, 4.3 lifts/min, and 6.67 lifts/min), three lifting heights (floor to knuckle, knuckle to shoulder, and shoulder to arm reach), and two box sizes. Each set of experiments was conducted during a 20-min work period using the free-style lifting technique. The working heart rates (WHR) were recorded for the entire duration. In this study, we used SPSS version 18 software and descriptive statistical methods, analysis of variance (ANOVA), and the t-test for data analysis. Results The results of the ANOVA showed that there was a significant difference between the means of MAWL across the frequencies of lifts (p = 0.02). Tukey’s post hoc test indicated that there was a significant difference between the frequencies of 1 lift/min and 6.67 lifts/min (p = 0.01). There was a significant difference between the mean heart rates across the frequencies of lifts (p = 0.006), and Tukey’s post hoc test indicated a significant difference between the frequencies of 1 lift/min and 6.67 lifts/min (p = 0.004). However, there was no significant difference in the mean MAWL or the mean heart rate across the lifting heights (p > 0.05). The results of the t-test showed that there was a significant difference between the mean MAWL and the mean heart rate for the two box sizes (p …

  13. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close ...

  14. The cost-effectiveness and consumer acceptability of taxation strategies to reduce rates of overweight and obesity among children in Australia: study protocol

    PubMed Central

    2013-01-01

    Background Childhood obesity is a recognised public health problem and around 25% of Australian children are overweight or obese. A major contributor is the obesogenic environment which encourages over consumption of energy dense nutrient poor food. Taxation is commonly proposed as a mechanism to reduce consumption of poor food choices and hence reduce rates of obesity and overweight in the community. Methods/Design An economic model will be developed to assess the lifetime benefits and costs to a cohort of Australian children by reducing energy dense nutrient poor food consumption through taxation mechanisms. The model inputs will be derived from a series of smaller studies. Food options for taxation will be derived from literature and expert opinion, the acceptability and impact of price changes will be explored through a Citizen’s Jury and a discrete choice experiment and price elasticities will be derived from the discrete choice experiment and consumption data. Discussion The health care costs of managing rising levels of obesity are a challenge for all governments. This study will provide a unique contribution to the international knowledge base by engaging a variety of robust research techniques, with a multidisciplinary focus and be responsive to consumers from diverse socio-economic backgrounds. PMID:24330325

  15. Improving the Response Rate to a Street Survey: An Evaluation of the "But You Are Free to Accept or to Refuse" Technique.

    ERIC Educational Resources Information Center

    Gueguen, Nicolas; Pascual, Alexandre

    2005-01-01

    The "but you are free to accept or to refuse" technique is a compliance procedure in which someone is approached with a request by simply telling him/her that he/she is free to accept or to refuse the request. This semantic evocation leads to increased compliance with the request. Furthermore, in most of the studies in which this technique was…

  16. Attenuation and bit error rate for four co-propagating spatially multiplexed optical communication channels of exactly same wavelength in step index multimode fibers

    NASA Astrophysics Data System (ADS)

    Murshid, Syed H.; Chakravarty, Abhijit

    2011-06-01

    Spatial domain multiplexing (SDM) utilizes co-propagation of exactly the same wavelength in optical fibers to increase the bandwidth by integer multiples. Input signals from multiple independent single-mode pigtail laser sources are launched at different input angles into a single multimode carrier fiber. The SDM channels follow helical paths and traverse the carrier fiber without interfering with each other. The optical energy from the different sources is spatially distributed and takes the form of concentric circular donut-shaped rings, where each ring corresponds to an independent laser source. At the output end of the fiber these donut-shaped independent channels can be separated either with the help of bulk optics or with integrated concentric optical detectors. This paper presents the experimental setup and results for a four-channel SDM system. The attenuation and bit error rate for the individual channels of such a system are also presented.

  17. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    PubMed

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

    In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression for describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak to moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness is found to have lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations. PMID:26560913
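
    The BER evaluation step described above can be sketched numerically: average an on-off-keying error probability over a log-normal intensity PDF parameterized by the scintillation index. The OOK/erfc form and the unit-mean-intensity normalization below are standard free-space-optics modeling assumptions, not expressions taken from the paper itself.

```python
# Mean BER for on-off keying under log-normal intensity fading.
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def ber_lognormal(snr, scint_index):
    """snr: mean signal-to-noise ratio (linear); scint_index: sigma_I^2."""
    sigma2 = np.log(1.0 + scint_index)   # variance of log-intensity
    mu = -sigma2 / 2.0                   # enforces <I> = 1
    def integrand(I):
        # log-normal PDF of the received intensity
        p = np.exp(-(np.log(I) - mu)**2 / (2*sigma2)) / (I*np.sqrt(2*np.pi*sigma2))
        # conditional OOK bit-error probability at instantaneous intensity I
        return p * 0.5 * erfc(snr * I / (2.0*np.sqrt(2.0)))
    ber, _ = quad(integrand, 1e-6, 50.0)
    return ber

# stronger scintillation -> higher average BER at the same mean SNR
for s in (0.05, 0.2, 0.5):
    print(f"scintillation index {s}: BER = {ber_lognormal(10.0, s):.2e}")
```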

  18. Noncompliance pattern due to medication errors at a Teaching Hospital in Srikot, India

    PubMed Central

    Thakur, Heenopama; Thawani, Vijay; Raina, Rangeel Singh; Kothiyal, Gitanjali; Chakarabarty, Mrinmoy

    2013-01-01

    Objective: To study the medication errors leading to noncompliance in a tertiary care teaching hospital. Materials and Methods: This study was conducted in a tertiary care hospital of a teaching institution in Srikot, Garhwal, Uttarakhand, to analyze the medication errors in 500 indoor prescriptions from the medicine, surgery, obstetrics and gynecology, pediatrics, and ENT departments over five months and in 100 outdoor patients of the medicine department. Results: The medication error rate was found to be 22.4% for indoor patients and 11.4% for outdoor patients, as against the standard acceptable error rate of 3%. The most errors were observed in the indoor prescriptions of the surgery department (44 errors), followed by medicine (32) and gynecology (25), in the 500 cases studied, leading to faulty administration of medicines. Conclusion: Many medication errors were noted which go against the practice of rational therapeutics. Such studies can be directed to usher in the rational use of medicines for increasing compliance and therapeutic benefits. PMID:23833376

  19. Comparison on the sensitivity of fiber optic SONET OC-48 PIN-TIA receivers measured by using synchronous modulation intermixing technique and bit-error-rate tester

    NASA Astrophysics Data System (ADS)

    Lin, Gong-Ru; Liao, Yu-Sheng

    2004-04-01

    The sensitivity of SONET p-i-n photodiode receivers with transimpedance amplifier (PIN-TIA) at data rates from OC-3 to OC-48, measured by using a standard bit-error-rate tester (BERT) and a novel synchronous-modulation inter-mixing (SMIM) technique, is compared. For SONET OC-48 PIN-TIA receivers with a required BER of better than 10^-10, a threshold inter-mixed voltage below 15.8 mV obtained by the SMIM method is reported to correspond to a PIN-TIA receiver sensitivity beyond -32 dBm as determined by the BERT. The analysis indicates that the inter-mixed voltage must be increased from 12.5 mV to 20.4 mV to improve the PIN-TIA receiver sensitivity from -31 dBm to -33 dBm. Compared to the BERT, the SMIM is a relatively simple and low-cost technique for on-line mass-production diagnostics, for measuring the sensitivity and evaluating the BER performance of PIN-TIA receivers.

  20. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g., inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
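
    As a concrete example of the error-detection layer mentioned above, the sketch below implements a bitwise CRC-16 using the CCITT generator polynomial 0x1021 commonly associated with the CCSDS recommendation; the init value and framing here are assumptions for the sketch, and the CCSDS books remain the normative reference.

```python
# Bitwise CRC-16 (CCITT polynomial 0x1021, init 0xFFFF, no final XOR).
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            # shift left; XOR in the generator when the top bit falls off
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry frame payload"          # hypothetical frame contents
check = crc16_ccitt(frame)
print(hex(check))
# With no final XOR, re-running the CRC over frame + appended checksum yields 0,
# which is how the receiver detects an error-free frame.
assert crc16_ccitt(frame + check.to_bytes(2, "big")) == 0
```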

  1. Error coding simulations

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1993-11-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g., inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.

  2. UGV acceptance testing

    NASA Astrophysics Data System (ADS)

    Kramer, Jeffrey A.; Murphy, Robin R.

    2006-05-01

    With over 100 models of unmanned vehicles now available for military and civilian safety, security, or rescue applications, it is important for agencies to establish acceptance testing. However, there appear to be no general guidelines for what constitutes a reasonable acceptance test. This paper describes i) a preliminary method for acceptance testing by a customer of the mechanical and electrical components of an unmanned ground vehicle system, ii) how it has been applied to a man-packable micro-robot, and iii) the value of testing both to ensure that the customer has a workable system and to improve design. The test method automated the operation of the robot to repeatedly exercise all aspects and combinations of its components for 6 hours. The acceptance testing process uncovered many failures consistent with those shown to occur in the field, showing that testing by the user does predict failures. The process also demonstrated that testing by the manufacturer can provide important design data that can be used to identify, diagnose, and prevent long-term problems. Also, the structured testing environment showed that sensor systems can be used to predict errors and changes in performance, as well as to uncover unmodeled behavior in subsystems.

  3. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
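
    The contrast between the two learning rules can be made concrete with a short sketch: a Rescorla-Wagner-style total-error (TER) update, in which all cues share one prediction error, versus a local-error (LER) update, in which each cue learns from its own discrepancy. The learning rate and the compound-training example are illustrative assumptions, not the paper's simulations.

```python
# Total-error (TER) vs local-error (LER) associative learning rules.
import numpy as np

def train(cues, outcomes, rule="TER", alpha=0.1):
    w = np.zeros(cues.shape[1])
    for x, y in zip(cues, outcomes):
        if rule == "TER":
            err = y - x @ w              # one shared error across the compound
            w += alpha * err * x
        else:                            # LER: each cue's own prediction error
            w += alpha * (y - w) * x
    return w

# AB+ compound training: both cues present, outcome always occurs
cues = np.tile([1.0, 1.0], (50, 1))
outcomes = np.ones(50)
print("TER weights:", train(cues, outcomes, "TER"))  # ~[0.5, 0.5]: credit is shared
print("LER weights:", train(cues, outcomes, "LER"))  # ~[1.0, 1.0]: each cue learns alone
```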

  4. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates caused by changes in rain statistics arising from 1) evolution of the official algorithms used to process the data and 2) differences from other remote sensing systems, such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
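
    The sampling-error definition above lends itself to a simple Monte Carlo sketch: compare the monthly total a continuously observing sensor would report with totals scaled up from intermittent overpasses. The rain-rate model (a lag-1 autocorrelated, intermittent process) and the 12-hour revisit period are assumptions for illustration only.

```python
# Monte Carlo estimate of satellite sampling error for monthly rain totals.
import numpy as np

rng = np.random.default_rng(0)

def simulate_month(hours=720, wet_frac=0.1, rho=0.9):
    """Hourly area-averaged rain rate with temporal persistence (arbitrary units)."""
    z = np.empty(hours)
    z[0] = rng.normal()
    for t in range(1, hours):
        z[t] = rho * z[t-1] + np.sqrt(1 - rho**2) * rng.normal()
    # rain only during the wettest fraction of hours, zero otherwise
    return np.where(z > np.quantile(z, 1 - wet_frac), np.abs(z), 0.0)

def sampling_error(revisit_h=12, n_trials=2000):
    errs = []
    for _ in range(n_trials):
        rain = simulate_month()
        truth = rain.sum()                              # the "staring sensor" total
        sampled = rain[::revisit_h].mean() * len(rain)  # scale up the snapshots
        if truth > 0:
            errs.append((sampled - truth) / truth)
    return np.std(errs)                                 # relative RMS sampling error

print(f"relative sampling error ~ {sampling_error():.2f}")
```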

  5. Correlation of anomalous write error rates and ferromagnetic resonance spectrum in spin-transfer-torque-magnetic-random-access-memory devices containing in-plane free layers

    SciTech Connect

    Evarts, Eric R.; Rippard, William H.; Pufall, Matthew R.; Heindl, Ranko

    2014-05-26

    In a small fraction of magnetic-tunnel-junction-based magnetic random-access memory devices with in-plane free layers, the write-error rates (WERs) are higher than expected on the basis of the macrospin or quasi-uniform magnetization reversal models. In devices with increased WERs, the product of effective resistance and area, tunneling magnetoresistance, and coercivity do not deviate from typical device properties. However, the field-swept, spin-torque, ferromagnetic resonance (FS-ST-FMR) spectra with an applied DC bias current deviate significantly for such devices. With a DC bias of 300 mV (producing 9.9 × 10^6 A/cm^2) or greater, these anomalous devices show an increase in the fraction of the power present in FS-ST-FMR modes corresponding to higher-order excitations of the free-layer magnetization. As much as 70% of the power is contained in higher-order modes compared to ≈20% in typical devices. Additionally, a shift in the uniform-mode resonant field that is correlated with the magnitude of the WER anomaly is detected at DC biases greater than 300 mV. These differences in the anomalous devices indicate a change in the micromagnetic resonant mode structure at high applied bias.

  6. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  7. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  8. Measurement error revisited

    NASA Astrophysics Data System (ADS)

    Henderson, Robert K.

    1999-12-01

    It is widely accepted in the electronics industry that measurement gauge error variation should be no larger than 10% of the related specification window. In a previous paper, 'What Amount of Measurement Error is Too Much?', the author used a framework from the process industries to evaluate the impact of measurement error variation in terms of both customer and supplier risk (i.e., Non-conformance and Yield Loss). Application of this framework in its simplest form suggested that in many circumstances the 10% criterion might be more stringent than is reasonably necessary. This paper reviews the framework and results of the earlier work, then examines some of the possible extensions to this framework suggested in that paper, including variance component models and sampling plans applicable in the photomask and semiconductor businesses. The potential impact of imperfect process control practices will be examined as well.

  9. Acceptance speech.

    PubMed

    Carpenter, M

    1994-01-01

    In Bangladesh, the assistant administrator of USAID gave an acceptance speech at an awards ceremony on the occasion of the 25th anniversary of oral rehydration solution (ORS). The ceremony celebrated the key role of the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B) in the discovery of ORS. Its research activities over the last 25 years have brought ORS to every village in the world, preventing more than a million deaths each year. ORS is the most important medical advance of the 20th century. It is affordable and client-oriented, a true appropriate technology. USAID has provided more than US$ 40 million to ICDDR,B for diarrheal disease and measles research, urban and rural applied family planning and maternal and child health research, and vaccine development. ICDDR,B began as the relatively small Cholera Research Laboratory and has grown into an acclaimed international center for health, family planning, and population research. It leads the world in diarrheal disease research. ICDDR,B is the leading center for applied health research in South Asia. It trains public health specialists from around the world. The government of Bangladesh and the international donor community have actively joined in support of ICDDR,B. The government applies the results of ICDDR,B research to its programs to improve the health and well-being of Bangladeshis. ICDDR,B now also studies acute respiratory diseases and measles. Population and health comprise 1 of USAID's 4 strategic priorities, the others being economic growth, environment, and democracy. USAID promotes people's participation in these 4 areas and in the design and implementation of development projects. USAID is committed to the use and improvement of ORS and to complementary strategies that further reduce diarrhea-related deaths. Continued collaboration with a strong user perspective and integrated services will lead to sustainable development. PMID:12345470

  10. Design and Demonstration of a 4×4 SFQ Network Switch Prototype System and 10-Gbps Bit-Error-Rate Measurement

    NASA Astrophysics Data System (ADS)

    Kameda, Yoshio; Hashimoto, Yoshihito; Yorozu, Shinichi

    We developed a 4×4 SFQ network switch prototype system and demonstrated its operation at 10 Gbps. The system's core is composed of two SFQ chips: a 4×4 switch and a 6-channel voltage driver. The 4×4 switch chip contained both a switch fabric (i.e., a data path) and a switch scheduler (i.e., a controller). Both chips were attached to a multichip-module (MCM) carrier, which was then installed in a cryocooled system with 32 10-Gbps ports. Each chip contained about 2100 Josephson junctions on a 5-mm × 5-mm die. An NEC standard 2.5-kA/cm^2 fabrication process was used for the switch chip. We increased the critical current density to 10 kA/cm^2 for the driver chip to improve speed while maintaining wide bias margins. MCM implementation enabled us to use a hybrid critical-current-density technology. Voltage pulses were transferred between the two chips through passive transmission lines on the MCM carrier. The cryocooled system was cooled down to about 4 K using a two-stage 1-W cryocooler. We correctly operated the whole system at 10 Gbps. The switch scheduler, which is driven by an on-chip clock generator, operated at 40 GHz. The speed gap between SFQ and room-temperature devices was filled by on-chip SFQ FIFO buffers or shift registers. We measured the bit error rate at 10 Gbps and found that it was on the order of 10^-13 for the 4×4 SFQ switch fabric. In addition, using semiconductor interface circuitry, we built a four-port SFQ Ethernet switch. All the components except for a compressor were installed in a standard 19-inch rack, filling a space 21 U (933.5 mm or 36.75 inches) in height. After four personal computers (PCs) were connected to the switch, we successfully transferred video data between them.

  11. Error compensation for thermally induced errors on a machine tool

    SciTech Connect

    Krulewich, D.A.

    1996-11-08

    Heat flow from internal and external sources and the environment creates machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The biggest problems are where to locate the temperature sensors and how many are required. This research develops a method to determine the number and location of temperature measurements.
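
    The linear compensation model described above amounts to fitting deflection as an affine function of discrete temperature readings. Below is a minimal least-squares sketch on synthetic data; the sensor count and coefficients are invented, and the cited work's actual contribution — choosing how many sensors and where — is not reproduced here.

```python
# Least-squares fit of deflection to discrete temperature measurements.
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_sensors = 200, 6
T = 20 + 5 * rng.random((n_obs, n_sensors))          # temperature readings (deg C)
true_c = np.array([3.0, -1.5, 0.8, 0.0, 0.0, 0.2])   # um deflection per deg C (synthetic)
deflection = T @ true_c + 4.0 + 0.5 * rng.normal(size=n_obs)  # + offset + noise

A = np.hstack([T, np.ones((n_obs, 1))])              # affine design matrix
coef, *_ = np.linalg.lstsq(A, deflection, rcond=None)
predicted = A @ coef                                 # compensation prediction
print("RMS residual after compensation:",
      np.sqrt(np.mean((deflection - predicted) ** 2)))
```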

  12. Can reading rate acceleration improve error monitoring and cognitive abilities underlying reading in adolescents with reading difficulties and in typical readers?

    PubMed

    Horowitz-Kraus, Tzipi; Breznitz, Zvia

    2014-01-28

    Dyslexia is characterized by slow, inaccurate reading and by deficits in executive functions. The deficit in reading is exemplified by impaired error monitoring, which can be specifically shown through neuroimaging, in changes in Error-/Correct-related negativities (ERN/CRN). The current study aimed to investigate whether a reading intervention program (Reading Acceleration Program, or RAP) could improve overall reading, as well as error monitoring and other cognitive abilities underlying reading, in adolescents with reading difficulties. Participants with reading difficulties and typical readers were trained with the RAP for 8 weeks. Their reading and error monitoring were characterized both behaviorally and electrophysiologically through a lexical decision task. Behaviorally, the reading training improved "contextual reading speed" and decreased reading errors in both groups. Improvements were also seen in speed of processing, memory and visual screening. Electrophysiologically, ERN increased in both groups following training, but the increase was significantly greater in the participants with reading difficulties. Furthermore, an association between the improvement in reading speed and the change in difference between ERN and CRN amplitudes following training was seen in participants with reading difficulties. These results indicate that improving deficits in error monitoring and speed of processing are possible underlying mechanisms of the RAP intervention. We suggest that ERN is a good candidate for use as a measurement in evaluating the effect of reading training in typical and disabled readers. PMID:24316242

  13. An Observational Study of the Impact of a Computerized Physician Order Entry System on the Rate of Medication Errors in an Orthopaedic Surgery Unit

    PubMed Central

    Hernandez, Fabien; Majoul, Elyes; Montes-Palacios, Carlota; Antignac, Marie; Cherrier, Bertrand; Doursounian, Levon; Feron, Jean-Marc; Robert, Cyrille; Hejblum, Gilles; Fernandez, Christine; Hindlet, Patrick

    2015-01-01

    Aim To assess the impact of the implementation of a Computerized Physician Order Entry (CPOE) associated with a pharmaceutical checking of medication orders on medication errors in the 3 stages of drug management (i.e. prescription, dispensing and administration) in an orthopaedic surgery unit. Methods A before-after observational study was conducted in the 66-bed orthopaedic surgery unit of a teaching hospital (700 beds) in Paris France. Direct disguised observation was used to detect errors in prescription, dispensing and administration of drugs, before and after the introduction of computerized prescriptions. Compliance between dispensing and administration on the one hand and the medical prescription on the other hand was studied. The frequencies and types of errors in prescribing, dispensing and administration were investigated. Results During the pre and post-CPOE period (two days for each period) 111 and 86 patients were observed, respectively, with corresponding 1,593 and 1,388 prescribed drugs. The use of electronic prescribing led to a significant 92% decrease in prescribing errors (479/1593 prescribed drugs (30.1%) vs 33/1388 (2.4%), p < 0.0001) and to a 17.5% significant decrease in administration errors (209/1222 opportunities (17.1%) vs 200/1413 (14.2%), p < 0.05). No significant difference was found in regards to dispensing errors (430/1219 opportunities (35.3%) vs 449/1407 (31.9%), p = 0.07). Conclusion The use of CPOE and a pharmacist checking medication orders in an orthopaedic surgery unit reduced the incidence of medication errors in the prescribing and administration stages. The study results suggest that CPOE is a convenient system for improving the quality and safety of drug management. PMID:26207363

  14. TU-C-BRE-08: IMRT QA: Selecting Meaningful Gamma Criteria Based On Error Detection Sensitivity

    SciTech Connect

    Steers, J; Fraass, B

    2014-06-15

    Purpose: To develop a strategy for defining meaningful tolerance limits and for studying the sensitivity of IMRT QA gamma criteria by inducing known errors in QA plans. Methods: IMRT QA measurements (ArcCHECK, Sun Nuclear) were compared to QA plan calculations with induced errors. Many (>24) gamma comparisons between data and calculations were performed for each of several kinds of cases and classes of induced error types with varying magnitudes (e.g., MU errors ranging from -10% to +10%), resulting in over 3,000 comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves, representing the range of missed errors in routine IMRT QA under various gamma criteria. Results: This study demonstrates that random, case-specific, and systematic errors can be detected by the error curve analysis. Depending on the location of the peak of the error curve (e.g., not centered about zero), 3%/3 mm (10% dose threshold) criteria may miss MU errors of up to 10% and random MLC errors of up to 5 mm. Additionally, using larger dose thresholds for specific devices may increase error sensitivity (for the same X%/Y mm criteria) by up to a factor of two. This analysis will allow clinics to select more meaningful gamma criteria based on QA device, treatment techniques, and acceptable error tolerances. Conclusion: We propose a strategy for selecting gamma parameters based on the sensitivity of gamma criteria and individual QA devices to induced calculation errors in QA plans. Our data suggest large errors may be missed using conventional gamma criteria and that using stricter criteria with an increased dose threshold may reduce the range of missed errors. This approach allows quantification of gamma criteria sensitivity and is straightforward to apply to other combinations of devices and treatment techniques.
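
    A toy version of the error-curve analysis can be sketched in one dimension: compute a global 3%/3 mm gamma passing rate between a reference profile and copies of it with induced MU-scaling errors, then tabulate passing rate against error magnitude. Real IMRT QA uses 2-D/3-D dose distributions and vendor gamma implementations; everything below is a simplified assumption.

```python
# 1-D global gamma (3%/3 mm) passing rate vs induced MU-scaling error.
import numpy as np

def gamma_pass_rate(ref, test, x, dose_tol=0.03, dist_tol=3.0, threshold=0.10):
    d_norm = dose_tol * ref.max()             # global dose criterion
    mask = ref > threshold * ref.max()        # low-dose threshold
    passed = []
    for i in np.where(mask)[0]:
        # gamma^2 against every test point; keep the minimum
        gamma2 = ((test - ref[i]) / d_norm) ** 2 + ((x - x[i]) / dist_tol) ** 2
        passed.append(np.sqrt(gamma2.min()) <= 1.0)
    return np.mean(passed)

x = np.linspace(-50, 50, 201)                 # position (mm)
ref = np.exp(-x**2 / (2 * 15**2))             # Gaussian "field" as reference dose
for mu_err in [0.0, 0.02, 0.05, 0.10]:        # induced MU scaling errors
    rate = gamma_pass_rate(ref, ref * (1 + mu_err), x)
    print(f"MU error {mu_err:+.0%}: pass rate {rate:.1%}")
```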

  15. Dose error from deviation of dwell time and source position for high dose-rate 192Ir in remote afterloading system

    PubMed Central

    Okamoto, Hiroyuki; Aikawa, Ako; Wakita, Akihisa; Yoshio, Kotaro; Murakami, Naoya; Nakamura, Satoshi; Hamada, Minoru; Abe, Yoshihisa; Itami, Jun

    2014-01-01

    The influence of deviations in dwell times and source positions for 192Ir HDR-RALS was investigated. The potential dose errors for various kinds of brachytherapy procedures were evaluated. The deviations of dwell time ΔT of a 192Ir HDR source for the various dwell times were measured with a well-type ionization chamber. The deviations of source position ΔP were measured with two methods. One is to measure the actual source position using a check ruler device. The other is to analyze peak distances from radiographic film irradiated with a 20-mm gap between the dwell positions. The composite dose errors were calculated using a Gaussian distribution with ΔT and ΔP as 1σ of the measurements. Dose errors depend on dwell time and on the distance from the point of interest to the dwell position. To evaluate the dose error in clinical practice, dwell times and point-of-interest distances were obtained from actual treatment plans involving cylinder, tandem-ovoid, tandem-ovoid with interstitial needles, multiple interstitial needles, and surface-mold applicators. The ΔT and ΔP were 32 ms (maximum for various dwell times) and 0.12 mm (ruler), 0.11 mm (radiographic film). The multiple interstitial needles showed the highest dose error (2%), while the others showed less than approximately 1%. Potential dose error due to dwell time and source position deviation can depend on the kind of brachytherapy technique. In all cases, the multiple-interstitial-needle technique is the most susceptible. PMID:24566719
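
    The composite-error calculation described above can be illustrated with a small Monte Carlo: perturb dwell time and source position by Gaussian deviations of roughly the measured 1σ values and observe the spread of a toy point dose that scales as t/r². The single-dwell geometry and the nominal dwell time and distance are assumptions for illustration.

```python
# Propagate dwell-time and source-position deviations into a toy point dose.
import numpy as np

rng = np.random.default_rng(2)
sigma_t, sigma_p = 0.032, 0.12    # s and mm, ~the measured 1-sigma deviations
t_nom = 10.0                      # nominal dwell time (s), assumed
r_nom = 10.0                      # dwell-to-point distance (mm), assumed

t = t_nom + sigma_t * rng.normal(size=100_000)
r = r_nom + sigma_p * rng.normal(size=100_000)
dose = t / r**2                   # inverse-square toy dose kernel
rel_err = np.std(dose) / (t_nom / r_nom**2)
print(f"relative dose error ~ {rel_err:.2%}")  # shrinks for longer dwells / larger r
```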

  16. Effect of the Transcendental Meditation Program on Graduation, College Acceptance and Dropout Rates for Students Attending an Urban Public High School

    ERIC Educational Resources Information Center

    Colbert, Robert D.

    2013-01-01

    High school graduation rates nationally have declined in recent years, despite public and private efforts. The purpose of the current study was to determine whether practice of the Quiet Time/Transcendental Meditation® program at a medium-size urban school results in higher school graduation rates compared to students who do not receive training…

  17. Functional Error Models to Accelerate Nested Sampling

    NASA Astrophysics Data System (ADS)

    Josset, L.; Elsheikh, A. H.; Demyanov, V.; Lunati, I.

    2014-12-01

    In the Nested Sampling algorithm, each proposed geostatistical realization is first evaluated through the approximate model to decide whether or not it is useful to perform a full physics simulation. This improves the acceptance rate of full physics simulations and opens the door to iteratively testing the performance and improving the quality of the error model.

  18. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  19. Random errors in egocentric networks.

    PubMed

    Almquist, Zack W

    2012-10-01

    The systematic errors that are induced by a combination of human memory limitations and common survey design and implementation have long been studied in the context of egocentric networks. Despite this, little if any work exists in the area of random error analysis on these same networks; this paper offers a perspective on the effects of random errors on egonet analysis, as well as the effects of using egonet measures as independent predictors in linear models. We explore the effects of false-positive and false-negative error in egocentric networks on both standard network measures and on linear models through simulation analysis on a ground truth egocentric network sample based on Facebook friendships. Results show that 5-20% error rates, which are consistent with error rates known to occur in ego network data, can cause serious misestimation of network properties and regression parameters. PMID:23878412
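
    In the spirit of the simulation analysis described above, the sketch below injects false-positive and false-negative ties into a synthetic alter-alter adjacency matrix and watches a standard measure (density) drift. The network size, tie probability, and error rates are illustrative assumptions, not the paper's ground-truth sample.

```python
# Inject random tie errors into an ego network and measure the density bias.
import numpy as np

rng = np.random.default_rng(3)
n = 20                                             # alters in the ego network
A = np.triu(rng.random((n, n)) < 0.25, k=1)        # "true" alter-alter ties
A = A | A.T                                        # symmetrize

def perturb(A, fp=0.10, fn=0.10):
    upper = np.triu(np.ones_like(A, dtype=bool), k=1)
    flip_fp = upper & ~A & (rng.random(A.shape) < fp)   # spurious ties added
    flip_fn = upper & A & (rng.random(A.shape) < fn)    # real ties dropped
    B = A.copy()
    B[flip_fp] = True
    B[flip_fn] = False
    return B | B.T

def density(A):
    return A[np.triu_indices(A.shape[0], k=1)].mean()

obs = perturb(A)
print(f"true density {density(A):.3f} -> observed {density(obs):.3f}")
```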

  20. Random errors in egocentric networks

    PubMed Central

    Almquist, Zack W.

    2013-01-01

    The systematic errors that are induced by a combination of human memory limitations and common survey design and implementation have long been studied in the context of egocentric networks. Despite this, little if any work exists in the area of random error analysis on these same networks; this paper offers a perspective on the effects of random errors on egonet analysis, as well as the effects of using egonet measures as independent predictors in linear models. We explore the effects of false-positive and false-negative error in egocentric networks on both standard network measures and on linear models through simulation analysis on a ground truth egocentric network sample based on Facebook friendships. Results show that 5–20% error rates, which are consistent with error rates known to occur in ego network data, can cause serious misestimation of network properties and regression parameters. PMID:23878412

  1. 27 CFR 46.120 - Errors discovered on inspection.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Errors discovered on inspection. When a TTB officer discovers on a special tax stamp a material error in... amended return and an acceptable explanation for the error, the officer will make the proper correction on the stamp and return it to the taxpayer. However, if the error found by the TTB officer is on...

  2. 27 CFR 46.120 - Errors discovered on inspection.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Errors discovered on inspection. When a TTB officer discovers on a special tax stamp a material error in... amended return and an acceptable explanation for the error, the officer will make the proper correction on the stamp and return it to the taxpayer. However, if the error found by the TTB officer is on...

  3. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research on flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  4. Error-Related Psychophysiology and Negative Affect

    ERIC Educational Resources Information Center

    Hajcak, G.; McDonald, N.; Simons, R.F.

    2004-01-01

    The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…

  5. Diffusion of innovation I: Formulary acceptance rates of new drugs in teaching and non-teaching British Columbia hospitals--a hospital pharmacy perspective.

    PubMed

    D'Sa, M M; Hill, D S; Stratton, T P

    1994-12-01

    Lag times in the diffusion of new drugs in the hospital setting have both patient care and pharmaceutical industry implications. This two-part series uses diffusion theory to examine differences in the adoption rates of new drugs in British Columbia teaching and non-teaching hospitals. Formulary addition of a new drug by a hospital's Pharmacy and Therapeutics Committee was considered the adoption indicator. Time for adoption was defined as the difference between a drug's Canadian market approval date and the date of formulary addition. Surveys were mailed in September 1990 to 41 hospital pharmacies (response rate = 88%), asking respondents to provide formulary inclusion dates of 29 drugs marketed between July 1987 and March 1990. A significant difference (Mann-Whitney U Test, p < 0.0358) in median adoption time was observed between the six teaching and 25 non-teaching study hospitals, with the former adopting a new drug in 7.5 months versus the latter adopting a new drug in 12.1 months. PMID:10139270

  6. Error analysis in laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Gantert, Walter A.; Tendick, Frank; Bhoyrul, Sunil; Tyrrell, Dana; Fujino, Yukio; Rangel, Shawn; Patti, Marco G.; Way, Lawrence W.

    1998-06-01

    Iatrogenic complications in laparoscopic surgery, as in any field, stem from human error. In recent years, cognitive psychologists have developed theories for understanding and analyzing human error, and the application of these principles has decreased error rates in the aviation and nuclear power industries. The purpose of this study was to apply error analysis to laparoscopic surgery and evaluate its potential for preventing complications. Our approach is based on James Reason's framework, using a classification of errors according to three performance levels: at the skill-based performance level, slips are caused by attention failures, and lapses result from memory failures. Rule-based mistakes constitute the second level. Knowledge-based mistakes occur at the highest performance level and are caused by shortcomings in conscious processing. These errors committed by the performer 'at the sharp end' occur in typical situations which are often brought about by already built-in latent system failures. We present a series of case studies in laparoscopic surgery in which errors are classified and the influence of intrinsic failures and extrinsic system flaws is evaluated. Most serious technical errors in laparoscopic surgery stem from a rule-based or knowledge-based mistake triggered by cognitive underspecification due to incomplete or illusory visual input information. Error analysis in laparoscopic surgery should be able to improve human performance, and it should detect and help eliminate system flaws. Complication rates in laparoscopic surgery due to technical errors can thus be considerably reduced.

  7. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
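
    The burst and good-data-gap statistics described above reduce to run-length bookkeeping over a per-byte error-flag stream. The sketch below collects run lengths from a synthetic flag stream; real flags would come from the player's decoder hardware, which this sketch does not model.

```python
# Burst/gap run-length statistics from a 0/1 per-byte error-flag stream.
import numpy as np

def run_lengths(flags):
    """Return (burst_lengths, gap_lengths) for a 0/1 error-flag array."""
    flags = np.asarray(flags, dtype=int)
    change = np.flatnonzero(np.diff(flags)) + 1   # indices where the flag flips
    runs = np.split(flags, change)                # maximal constant runs
    bursts = [len(r) for r in runs if r[0] == 1]  # consecutive error bytes
    gaps = [len(r) for r in runs if r[0] == 0]    # consecutive good bytes
    return bursts, gaps

rng = np.random.default_rng(4)
flags = (rng.random(10_000) < 0.02).astype(int)   # synthetic 2% byte-error stream
bursts, gaps = run_lengths(flags)
print("bursts:", len(bursts), "mean burst length:", np.mean(bursts))
print("mean good-data gap:", np.mean(gaps))
```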

  8. Spatial frequency domain error budget

    SciTech Connect

    Hauschildt, H; Krulewich, D

    1998-08-27

    The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process, and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst case or RMS error on the workpiece. This procedure has limited ability to differentiate between low-spatial-frequency form errors versus high-frequency surface finish errors. Therefore the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part. If the machine …
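
    The spatial-frequency budgeting idea can be sketched simply: if error sources are independent, their power spectral densities add, and the RMS error in any frequency band is the square root of the integrated PSD over that band. The two example source spectra below are invented for illustration, not taken from the paper.

```python
# Combine independent error-source PSDs and report band-limited RMS errors.
import numpy as np

f = np.linspace(0.01, 10.0, 1000)             # spatial frequency (1/mm)
psd_form = 1e-3 / (1 + (f / 0.05) ** 2)       # low-frequency form error (invented)
psd_finish = 2e-6 * np.ones_like(f)           # high-frequency finish error (invented)
psd_total = psd_form + psd_finish             # independent sources add in power

def band_rms(f, psd, lo, hi):
    m = (f >= lo) & (f <= hi)
    return np.sqrt(np.trapz(psd[m], f[m]))    # RMS = sqrt of integrated PSD

print("form-band RMS  :", band_rms(f, psd_total, 0.01, 0.1))
print("finish-band RMS:", band_rms(f, psd_total, 1.0, 10.0))
```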

  9. The Roles of Verb Semantics, Entrenchment, and Morphophonology in the Retreat from Dative Argument-Structure Overgeneralization Errors

    ERIC Educational Resources Information Center

    Ambridge, Ben; Pine, Julian M.; Rowland, Caroline F.; Chang, Franklin

    2012-01-01

    Children (aged five-to-six and nine-to-ten years) and adults rated the acceptability of well-formed sentences and argument-structure overgeneralization errors involving the prepositional-object and double-object dative constructions (e.g. "Marge pulled the box to Homer/*Marge pulled Homer the box"). In support of the entrenchment hypothesis, a…

  10. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio needed for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.

  11. Error coding simulations in C

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1994-10-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio needed for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.

  12. Study of Uncertainties of Predicting Space Shuttle Thermal Environment. [impact of heating rate prediction errors on weight of thermal protection system

    NASA Technical Reports Server (NTRS)

    Fehrman, A. L.; Masek, R. V.

    1972-01-01

    Quantitative estimates of the uncertainty in predicting aerodynamic heating rates for a fully reusable space shuttle system are developed and the impact of these uncertainties on Thermal Protection System (TPS) weight are discussed. The study approach consisted of statistical evaluations of the scatter of heating data on shuttle configurations about state-of-the-art heating prediction methods to define the uncertainty in these heating predictions. The uncertainties were then applied as heating rate increments to the nominal predicted heating rate to define the uncertainty in TPS weight. Separate evaluations were made for the booster and orbiter, for trajectories which included boost through reentry and touchdown. For purposes of analysis, the vehicle configuration is divided into areas in which a given prediction method is expected to apply, and separate uncertainty factors and corresponding uncertainty in TPS weight derived for each area.

  13. Estimates of rates and errors for measurements of direct-γ and direct-γ + jet production by polarized protons at RHIC

    SciTech Connect

    Beddo, M.E.; Spinka, H.; Underwood, D.G.

    1992-08-14

    Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.

  14. Improving Correct and Error Rate and Reading Comprehension Using Key Words and Previewing: A Case Report with a Language Minority Student.

    ERIC Educational Resources Information Center

    O'Donnell, Patricia; Weber, Kimberly P.; McLaughlin, T. F.

    2003-01-01

    The effects of key words and previewing on the rate of words read correctly and the reading comprehension of a language minority student (age 10) were analyzed. The student read more words correctly and answered more comprehension questions accurately after the material was previewed and key words were discussed. (Contains references.) (Author/CR)

  15. Perceptions of Social Behavior and Peer Acceptance in Kindergarten.

    ERIC Educational Resources Information Center

    Phillipsen, Leslie C.; Bridges, Sara K.; McLemore, T. Gayle; Saponaro, Lisa A.

    1999-01-01

    Used social behavior ratings from observers, teachers, and parents to predict kindergartners' perceptions of peer acceptance. Found that friendship skill predicted parent- and child-reported peer acceptance. Shyness/withdrawal inversely predicted teacher-reported peer acceptance. Aggression did not predict peer acceptance. Girls were rated as more…

  16. How does human error affect safety in anesthesia?

    PubMed

    Gravenstein, J S

    2000-01-01

    Anesthesia morbidity and mortality, while acceptable, are not zero. Most mishaps have a multifactorial cause in which human error plays a significant part. Good design of anesthesia machines, ventilators, and monitors can prevent some, but not all, human error. Attention to the system in which the errors occur is important. Modern training with simulators is designed to reduce the frequency of human errors and to teach anesthesiologists how to deal with the consequences of such errors. PMID:10601526

  17. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
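
    A hedged sketch of the modular idea as the abstract describes it: replace each host value with the nearest value congruent to the auxiliary digit modulo M, which perturbs the host less than blunt low-order-bit replacement. The parameters below are simplified, and the patented method's keyed permutation of the processing order is omitted.

```python
# Modular embedding: hide base-M digits in host samples with minimal perturbation.
import numpy as np

def embed(host, digits, M=4):
    host = host.astype(int)
    delta = (digits - host) % M                         # required residue shift, 0..M-1
    delta = np.where(delta > M // 2, delta - M, delta)  # take the nearest congruent value
    # Note: clipping to the valid sample range (e.g., 0..255) is omitted for brevity.
    return host + delta

def extract(stego, M=4):
    return stego % M                                    # recover the hidden digits

rng = np.random.default_rng(5)
host = rng.integers(0, 256, size=16)                    # e.g., 8-bit image samples
digits = rng.integers(0, 4, size=16)                    # 2 bits of payload per sample
stego = embed(host, digits, M=4)
assert np.all(extract(stego, 4) == digits)
print("max per-sample change:", np.max(np.abs(stego - host)))  # <= M/2
```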

  18. Acceptability of Treatments for Plagiarism

    ERIC Educational Resources Information Center

    Carter, Stacy L.; Punyanunt-Carter, Narissra Maria

    2007-01-01

    This study focused on various treatments for addressing incidents of plagiarism by college students. College students rated the acceptability of different responses by college faculty to a case description of a college student who engaged in plagiarism. The findings revealed that students found some methods of addressing this problem behavior by…

  19. A general aviation simulator evaluation of a rate-enhanced instrument landing system display

    NASA Technical Reports Server (NTRS)

    Hinton, D. A.

    1981-01-01

    A piloted-simulation study was conducted to evaluate the effect on instrument landing system tracking performance of integrating localizer-error rate with raw localizer and glide-slope error. The display was named the pseudocommand tracking indicator (PCTI) because it provides an indication of the change of heading required to track the localizer center line. Eight instrument-rated pilots each flew five instrument approaches with the PCTI and five instrument approaches with a conventional course deviation indicator. The results show good overall pilot acceptance of the display, a significant improvement in localizer tracking error, and no significant changes in glide-slope tracking error or pilot workload.

  20. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  1. An investigation of error correcting techniques for OMV data

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Fryer, John

    1992-01-01

    Papers on the following topics are presented: considerations of testing the Orbital Maneuvering Vehicle (OMV) system with CLASS; OMV CLASS test results (first go around); equivalent system gain available from R-S encoding versus a desire to lower the power amplifier from 25 watts to 20 watts for OMV; command word acceptance/rejection rates for OMV; a memo concerning energy-to-noise ratio for the Viterbi-BSC Channel and the impact of Manchester coding loss; and an investigation of error correcting techniques for OMV and Advanced X-ray Astrophysics Facility (AXAF).

  2. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., in accordance with 42 CFR 440 to 484.55 of the Code of Federal Regulations that are applicable to...) Logic edit errors. (vii) Data entry errors. (viii) Managed care rate cell errors. (ix) Managed...

  3. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., in accordance with 42 CFR 440 to 484.55 of the Code of Federal Regulations that are applicable to...) Logic edit errors. (vii) Data entry errors. (viii) Managed care rate cell errors. (ix) Managed...

  4. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., in accordance with 42 CFR 440 to 484.55 of the Code of Federal Regulations that are applicable to...) Logic edit errors. (vii) Data entry errors. (viii) Managed care rate cell errors. (ix) Managed...

  5. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., in accordance with 42 CFR 440 to 484.55 of the Code of Federal Regulations that are applicable to...) Logic edit errors. (vii) Data entry errors. (viii) Managed care rate cell errors. (ix) Managed...

  6. Barriers to Medical Error Reporting

    PubMed Central

    Poorolajal, Jalal; Rezaie, Shirin; Aghighi, Negar

    2015-01-01

    Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals affiliated with Hamadan University of Medical Sciences in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), those aged 40-50 years (67.6%), less-experienced personnel (58.7%), those with an MSc degree (87.5%), and staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors, which may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement. PMID:26605018

  7. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by the generation of not only a single functional but through the construction of a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heats of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.

  8. Remediating Common Math Errors.

    ERIC Educational Resources Information Center

    Wagner, Rudolph F.

    1981-01-01

    Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)

  9. Error detection and correction in an optoelectronic memory system

    NASA Astrophysics Data System (ADS)

    Hofmann, Robert; Pandey, Madhulima; Levitan, Steven P.; Chiarulli, Donald M.

    1998-11-01

    This paper describes the implementation of error detection and correction logic in the optoelectronic cache memory prototype at the University of Pittsburgh. In this project, our goal is to integrate a 3-D optical memory directly into the memory hierarchy of a personal computer. As with any optical storage system, error correction is essential to maintaining acceptable system performance. We have implemented a fully pipelined, real time decoder for 60-bit Spectral Reed-Solomon code words. The decoder is implemented in reconfigurable logic, using a single Xilinx 4000-series FPGA per code word, and is fully scalable using multiple FPGAs. The current implementation operates at 33 MHz, and processes two code words in parallel per clock cycle for an aggregate data rate of 4 Gb/s. We present a brief overview of the project and of Spectral Reed-Solomon codes followed by a description of our implementation and performance data.

  10. A cascaded coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Kasami, T.; Lin, S.

    1985-01-01

    A cascaded coding scheme for error control was investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are studied which seem to be quite suitable for satellite down-link error control.
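
    As a hedged illustration of why such concatenation can reach extremely high reliability, the sketch below computes the word-error probability of a hypothetical outer (n, k) Reed-Solomon code, assuming the inner decoder leaves independent symbol errors at rate p; the parameters are illustrative and not taken from the report.

      # An outer (n, k) Reed-Solomon code corrects up to t = (n - k) // 2
      # symbol errors; the word fails only if more than t symbols are corrupted.
      from math import comb

      def outer_word_error(n, k, p):
          t = (n - k) // 2
          return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                     for i in range(t + 1, n + 1))

      # Even a 1% residual symbol error rate out of the inner decoder is
      # driven down to roughly 1e-9 by an RS(255, 223) outer code (t = 16).
      print(outer_word_error(255, 223, 1e-2))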

  11. Dynamic errors in a tuned flexure-mounted strapdown gyro

    NASA Technical Reports Server (NTRS)

    Bortz, J. E., Sr.

    1972-01-01

    Motion induced errors in a tuned, flexure-mounted strapdown gyro are investigated. Analytic expressions are developed for errors induced by linear vibrations, angular motion, and detuning. Sensor-level errors (gyro drift rate) and system-level errors (navigation errors) that are stimulated by an actual dynamic motion environment are computed.

  12. Dependence of the bit error rate on the signal power and length of a single-channel coherent single-span communication line (100 Gbit s⁻¹) with polarisation division multiplexing

    NASA Astrophysics Data System (ADS)

    Gurkin, N. V.; Konyshev, V. A.; Nanii, O. E.; Novikov, A. G.; Treshchikov, V. N.; Ubaydullaev, R. R.

    2015-01-01

    We have studied experimentally and using numerical simulations and a phenomenological analytical model the dependences of the bit error rate (BER) on the signal power and length of a coherent single-span communication line with transponders employing polarisation division multiplexing and four-level phase modulation (100 Gbit s⁻¹ DP-QPSK format). In comparing the data of the experiment, numerical simulations and theoretical analysis, we have found two optimal powers: the power at which the BER is minimal and the power at which the fade margin in the line is maximal. We have derived and analysed the dependences of the BER on the optical signal power at the fibre line input and the dependence of the admissible input signal power range for implementation of the communication lines with a length from 30-50 km up to a maximum length of 250 km.

  13. Dependence of the bit error rate on the signal power and length of a single-channel coherent single-span communication line (100 Gbit s⁻¹) with polarisation division multiplexing

    SciTech Connect

    Gurkin, N V; Konyshev, V A; Novikov, A G; Treshchikov, V N; Ubaydullaev, R R

    2015-01-31

    We have studied experimentally and using numerical simulations and a phenomenological analytical model the dependences of the bit error rate (BER) on the signal power and length of a coherent single-span communication line with transponders employing polarisation division multiplexing and four-level phase modulation (100 Gbit s⁻¹ DP-QPSK format). In comparing the data of the experiment, numerical simulations and theoretical analysis, we have found two optimal powers: the power at which the BER is minimal and the power at which the fade margin in the line is maximal. We have derived and analysed the dependences of the BER on the optical signal power at the fibre line input and the dependence of the admissible input signal power range for implementation of the communication lines with a length from 30-50 km up to a maximum length of 250 km. (optical transmission of information)
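
    The existence of an optimum launch power can be reproduced with a common phenomenological model in which linear amplifier noise competes with nonlinear interference that grows as the cube of the power; the sketch below uses hypothetical constants and a generic QPSK-style BER mapping, not the authors' model or data.

      # Effective SNR = P / (N_ase + eta * P**3): the cubic term models
      # nonlinear interference, so BER(P) has a single minimum.
      import numpy as np
      from scipy.special import erfc

      N_ase, eta = 1e-3, 1e-2             # hypothetical noise constants
      P = np.linspace(0.05, 3.0, 500)     # launch power, mW
      snr = P / (N_ase + eta * P**3)
      ber = 0.5 * erfc(np.sqrt(snr / 2))  # illustrative QPSK-like mapping

      print("optimum power ~", round(P[np.argmin(ber)], 2), "mW")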

  14. Ligation errors in DNA computing.

    PubMed

    Aoi, Y; Yoshinobu, T; Tanizawa, K; Kinoshita, K; Iwasaki, H

    1999-10-01

    DNA computing is a novel method of computing proposed by Adleman (1994), in which the data is encoded in the sequences of oligonucleotides. Massively parallel reactions between oligonucleotides are expected to make it possible to solve huge problems. In this study, reliability of the ligation process employed in the DNA computing is tested by estimating the error rate at which wrong oligonucleotides are ligated. Ligation of wrong oligonucleotides would result in a wrong answer in the DNA computing. The dependence of the error rate on the number of mismatches between oligonucleotides and on the combination of bases is investigated. PMID:10636043

  15. Specific Impulse and Mass Flow Rate Error

    NASA Technical Reports Server (NTRS)

    Gregory, Don A.

    2005-01-01

    Specific impulse is defined in words in many ways. Very early in any text on rocket propulsion, a phrase similar to "specific impulse is the thrust force per unit propellant weight flow per second" will be found (2). It is only after seeing the mathematics written down that the definition means something physically to the scientists and engineers responsible for either measuring it or using someone's value for it.
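
    For reference, the standard defining relation is Isp = F / (mdot * g0); the short sketch below (with hypothetical measured values) shows how a fractional error in the measured mass flow rate maps, to first order, into an equal and opposite fractional error in the inferred specific impulse.

      # Isp = F / (mdot * g0): thrust per unit propellant weight flow.
      g0 = 9.80665                    # standard gravity, m/s^2

      def isp(thrust_N, mdot_kg_s):
          return thrust_N / (mdot_kg_s * g0)

      F, mdot = 10000.0, 3.0          # hypothetical measured values
      print(isp(F, mdot))             # ~340 s
      print(isp(F, 1.02 * mdot))      # +2% flow-rate error -> ~2% lower Isp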

  16. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.
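
    The fitting step common to randomized-benchmarking-style protocols can be sketched as follows: the population remaining in the subspace of interest is fitted to an exponential decay in sequence length, from which a leakage rate is inferred. This is a generic sketch under simple assumptions, not the authors' exact protocol.

      # Fit p(m) = A + B * lam**m to subspace-population data versus
      # sequence length m; (1 - lam) sets the per-step leakage scale.
      import numpy as np
      from scipy.optimize import curve_fit

      def model(m, A, B, lam):
          return A + B * lam**m

      m = np.arange(1, 101)
      rng = np.random.default_rng(0)
      data = model(m, 0.4, 0.6, 0.99) + rng.normal(0, 0.005, m.size)  # synthetic

      (A, B, lam), _ = curve_fit(model, m, data, p0=[0.5, 0.5, 0.95])
      print("fitted decay parameter:", lam)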

  17. ATLAS ACCEPTANCE TEST

    SciTech Connect

    J.C. COCHRANE; J.V. PARKER; ET AL

    2001-06-01

    The acceptance test program for Atlas, a 23 MJ pulsed power facility for use in the Los Alamos High Energy Density Hydrodynamics program, has been completed. Completion of this program officially releases Atlas from the construction phase and readies it for experiments. Details of the acceptance test program results and of machine capabilities for experiments will be presented.

  18. Correlates of Halo Error in Teacher Evaluation.

    ERIC Educational Resources Information Center

    Moritsch, Brian G.; Suter, W. Newton

    1988-01-01

    An analysis of 300 undergraduate psychology student ratings of teachers was undertaken to assess the magnitude of halo error and a variety of rater, ratee, and course characteristics. The raters' halo errors were significantly related to student effort in the course, previous experience with the instructor, and class level. (TJH)

  19. Impact of Measurement Error on Synchrophasor Applications

    SciTech Connect

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  20. Acceptance threshold hypothesis is supported by chemical similarity of cuticular hydrocarbons in a stingless bee, Melipona asilvai.

    PubMed

    Nascimento, D L; Nascimento, F S

    2012-11-01

    The ability to discriminate nestmates from non-nestmates in insect societies is essential to protect colonies from conspecific invaders. The acceptance threshold hypothesis predicts that organisms whose recognition systems cannot classify recipients without error should optimize the balance between acceptance and rejection errors. In this process, cuticular hydrocarbons play an important role as cues of recognition in social insects. The aims of this study were to determine whether guards exhibit a restrictive level of rejection towards chemically distinct individuals, becoming more permissive during encounters with either nestmate or non-nestmate individuals bearing chemically similar profiles. The study demonstrates that Melipona asilvai (Hymenoptera: Apidae: Meliponini) guards exhibit a flexible system of nestmate recognition according to the degree of chemical similarity between the incoming forager and its own cuticular hydrocarbons profile. Guards became less restrictive in their acceptance rates when they encountered non-nestmates with highly similar chemical profiles, which they probably mistook for nestmates, hence broadening their acceptance level. PMID:23053920

  1. Correcting for Sequencing Error in Maximum Likelihood Phylogeny Inference

    PubMed Central

    Kuhner, Mary K.; McGill, James

    2014-01-01

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. PMID:25378476
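
    The essence of the tested correction is to fold the sequencing error rate into each site's likelihood rather than trusting observed bases outright; a minimal sketch of that idea (not the authors' implementation) follows, assuming a uniform error model in which a mistyped base is reported as any of the three alternatives with equal probability.

      # P(observed | true) = 1 - eps on a match, eps / 3 otherwise; the site
      # likelihood averages this over the tree's probabilities for the true base.
      def site_likelihood(prior_true, observed, eps):
          return sum(p * ((1 - eps) if base == observed else eps / 3)
                     for base, p in prior_true.items())

      prior = {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1}  # hypothetical tree output
      print(site_likelihood(prior, "A", 0.0))           # 0.700, error ignored
      print(site_likelihood(prior, "A", 0.01))          # 0.694, error modelled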

  2. Learner Error, Affectual Stimulation, and Conceptual Change

    ERIC Educational Resources Information Center

    Allen, Michael

    2010-01-01

    Pupils' expectation-related errors oppose the development of an appropriate scientific attitude towards empirical evidence and the learning of accepted science content, representing a hitherto neglected area of research in science education. In spite of these apparent drawbacks, a pedagogy is described that "encourages" pupils to allow their…

  3. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds

    NASA Astrophysics Data System (ADS)

    Xiong, B.; Oude Elberink, S.; Vosselman, G.

    2014-07-01

    In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.

  4. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    NASA Astrophysics Data System (ADS)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  5. Acceptance, values, and probability.

    PubMed

    Steel, Daniel

    2015-10-01

    This essay makes a case for regarding personal probabilities used in Bayesian analyses of confirmation as objects of acceptance and rejection. That in turn entails that personal probabilities are subject to the argument from inductive risk, which aims to show non-epistemic values can legitimately influence scientific decisions about which hypotheses to accept. In a Bayesian context, the argument from inductive risk suggests that value judgments can influence decisions about which probability models to accept for likelihoods and priors. As a consequence, if the argument from inductive risk is sound, then non-epistemic values can affect not only the level of evidence deemed necessary to accept a hypothesis but also degrees of confirmation themselves. PMID:26386533

  6. Newbery Medal Acceptance.

    ERIC Educational Resources Information Center

    Freedman, Russell

    1988-01-01

    Presents the Newbery Medal acceptance speech of Russell Freedman, writer of children's nonfiction. Discusses the place of nonfiction in the world of children's literature, the evolution of children's biographies, and the author's work on "Lincoln." (ARH)

  7. Newbery Medal Acceptance.

    ERIC Educational Resources Information Center

    Cleary, Beverly

    1984-01-01

    Reprints the text of Ms. Cleary's Newbery medal acceptance speech in which she gives personal history concerning her development as a writer and her response to the letters she receives from children. (CRH)

  8. Caldecott Medal Acceptance.

    ERIC Educational Resources Information Center

    Provensen, Alice; Provensen, Martin

    1984-01-01

    Reprints the text of the Provensens' Caldecott medal acceptance speech in which they describe their early interest in libraries and literature, the collaborative aspect of their work, and their current interest in aviation. (CRH)

  9. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  10. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine. 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  11. Detection and avoidance of errors in computer software

    NASA Technical Reports Server (NTRS)

    Kinsler, Les

    1989-01-01

    The acceptance test errors of a computer software project were examined to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project is approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during the acceptance testing. These acceptance test errors were first categorized into methods of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The number of programming errors at the beginning of acceptance testing can be significantly reduced. The results of the existing development methodology are examined for ways of improvement. A basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness in avoiding and detecting errors.

  12. Immediate error correction process following sleep deprivation.

    PubMed

    Hsieh, Shulan; Cheng, I-Chen; Tsai, Ling-Ling

    2007-06-01

    Previous studies have suggested that one night of sleep deprivation decreases frontal lobe metabolic activity, particularly in the anterior cingulate cortex (ACC), resulting in decreased performance in various executive function tasks. This study thus attempted to address whether sleep deprivation impaired the executive function of error detection and error correction. Sixteen young healthy college students (seven women, nine men, with ages ranging from 18 to 23 years) participated in this study. Participants performed a modified letter flanker task and were instructed to make immediate error corrections on detecting performance errors. Event-related potentials (ERPs) during the flanker task were obtained using a within-subject, repeated-measure design. The error negativity or error-related negativity (Ne/ERN) and the error positivity (Pe) seen immediately after errors were analyzed. The results show that the amplitude of the Ne/ERN was reduced significantly following sleep deprivation. Reduction also occurred for error trials with subsequent correction, indicating that sleep deprivation influenced error correction ability. This study further demonstrated that the impairment in immediate error correction following sleep deprivation was confined to specific stimulus types, with both Ne/ERN and behavioral correction rates being reduced only for trials in which flanker stimuli were incongruent with the target stimulus, while the response to the target was compatible with that of the flanker stimuli following sleep deprivation. The results thus warrant future systematic investigation of the interaction between stimulus type and error correction following sleep deprivation. PMID:17542943

  13. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  14. A simulator evaluation of a rate-enhanced instrument landing system display

    NASA Technical Reports Server (NTRS)

    Hinton, D. A.

    1983-01-01

    A piloted simulation study was conducted to evaluate the effect on instrument landing system tracking performance of integrating localizer error rate information with the raw localizer error display. The resulting display was named the pseudo command tracking indicator (PCTI) because it provides an indication of any changes of heading required to track the localizer. Eight instrument-rated pilots each flew five instrument approaches with the PCTI and five instrument approaches with a conventional course deviation indicator. The results show good overall pilot acceptance of the PCTI and a significant reduction in localizer tracking error.

  15. Numerical Simulation of Coherent Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, Mark

    A major goal in quantum computation is the implementation of error correction to produce a logical qubit with an error rate lower than that of the underlying physical qubits. Recent experimental progress demonstrates physical qubits can achieve error rates sufficiently low for error correction, particularly for codes with relatively high thresholds such as the surface code and color code. Motivated by experimental capabilities of neutral atom systems, we use numerical simulation to investigate whether coherent error correction can be effectively used with the 7-qubit color code. The results indicate that coherent error correction does not work at the 10-qubit level in neutral atom array quantum computers. By adding more qubits there is a possibility of making the encoding circuits fault-tolerant which could improve performance.

  16. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  17. Refractive errors in children.

    PubMed

    Tongue, A C

    1987-12-01

    Optical correction of refractive errors in infants and young children is indicated when the refractive errors are sufficiently large to cause unilateral or bilateral amblyopia, if they are impairing the child's ability to function normally, or if the child has accommodative strabismus. Screening for refractive errors is important and should be performed as part of the annual physical examination in all verbal children. Screening for significant refractive errors in preverbal children is more difficult; however, the red reflex test of Bruckner is useful for the detection of anisometropic refractive errors. The photorefraction test, which is an adaptation of Bruckner's red reflex test, may prove to be a useful screening device for detecting bilateral as well as unilateral refractive errors. Objective testing as well as subjective testing enables ophthalmologists to prescribe proper optical correction for refractive errors for infants and children of any age. PMID:3317238

  18. Error-prone signalling.

    PubMed

    Johnstone, R A; Grafen, A

    1992-06-22

    The handicap principle of Zahavi is potentially of great importance to the study of biological communication. Existing models of the handicap principle, however, make the unrealistic assumption that communication is error free. It seems possible, therefore, that Zahavi's arguments do not apply to real signalling systems, in which some degree of error is inevitable. Here, we present a general evolutionarily stable strategy (ESS) model of the handicap principle which incorporates perceptual error. We show that, for a wide range of error functions, error-prone signalling systems must be honest at equilibrium. Perceptual error is thus unlikely to threaten the validity of the handicap principle. Our model represents a step towards greater realism, and also opens up new possibilities for biological signalling theory. Concurrent displays, direct perception of quality, and the evolution of 'amplifiers' and 'attenuators' are all probable features of real signalling systems, yet handicap models based on the assumption of error-free communication cannot accommodate these possibilities. PMID:1354361

  19. Spacecraft and propulsion technician error

    NASA Astrophysics Data System (ADS)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  20. Error analysis using organizational simulation.

    PubMed Central

    Fridsma, D. B.

    2000-01-01

    Organizational simulations have been used by project organizations in civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded-rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when there is a high rate of unexpected events, the oncology fellow was differentially backlogged with work when compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding additional staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links", and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885

  1. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
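
    The quantization step described above reduces, in sketch form, to dividing each DCT coefficient by the corresponding quantization matrix entry and rounding; the perceptually optimized matrix is the substance of the invention and is not reproduced here, so the flat matrix below is purely a placeholder.

      # Quantize an 8x8 block's DCT coefficients: larger Q entries discard
      # more precision (and bits) in the corresponding frequency band.
      import numpy as np
      from scipy.fftpack import dct

      def dct2(block):
          return dct(dct(block, norm='ortho', axis=0), norm='ortho', axis=1)

      rng = np.random.default_rng(1)
      block = rng.integers(0, 256, (8, 8)).astype(float)
      Q = np.full((8, 8), 16.0)           # placeholder quantization matrix
      quantized = np.round(dct2(block) / Q)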

  2. Medication errors during hospital drug rounds.

    PubMed Central

    Ridge, K W; Jenkins, D B; Noyce, P R; Barber, N D

    1995-01-01

    Objective--To determine the nature and rate of drug administration errors in one National Health Service hospital. Design--Covert observational survey between January and April 1993 of drug rounds with intervention to stop drug administration errors reaching the patient. Setting--Two medical, two surgical, and two medicine for the elderly wards in a former district general hospital, now a NHS trust hospital. Subjects--37 Nurses performing routine single nurse drug rounds. Main measures--Drug administration errors recorded by trained observers. Results--Seventy-four drug rounds were observed in which 115 errors occurred during 3312 drug administrations. The overall error rate was 3.5% (95% confidence interval 2.9% to 4.1%). Errors owing to omissions, because the drug had not been supplied or located or the prescription had not been seen, accounted for most (68%, 78) of the errors. Wrong doses accounted for 15% (17) errors, four of which were greater than the prescribed dose. The dose was given within two hours of the time indicated by the prescriber in 98.2% of cases. Conclusion--The observed rate of drug administration errors is too high. It might be reduced by a multidisciplinary review of practices in prescribing, supply, and administration of drugs. PMID:10156392
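
    The reported interval is consistent with a standard normal-approximation confidence interval for a proportion, as the short check below shows (the paper's exact interval method is not stated, so small rounding differences are expected).

      # 115 errors in 3312 observed administrations.
      from math import sqrt

      errors, n = 115, 3312
      p = errors / n
      hw = 1.96 * sqrt(p * (1 - p) / n)
      print(f"rate = {p:.1%}, 95% CI = ({p - hw:.1%}, {p + hw:.1%})")
      # -> rate = 3.5%, 95% CI = (2.8%, 4.1%), close to the reported 2.9%-4.1%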

  3. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  4. Slowing after Observed Error Transfers across Tasks

    PubMed Central

    Wang, Lijun; Pan, Weigang; Tan, Jinfeng; Liu, Congcong; Chen, Antao

    2016-01-01

    After committing an error, participants tend to perform more slowly. This phenomenon is called post-error slowing (PES). Although previous studies have explored the PES effect in the context of observed errors, the issue as to whether the slowing effect generalizes across tasksets remains unclear. Further, the generation mechanisms of PES following observed errors must be examined. To address the above issues, we employed an observation-execution task in three experiments. During each trial, participants were required to mentally observe the outcomes of their partners in the observation task and then to perform their own key-press according to the mapping rules in the execution task. In Experiment 1, the same tasksets were utilized in the observation task and the execution task, and three error rate conditions (20%, 50% and 80%) were established in the observation task. The results revealed that the PES effect after observed errors was obtained in all three error rate conditions, replicating and extending previous studies. In Experiment 2, distinct stimuli and response rules were utilized in the observation task and the execution task. The result pattern was the same as that in Experiment 1, suggesting that the PES effect after observed errors was a generic adjustment process. In Experiment 3, the response deadline was shortened in the execution task to rule out the ceiling effect, and two error rate conditions (50% and 80%) were established in the observation task. The PES effect after observed errors was still obtained in the 50% and 80% error rate conditions. However, the accuracy in the post-observed error trials was comparable to that in the post-observed correct trials, suggesting that the slowing effect and improved accuracy did not rely on the same underlying mechanism. Current findings indicate that the occurrence of PES after observed errors is not dependent on the probability of observed errors, consistent with the assumption of the cognitive control account
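
    Post-error slowing itself is conventionally scored as the mean reaction time on trials following an error minus that on trials following a correct response; the toy computation below uses hypothetical reaction times and is not the authors' scoring pipeline.

      # PES = mean RT after errors - mean RT after correct responses.
      import numpy as np

      rt = np.array([420, 430, 505, 435, 440, 500, 425])      # ms, hypothetical
      correct = np.array([1, 0, 1, 1, 0, 1, 1], dtype=bool)

      post_error = rt[1:][~correct[:-1]]
      post_correct = rt[1:][correct[:-1]]
      print("PES =", post_error.mean() - post_correct.mean(), "ms")  # 70.0 ms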

  5. Digital system detects binary code patterns containing errors

    NASA Technical Reports Server (NTRS)

    Muller, R. M.; Tharpe, H. M., Jr.

    1966-01-01

    System of square loop magnetic cores associated with code input registers to react to input code patterns by reference to a group of control cores in such a manner that errors are canceled and patterns containing errors are accepted for amplification and processing. This technique improves reception capabilities in PCM telemetry systems.

  6. SEU induced errors observed in microprocessor systems

    SciTech Connect

    Asenek, V.; Underwood, C.; Oldfield, M.; Velazco, R.; Rezgui, S.; Cheynet, P.; Ecoffet, R.

    1998-12-01

    In this paper, the authors present software tools for predicting the rate and nature of observable SEU induced errors in microprocessor systems. These tools are built around a commercial microprocessor simulator and are used to analyze real satellite application systems. Results obtained from simulating the nature of SEU induced errors are shown to correlate with ground-based radiation test data.

  7. Continuous error correction for Ising anyons

    NASA Astrophysics Data System (ADS)

    Hutter, Adrian; Wootton, James R.

    2016-04-01

    Quantum gates in topological quantum computation are performed by braiding non-Abelian anyons. These braiding processes can presumably be performed with very low error rates. However, to make a topological quantum computation architecture truly scalable, even rare errors need to be corrected. Error correction for non-Abelian anyons is complicated by the fact that it needs to be performed on a continuous basis, and further errors may occur while we are correcting existing ones. Here, we prove the feasibility of this task, establishing non-Abelian anyons as a viable platform for scalable quantum computation. We thereby focus on Ising anyons as the most prominent example of non-Abelian anyons and show that for these a finite error rate can indeed be corrected continuously. There is a threshold error rate pc > 0 such that, for all error rates p < pc, the error per time step can be made exponentially small in the distance of a logical qubit.

  8. Likelihood-based genetic mark-recapture estimates when genotype samples are incomplete and contain typing errors.

    PubMed

    Macbeth, Gilbert M; Broderick, Damien; Ovenden, Jennifer R; Buckworth, Rik C

    2011-11-01

    Genotypes produced from samples collected non-invasively in harsh field conditions often lack the full complement of data from the selected microsatellite loci. The application to genetic mark-recapture methodology in wildlife species can therefore be prone to misidentifications leading to both 'true non-recaptures' being falsely accepted as recaptures (Type I errors) and 'true recaptures' being undetected (Type II errors). Here we present a new likelihood method that allows every pairwise genotype comparison to be evaluated independently. We apply this method to determine the total number of recaptures by estimating and optimising the balance between Type I errors and Type II errors. We show through simulation that the standard error of recapture estimates can be minimised through our algorithms. Interestingly, the precision of our recapture estimates actually improved when we included individuals with missing genotypes, as this increased the number of pairwise comparisons potentially uncovering more recaptures. Simulations suggest that the method is tolerant to per locus error rates of up to 5% per locus and can theoretically work in datasets with as little as 60% of loci genotyped. Our methods can be implemented in datasets where standard mismatch analyses fail to distinguish recaptures. Finally, we show that by assigning a low Type I error rate to our matching algorithms we can generate a dataset of individuals of known capture histories that is suitable for the downstream analysis with traditional mark-recapture methods. PMID:21763337
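
    The flavour of the pairwise likelihood can be conveyed with a much-simplified model: if two samples really are from the same individual and each locus is mistyped with some probability, the number of mismatching loci is roughly binomial, and a pair is accepted as a recapture when this likelihood sufficiently outweighs a match-by-chance alternative. The sketch below uses that simplification with hypothetical numbers; the paper's full likelihood also handles missing loci.

      # Probability of k mismatching loci out of L compared loci when the two
      # samples come from the same individual, with per-locus error rate err.
      from math import comb

      def p_mismatch_same_individual(L, k, err):
          p = 2 * err * (1 - err)     # either sample's genotype mistyped
          return comb(L, k) * p**k * (1 - p)**(L - k)

      print(p_mismatch_same_individual(L=12, k=1, err=0.05))  # ~0.38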

  9. Four therapeutic diets: adherence and acceptability.

    PubMed

    Berkow, Susan E; Barnard, Neal; Eckart, Jill; Katcher, Heather

    2010-01-01

    Many health conditions are treated, at least in part, by therapeutic diets. Although the success of any intervention depends on its acceptability to the patient, the acceptability of therapeutic diets and factors that influence it have been largely neglected in nutrition research. A working definition of acceptability is proposed and an examination and summary are provided of available data on the acceptability of common diet regimens used for medical conditions. The goal is to suggest ways to improve the success of therapeutic diets. The proposed working definition of "acceptability" refers to the user's judgment of the advantages and disadvantages of a therapeutic diet (in relation to palatability, costs, and effects on eating behaviour and health) that influence the likelihood of adherence. Very low-calorie, reduced-fat omnivorous, vegetarian and vegan, and low-carbohydrate diets all achieve acceptability among the majority of users in studies of up to one year, in terms of attrition and adherence rates and results of questionnaires assessing eating behaviours. Longer studies are fewer, but they suggest that vegetarian, vegan, and reduced-fat diets are acceptable, as indicated by sustained changes in nutrient intake. Few studies of this length have been published for very low-calorie or low-carbohydrate diets. Long-term studies of adherence and acceptability of these and other therapeutic diets are warranted. PMID:21144137

  10. The acceptability of ending a patient's life

    PubMed Central

    Guedj, M; Gibert, M; Maudet, A; Munoz, S; Mullet, E; Sorum, P

    2005-01-01

    Objectives: To clarify how lay people and health professionals judge the acceptability of ending the life of a terminally ill patient. Design: Participants judged this acceptability in a set of 16 scenarios that combined four factors: the identity of the actor (patient or physician), the patient's statement or not of a desire to have his life ended, the nature of the action as relatively active (injecting a toxin) or passive (disconnecting life support), and the type of suffering (intractable physical pain, complete dependence, or severe psychiatric illness). Participants: 115 lay people and 72 health professionals (22 nurse's aides, 44 nurses, six physicians) in Toulouse, France. Main measurements: Mean acceptability ratings for each scenario for each group. Results: Life ending interventions are more acceptable to lay people than to the health professionals. For both, acceptability is highest for intractable physical suffering; is higher when patients end their own lives than when physicians do so; and, when physicians are the actors, is higher when patients have expressed a desire to die (voluntary euthanasia) than when they have not (involuntary euthanasia). In contrast, when patients perform the action, acceptability for the lay people and nurse's aides does not depend on whether the patient has expressed a desire to die, while for the nurses and physicians unassisted suicide is more acceptable than physician assisted suicide. Conclusions: Lay participants judge the acceptability of life ending actions in largely the same way as do healthcare professionals. PMID:15923476

  11. Accept or divert?

    PubMed

    Angelucci, P A

    1999-09-01

    Stretching scarce resources is more than a managerial issue. Should you accept the patient to an understaffed ICU or divert him to another facility? The intense "medical utility" controversy focuses on a situation that critical care nurses now face every day. PMID:10614370

  12. Approaches to acceptable risk

    SciTech Connect

    Whipple, C.

    1997-04-30

    Several alternative approaches to address the question "How safe is safe enough?" are reviewed and an attempt is made to apply the reasoning behind these approaches to the issue of acceptability of radiation exposures received in space. The approaches to the issue of the acceptability of technological risk described here are primarily analytical, and are drawn from examples in the management of environmental health risks. These include risk-based approaches, in which specific quantitative risk targets determine the acceptability of an activity, and cost-benefit and decision analysis, which generally focus on the estimation and evaluation of risks, benefits and costs, in a framework that balances these factors against each other. These analytical methods tend by their quantitative nature to emphasize the magnitude of risks, costs and alternatives, and to downplay other factors, especially those that are not easily expressed in quantitative terms, that affect acceptance or rejection of risk. Such other factors include the issues of risk perceptions and how and by whom risk decisions are made.

  13. 1984 Newbery Acceptance Speech.

    ERIC Educational Resources Information Center

    Cleary, Beverly

    1984-01-01

    This acceptance speech for an award honoring "Dear Mr. Henshaw," a book about feelings of a lonely child of divorce intended for eight-, nine-, and ten-year-olds, highlights children's letters to author. Changes in society that affect children, the inception of "Dear Mr. Henshaw," and children's reactions to books are highlighted. (EJS)

  14. Why was Relativity Accepted?

    NASA Astrophysics Data System (ADS)

    Brush, S. G.

    Historians of science have published many studies of the reception of Einstein's special and general theories of relativity. Based on a review of these studies, and my own research on the role of the light-bending prediction in the reception of general relativity, I discuss the role of three kinds of reasons for accepting relativity: (1) empirical predictions and explanations; (2) social-psychological factors; and (3) aesthetic-mathematical factors. According to the historical studies, acceptance was a three-stage process. First, a few leading scientists adopted the special theory for aesthetic-mathematical reasons. In the second stage, their enthusiastic advocacy persuaded other scientists to work on the theory and apply it to problems currently of interest in atomic physics. The special theory was accepted by many German physicists by 1910 and had begun to attract some interest in other countries. In the third stage, the confirmation of Einstein's light-bending prediction attracted much public attention and forced all physicists to take the general theory of relativity seriously. In addition to light-bending, the explanation of the advance of Mercury's perihelion was considered strong evidence by theoretical physicists. The American astronomers who conducted successful tests of general relativity became defenders of the theory. There is little evidence that relativity was 'socially constructed' but its initial acceptance was facilitated by the prestige and resources of its advocates.

  15. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    NASA Astrophysics Data System (ADS)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only the two metrics measure the characteristics of the probability distributions of modeling errors differently, but also the effects of these characteristics on the overall expected error are different. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
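
    For reference, the squared-error decomposition that the study takes as its baseline is the classical one below (stated here in standard form; the paper's contribution is the analogous, less tractable decomposition for absolute error):

      \mathbb{E}\big[(y - \hat{y})^2\big]
        = \underbrace{\big(\mathbb{E}[\hat{y}] - f(x)\big)^2}_{\text{bias}^2}
        + \underbrace{\operatorname{Var}(\hat{y})}_{\text{variance}}
        + \underbrace{\sigma^2}_{\text{noise}},
      \qquad y = f(x) + \varepsilon,\quad \mathbb{E}[\varepsilon] = 0,\quad \operatorname{Var}(\varepsilon) = \sigma^2 .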

  16. Burst error correction extensions for large Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Owsley, P.

    1990-01-01

    Reed Solomon codes are powerful error correcting codes that include some of the best random and burst correcting codes currently known. It is well known that an (n,k) Reed Solomon code can correct up to (n - k)/2 errors. Many applications utilizing Reed Solomon codes require corrections of errors consisting primarily of bursts. In this paper, it is shown that the burst correcting ability of Reed Solomon codes can be increased beyond (n - k)/2 with an acceptable probability of miscorrection.
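
    As a concrete illustration of the classical t = (n - k)/2 bound, the sketch below uses the third-party Python package reedsolo (an assumption of convenience, unrelated to the paper); the paper's extension of burst correction beyond this bound is not implemented here. The decode return format varies across reedsolo versions, so the code guards for both.

```python
# Demonstrating the classical t = (n - k)/2 correction bound with the
# third-party 'reedsolo' package (pip install reedsolo). This shows only
# the baseline bound, not the paper's burst-correction extension.
from reedsolo import RSCodec

nsym = 10                      # n - k parity symbols -> corrects t = 5 errors
rsc = RSCodec(nsym)

msg = b"acceptable error rates"
codeword = bytearray(rsc.encode(msg))

# Corrupt t = 5 symbols as one contiguous burst.
for i in range(3, 8):
    codeword[i] ^= 0xFF

decoded = rsc.decode(bytes(codeword))
# Recent reedsolo versions return a (message, full_codeword, errata) tuple;
# older versions return the message directly.
message = decoded[0] if isinstance(decoded, tuple) else decoded
assert bytes(message) == msg   # burst of 5 <= t: fully corrected

# A burst of t + 1 = 6 errors exceeds the guarantee and should raise
# reedsolo's ReedSolomonError (or, with low probability, miscorrect).
```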

  17. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
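
    A minimal sketch of the patent's idea, with an invented workload: run a deterministic, compute-heavy kernel intended to heat the processor, then compare its output against a reference run of the same algorithm. All function names are illustrative, and the comparison assumes bit-identical floating-point reproducibility on the unit under test.

```python
import hashlib
import numpy as np

def stress_workload(seed, iters=200):
    """Deterministic, compute-heavy kernel intended to load (and heat)
    the processor; a hardware fault anywhere in the run should perturb
    the final result."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((256, 256))
    for _ in range(iters):
        a = np.tanh(a @ a.T / 256.0)      # dense FLOP-heavy loop
    return hashlib.sha256(a.tobytes()).hexdigest()

# Reference output computed once (e.g. on known-good hardware); the same
# algorithm is then re-run on the unit under test and outputs compared.
# Assumes the same numpy/BLAS build so arithmetic is bit-reproducible.
reference = stress_workload(seed=42)
under_test = stress_workload(seed=42)

if under_test != reference:
    print("hardware error detected during run")
else:
    print("outputs match; no error observed")
```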

  18. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.
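
    A common way to flow a top-level allowable error down to assemblies is a root-sum-square (RSS) allocation, sketched below with invented numbers; a model error budget in the paper's sense would hold part of such a budget in reserve for model uncertainty, as the last entry hints.

```python
import math

# Illustrative RSS (root-sum-square) error budget flow-down. The
# allocations below are invented for the example, not from the paper.
top_level_requirement = 100.0          # e.g. allowable wavefront error, nm

allocations = {                        # fractional RSS allocation
    "primary mirror figure": 0.60,
    "alignment/metrology":   0.50,
    "thermal distortion":    0.45,
    "model uncertainty":     0.43,     # reserve held for model error
}

# Each assembly's allowable error is its fraction of the top level.
budget = {k: f * top_level_requirement for k, f in allocations.items()}

rss = math.sqrt(sum(v ** 2 for v in budget.values()))
for k, v in budget.items():
    print(f"{k:24s} {v:6.1f}")
print(f"{'RSS total':24s} {rss:6.1f}  (must not exceed "
      f"{top_level_requirement:.1f})")
```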

  19. TU-C-BRE-07: Quantifying the Clinical Impact of VMAT Delivery Errors Relative to Prior Patients’ Plans and Adjusted for Anatomical Differences

    SciTech Connect

    Stanhope, C; Wu, Q; Yuan, L; Liu, J; Hood, R; Yin, F; Adamson, J

    2014-06-15

    Purpose: There is increased interest in the Radiation Oncology Physics community regarding sensitivity of pre-treatment IMRT/VMAT QA to delivery errors. Consequently, tools mapping pre-treatment QA to the patient DVH have been developed. However, the quantity of plan degradation that is acceptable remains uncertain. Using DVHs adapted from prior patients' plans, we developed a technique to determine the magnitude of various delivery errors required to degrade a treatment plan to outside the clinically accepted range. Methods: DVHs for relevant organs at risk were adapted from a population of prior patients' plans using a machine learning algorithm to establish the clinically acceptable DVH range specific to the patient's anatomy. We applied this technique to six low-risk prostate cancer patients treated with single-arc VMAT and compared error-induced DVH changes to the adapted DVHs to determine the magnitude of error required to push the plan outside of the acceptable range. The procedure follows: (1) Errors (systematic and random shifts of MLCs, gantry-MLC desynchronization, dose rate fluctuations, etc.) were simulated and degraded DVHs calculated using the Varian Eclipse TPS. (2) Adapted DVHs and acceptable ranges for DVHs were established. (3) Relevant dosimetric indices and corresponding acceptable ranges were calculated from the DVHs. Key indices included NTCP (Lyman-Kutcher-Burman model) and QUANTEC's dose-volume objectives of V75Gy ≤ 0.15 for the rectum and V75Gy ≤ 0.25 for the bladder. Results: Degradations to the clinical plan became "unacceptable" for 19±29 mm and 1.9±2.0 mm systematic outward shifts of a single leaf and a leaf bank, respectively. All other simulated errors fell within the acceptable range. Conclusion: Utilizing machine learning and prior patients' plans one can predict a clinically acceptable range of DVH degradation for a specific patient. Comparing error-induced DVH degradations to this range, it is shown that single
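
    For readers unfamiliar with the NTCP index used above, the following is a worked sketch of the Lyman-Kutcher-Burman model on a toy DVH. The DVH bins and the parameter values (TD50, m, n) are illustrative assumptions taken loosely from the rectal-toxicity literature, not from this study.

```python
import numpy as np
from math import erf, sqrt

def gEUD(doses, volumes, n):
    """Generalized equivalent uniform dose from a differential DVH.
    doses in Gy, volumes as fractions summing to 1, n = volume parameter."""
    a = 1.0 / n
    return (np.sum(volumes * doses ** a)) ** (1.0 / a)

def ntcp_lkb(doses, volumes, TD50, m, n):
    """Lyman-Kutcher-Burman NTCP: Phi((gEUD - TD50) / (m * TD50))."""
    t = (gEUD(doses, volumes, n) - TD50) / (m * TD50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Toy rectum DVH (illustrative, not from the study): dose bins and the
# fraction of organ volume receiving each bin.
doses = np.array([20.0, 40.0, 60.0, 75.0])
volumes = np.array([0.40, 0.30, 0.20, 0.10])

# Generic rectum parameters (assumed; values such as TD50 ~ 76.9 Gy,
# m ~ 0.13, n ~ 0.09 appear in the literature for rectal bleeding).
print(f"NTCP = {ntcp_lkb(doses, volumes, TD50=76.9, m=0.13, n=0.09):.3%}")
```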

  20. The incidence of diagnostic error in medicine.

    PubMed

    Graber, Mark L

    2013-10-01

    A wide variety of research studies suggest that breakdowns in the diagnostic process result in a staggering toll of harm and patient deaths. These include autopsy studies, case reviews, surveys of patients and physicians, voluntary reporting systems, studies using standardised patients, second reviews, diagnostic testing audits and closed claims reviews. Although these different approaches provide important information and unique insights regarding diagnostic errors, each has limitations and none is well suited to establishing the incidence of diagnostic error in actual practice, or the aggregate rate of error and harm. We argue that being able to measure the incidence of diagnostic error is essential to enable research studies on diagnostic error, and to initiate quality improvement projects aimed at reducing the risk of error and harm. Three approaches appear most promising in this regard: (1) using 'trigger tools' to identify from electronic health records cases at high risk for diagnostic error; (2) using standardised patients (secret shoppers) to study the rate of error in practice; (3) encouraging both patients and physicians to voluntarily report errors they encounter, and facilitating this process. PMID:23771902

  1. Drug errors: consequences, mechanisms, and avoidance.

    PubMed

    Glavin, R J

    2010-07-01

    Medication errors are common throughout healthcare and result in significant human and financial cost. Prospective studies suggest that the error rate in anaesthesia is around one error in every 133 anaesthetics. There are several categories of medication error ranging from slips and lapses to fixation errors and deliberate violations. Violations may be more likely in organizations with a tendency to blame front-line workers, a tendency to deny the existence of latent conditions, and a blinkered pursuit of productivity indicators. In these organizations, borderline-tolerated conditions of use may occur which blur the distinction between safe and unsafe practice. Latent conditions will also make the error at the 'sharp end' more likely to result in actual patient harm. Several complementary strategies are proposed which may result in fewer medication errors. At the organizational level, developing a safety culture and promoting robust error reporting systems are key. The individual anaesthetist can play a part in this, setting an example to other members of the team in vigilance for errors, creating a safety climate with psychological safety, and reporting and learning from errors. PMID:20507858

  2. Acceptability of human risk.

    PubMed Central

    Kasperson, R E

    1983-01-01

    This paper has three objectives: to explore the nature of the problem implicit in the term "risk acceptability," to examine the possible contributions of scientific information to risk standard-setting, and to argue that societal response is best guided by considerations of process rather than formal methods of analysis. Most technological risks are not accepted but are imposed. There is also little reason to expect consensus among individuals on their tolerance of risk. Moreover, debates about risk levels are often at base debates over the adequacy of the institutions which manage the risks. Scientific information can contribute three broad types of analyses to risk-setting deliberations: contextual analysis, equity assessment, and public preference analysis. More effective risk-setting decisions will involve attention to the process used, particularly in regard to the requirements of procedural justice and democratic responsibility. PMID:6418541

  3. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

    This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human in the Loop (HITL) studies of SATS HVO and Baseline operations.

  4. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  5. Medical error and disclosure.

    PubMed

    White, Andrew A; Gallagher, Thomas H

    2013-01-01

    Errors occur commonly in healthcare and can cause significant harm to patients. Most errors arise from a combination of individual, system, and communication failures. Neurologists may be involved in harmful errors in any practice setting and should familiarize themselves with tools to prevent, report, and examine errors. Although physicians, patients, and ethicists endorse candid disclosure of harmful medical errors to patients, many physicians express uncertainty about how to approach these conversations. A growing body of research indicates physicians often fail to meet patient expectations for timely and open disclosure. Patients desire information about the error, an apology, and a plan for preventing recurrence of the error. To meet these expectations, physicians should participate in event investigations and plan thoroughly for each disclosure conversation, preferably with a disclosure coach. Physicians should also anticipate and attend to the ongoing medical and emotional needs of the patient. A cultural change towards greater transparency following medical errors is in motion. Substantial progress is still required, but neurologists can further this movement by promoting policies and environments conducive to open reporting, respectful disclosure to patients, and support for the healthcare workers involved. PMID:24182370

  6. Age and Acceptance of Euthanasia.

    ERIC Educational Resources Information Center

    Ward, Russell A.

    1980-01-01

    Study explores relationship between age (and sex and race) and acceptance of euthanasia. Women and non-Whites were less accepting because of religiosity. Among older people less acceptance was attributable to their lesser education and greater religiosity. Results suggest that quality of life in old age affects acceptability of euthanasia. (Author)

  7. The pathophysiology of medication errors: how and where they arise

    PubMed Central

    McDowell, Sarah E; Ferner, Harriet S; Ferner, Robin E

    2009-01-01

    Errors arise when an action is intended but not performed; errors that arise from poor planning or inadequate knowledge are characterized as mistakes; those that arise from imperfect execution of well-formulated plans are called slips when an erroneous act is committed and lapses when a correct act is omitted. Some tasks are intrinsically prone to error. Examples are tasks that are unfamiliar to the operator or performed under pressure. Tasks that require the calculation of a dosage or dilution are especially susceptible to error. The tasks of prescribing, preparation, and administration of medicines are complex, and are carried out within a complex system; errors can occur at each of many steps and the error rate for the overall process is therefore high. The error rate increases when health-care professionals are inexperienced, inattentive, rushed, distracted, fatigued, or depressed; orthopaedic surgeons and nurses may be more likely than other health-care professionals to make medication errors. Medication error rates in hospital are higher in paediatric departments and intensive care units than elsewhere. Rates of medication errors may be higher in very young or very old patients. Intravenous antibiotics are the drugs most commonly involved in medication errors in hospital; antiplatelet agents, diuretics, and non-steroidal anti-inflammatory drugs are most likely to account for ‘preventable admissions’. Computers effectively reduce the rates of easily counted errors. It is not clear whether they can save lives lost through rare but dangerous errors in the medication process. PMID:19594527

  8. Uncorrected refractive errors

    PubMed Central

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, results in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  10. Baby-Crying Acceptance

    NASA Astrophysics Data System (ADS)

    Martins, Tiago; de Magalhães, Sérgio Tenreiro

    A baby's crying is its most important means of communication. The cry monitoring performed by the devices developed so far does not ensure the complete safety of the child. These technological resources need to be coupled with means of communicating the results to the caregivers, which would involve digital processing of the information available in the cry. The survey carried out made it possible to understand the level of adoption, in the continental territory of Portugal, of a technology able to perform such digital processing. The Technology Acceptance Model (TAM) was used as the theoretical framework. The statistical analysis showed that there is a good probability of acceptance of such a system.

  11. High acceptance recoil polarimeter

    SciTech Connect

    The HARP Collaboration

    1992-12-05

    In order to detect neutrons and protons in the 50 to 600 MeV energy range and measure their polarization, an efficient, low-noise, self-calibrating device is being designed. This detector, known as the High Acceptance Recoil Polarimeter (HARP), is based on the recoil principle of proton detection from np → n′p′ or pp → p′p′ scattering (detected particles are the primed recoil particles) which intrinsically yields polarization information on the incoming particle. HARP will be commissioned to carry out experiments in 1994.

  12. Insulin use: preventable errors.

    PubMed

    2014-01-01

    Insulin is vital for patients with type 1 diabetes and useful for certain patients with type 2 diabetes. The serious consequences of insulin-related medication errors are overdose, resulting in severe hypoglycaemia, causing seizures, coma and even death; or underdose, resulting in hyperglycaemia and sometimes ketoacidosis. Errors associated with the preparation and administration of insulin are often reported, both outside and inside the hospital setting. These errors are preventable. By analysing reports from organisations devoted to medication error prevention and from poison control centres, as well as a few studies and detailed case reports of medication errors, various types of error associated with insulin use have been identified, especially in the hospital setting. Generally, patients know more about the practicalities of their insulin treatment than healthcare professionals with intermittent involvement. Medication errors involving insulin can occur at each step of the medication-use process: prescribing, data entry, preparation, dispensing and administration. When prescribing insulin, wrong-dose errors have been caused by the use of abbreviations, especially "U" instead of the word "units" (often resulting in a 10-fold overdose because the "U" is read as a zero), or by failing to write the drug's name correctly or in full. In electronic prescribing, the sheer number of insulin products is a source of confusion and, ultimately, wrong-dose errors, and often overdose. Prescribing, dispensing or administration software is rarely compatible with insulin prescriptions in which the dose is adjusted on the basis of the patient's subsequent capillary blood glucose readings, and can therefore generate errors. When preparing and dispensing insulin, a tuberculin syringe is sometimes used instead of an insulin syringe, leading to overdose. Other errors arise from confusion created by similar packaging, between different insulin products or between insulin and other

  13. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.

  14. Reduction in Hospital-Wide Clinical Laboratory Specimen Identification Errors following Process Interventions: A 10-Year Retrospective Observational Study

    PubMed Central

    Ning, Hsiao-Chen; Lin, Chia-Ni; Chiu, Daniel Tsun-Yee; Chang, Yung-Ta; Wen, Chiao-Ni; Peng, Shu-Yu; Chu, Tsung-Lan; Yu, Hsin-Ming; Wu, Tsu-Lan

    2016-01-01

    Background Accurate patient identification and specimen labeling at the time of collection are crucial steps in the prevention of medical errors, thereby improving patient safety. Methods All patient specimen identification errors that occurred in the outpatient department (OPD), emergency department (ED), and inpatient department (IPD) of a 3,800-bed academic medical center in Taiwan were documented and analyzed retrospectively from 2005 to 2014. To reduce such errors, the following series of strategies were implemented: a restrictive specimen acceptance policy for the ED and IPD in 2006; a computer-assisted barcode positive patient identification system for the ED and IPD in 2007 and 2010, and automated sample labeling combined with electronic identification systems introduced to the OPD in 2009. Results Of the 2,000,345 specimens collected in 2005, 1,023 (0.0511%) were identified as having patient identification errors, compared with 58 errors (0.0015%) among 3,761,238 specimens collected in 2014, after serial interventions; this represents a 97% relative reduction. The total numbers (rates) of institutional identification errors contributed by the ED, IPD, and OPD over the 10-year period were 423 (0.1058%), 556 (0.0587%), and 44 (0.0067%) errors before the interventions, and 3 (0.0007%), 52 (0.0045%) and 3 (0.0001%) after the interventions, representing relative reductions of 99%, 92% and 98%, respectively. Conclusions Accurate patient identification is a challenge of patient safety in different health settings. The data collected in our study indicate that a restrictive specimen acceptance policy, computer-generated positive identification systems, and interdisciplinary cooperation can significantly reduce patient identification errors. PMID:27494020

  15. A class of error estimators based on interpolating the finite element solutions for reaction-diffusion equations

    SciTech Connect

    Lin, T.; Wang, H.

    1995-12-31

    The swift improvement of computational capabilities enables us to apply finite element methods to simulate more and more problems arising from various applications. A fundamental question associated with finite element simulations is their accuracy. In other words, before we can make any decisions based on the numerical solutions, we must be sure that they are acceptable in the sense that their errors are within the given tolerances. Various estimators have been developed to assess the accuracy of finite element solutions, and they can be classified basically into two types: a priori error estimates and a posteriori error estimates. While a priori error estimates can give us asymptotic convergence rates of numerical solutions in terms of the grid size before the computations, they depend on certain Sobolev norms of the true solutions which are not known, in general. Therefore, it is difficult, if not impossible, to use a priori estimates directly to decide whether a numerical solution is acceptable or a finer partition (and so a new numerical solution) is needed. In contrast, a posteriori error estimates depend only on the numerical solutions, and they usually give computable quantities about the accuracy of the numerical solutions.
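
    A minimal illustration of a computable, a posteriori-style error estimate for a 1D reaction-diffusion model problem, using solution differences under grid refinement (a Richardson-type argument). Finite differences stand in for finite elements here, and this estimator is a generic stand-in rather than the interpolation-based class proposed by the authors.

```python
import numpy as np

# Model reaction-diffusion problem: -u'' + u = f on (0,1), u(0)=u(1)=0,
# manufactured so the exact solution is u(x) = sin(pi x).
def solve_fd(n):
    """Second-order finite-difference solve on n interior points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    f = (np.pi ** 2 + 1.0) * np.sin(np.pi * x)
    A = (np.diag(np.full(n, 2.0 / h ** 2 + 1.0))
         + np.diag(np.full(n - 1, -1.0 / h ** 2), 1)
         + np.diag(np.full(n - 1, -1.0 / h ** 2), -1))
    return x, np.linalg.solve(A, f)

x1, u1 = solve_fd(49)          # grid size h
x2, u2 = solve_fd(99)          # grid size h/2 (nests the coarse grid)

true_err = np.max(np.abs(u1 - np.sin(np.pi * x1)))
# Computable a posteriori-style indicator: coarse vs fine solution on the
# shared nodes. For a second-order method the difference is ~3/4 of the
# coarse-grid error (Richardson argument), hence the 4/3 scaling.
est_err = (4.0 / 3.0) * np.max(np.abs(u1 - u2[1::2]))
print(f"true error ~ {true_err:.2e}, estimated ~ {est_err:.2e}")
```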

  16. Acceptance threshold theory can explain occurrence of homosexual behaviour.

    PubMed

    Engel, Katharina C; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra

    2015-01-01

    Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off, trading the costs of erroneously accepting males against the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors. PMID:25631226
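
    The cost-dependent threshold shift can be made concrete with a small signal-detection-style sketch: given overlapping Gaussian cue distributions for females and males, the acceptance cutoff minimizing expected cost moves toward permissiveness as rejection errors become costlier. All distributions and costs below are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

# Reeve-style acceptance threshold sketch: overlapping recognition-cue
# distributions for females (desirable) and males (undesirable).
mu_f, mu_m, sd = 0.0, 1.0, 1.0     # male cues shifted but overlapping
p_female = 0.5                      # encounter probability

def expected_cost(threshold, cost_reject_female, cost_accept_male):
    """Accept any individual whose cue value is below the threshold."""
    reject_female = 1.0 - norm.cdf(threshold, mu_f, sd)   # miss a female
    accept_male = norm.cdf(threshold, mu_m, sd)           # accept a male
    return (p_female * cost_reject_female * reject_female
            + (1 - p_female) * cost_accept_male * accept_male)

grid = np.linspace(-3, 4, 1401)
for c_reject, c_accept in [(1.0, 5.0), (5.0, 1.0)]:
    costs = [expected_cost(t, c_reject, c_accept) for t in grid]
    best = grid[int(np.argmin(costs))]
    print(f"cost(reject female)={c_reject}, cost(accept male)={c_accept}"
          f" -> optimal threshold {best:+.2f}")

# Raising the cost of rejecting females pushes the threshold up (more
# permissive), producing more erroneous acceptances of males -- the
# pattern reported for the burying beetles.
```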

  17. Reporting of errors by healthcare professionals.

    PubMed

    2010-10-01

    The realisation that an error has been committed, and the courage to discuss it openly, opens the way to a constructive process to improve one's professional practices, in interaction with healthcare organisations. Reporting errors to adverse events programmes is influenced by the impact of errors on healthcare professionals and their fears about the outcome and disclosure. The low rate of spontaneous reporting results from the obstacles encountered by healthcare professionals and reflects their attitudes towards their own errors. The way in which individuals make errors and handle adverse events reveals a lot about their personality and how they view themselves as professionals. It is not easy to report errors and it depends on the individuals concerned. Healthcare professionals' "reflexivity" (their ability to reflect on their own actions) is an integral part of their professional skills; it is an essential resource for analysing errors and improving quality of care. Reporting an error to a programme such as Prescrire's Preventing the Preventable is a conscious, professional act. It is both lucid and responsible, and part of a commitment to improving professional practice and skills, at the individual and institutional level. Learning from errors in order to prevent them from happening again supports the development of a quality and safety culture that should be encouraged among healthcare professionals. PMID:21180385

  18. Motivation and semantic context affect brain error-monitoring activity: an event-related brain potentials study.

    PubMed

    Ganushchak, Lesya Y; Schiller, Niels O

    2008-01-01

    During speech production, we continuously monitor what we say. In situations in which speech errors potentially have more severe consequences, e.g. during a public presentation, our verbal self-monitoring system may pay more attention to preventing errors than in situations in which speech errors are more acceptable, such as a casual conversation. In an event-related potential study, we investigated whether or not motivation affected participants' performance using a picture naming task in a semantic blocking paradigm. Semantic context of to-be-named pictures was manipulated; blocks were semantically related (e.g., cat, dog, horse, etc.) or semantically unrelated (e.g., cat, table, flute, etc.). Motivation was manipulated independently by monetary reward. The motivation manipulation did not affect error rate during picture naming. However, the high-motivation condition yielded increased amplitude and latency values of the error-related negativity (ERN) compared to the low-motivation condition, presumably indicating higher monitoring activity. Furthermore, participants showed semantic interference effects in reaction times and error rates. The ERN amplitude was also larger during semantically related than unrelated blocks, presumably indicating that semantic relatedness induces more conflict between possible verbal responses. PMID:17920932

  19. Empirical Tests of Acceptance Sampling Plans

    NASA Technical Reports Server (NTRS)

    White, K. Preston, Jr.; Johnson, Kenneth L.

    2012-01-01

    Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that these provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
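
    For reference, the producer's and consumer's risks of a single attributes (binomial) sampling plan can be computed directly, as in the sketch below; the plan parameters and quality points are invented, not taken from the paper.

```python
from math import comb

def accept_prob(n, c, p):
    """Probability an attributes (binomial) plan accepts a lot with
    true defective fraction p: P[X <= c], X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Illustrative single-sampling plan (values assumed, not from the paper):
n, c = 80, 2                # inspect 80 items, accept if <= 2 defectives
AQL, LTPD = 0.01, 0.08      # producer's and consumer's quality points

alpha = 1 - accept_prob(n, c, AQL)   # Type I risk: reject a good lot
beta = accept_prob(n, c, LTPD)       # Type II risk: accept a bad lot
print(f"producer's risk alpha = {alpha:.3f}")
print(f"consumer's risk beta  = {beta:.3f}")
```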

  20. Facts about Refractive Errors

    MedlinePlus

    ... the lens can cause refractive errors. What is refraction? Refraction is the bending of light as it passes ... rays entering the eye, causing a more precise refraction or focus. In many cases, contact lenses provide ...

  1. Errors in prenatal diagnosis.

    PubMed

    Anumba, Dilly O C

    2013-08-01

    Prenatal screening and diagnosis are integral to antenatal care worldwide. Prospective parents are offered screening for common fetal chromosomal and structural congenital malformations. In most developed countries, prenatal screening is routinely offered in a package that includes ultrasound scan of the fetus and the assay in maternal blood of biochemical markers of aneuploidy. Mistakes can arise at any point of the care pathway for fetal screening and diagnosis, and may involve individual or corporate systemic or latent errors. Special clinical circumstances, such as maternal size, fetal position, and multiple pregnancy, contribute to the complexities of prenatal diagnosis and to the chance of error. Clinical interventions may lead to adverse outcomes not caused by operator error. In this review I discuss the scope of the errors in prenatal diagnosis, and highlight strategies for their prevention and diagnosis, as well as identify areas for further research and study to enhance patient safety. PMID:23725900

  2. Error mode prediction.

    PubMed

    Hollnagel, E; Kaarstad, M; Lee, H C

    1999-11-01

    The study of accidents ('human errors') has been dominated by efforts to develop 'error' taxonomies and 'error' models that enable the retrospective identification of likely causes. In the field of Human Reliability Analysis (HRA) there is, however, a significant practical need for methods that can predict the occurrence of erroneous actions--qualitatively and quantitatively. The present experiment tested an approach for qualitative performance prediction based on the Cognitive Reliability and Error Analysis Method (CREAM). Predictions of possible erroneous actions were made for operators using different types of alarm systems. The data were collected as part of a large-scale experiment using professional nuclear power plant operators in a full scope simulator. The analysis showed that the predictions were correct in more than 70% of the cases, and also that the coverage of the predictions depended critically on the comprehensiveness of the preceding task analysis. PMID:10582035

  3. Pronominal Case-Errors

    ERIC Educational Resources Information Center

    Kaper, Willem

    1976-01-01

    Contradicts a previous assertion by C. Tanz that children commit substitution errors usually using objective pronoun forms for nominative ones. Examples from Dutch and German provide evidence that substitutions are made in both directions. (CHK)

  4. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  5. Maternal acceptance of human papillomavirus vaccine in Malaysia.

    PubMed

    Sam, I-Ching; Wong, Li-Ping; Rampal, Sanjay; Leong, Yin-Hui; Pang, Chan-Fu; Tai, Yong-Ting; Tee, Hwee-Ching; Kahar-Bador, Maria

    2009-06-01

    Acceptability rates of human papillomavirus (HPV) vaccination by 362 Malaysian mothers were 65.7% and 55.8% for daughters and sons, respectively. Younger mothers, and those who knew someone with cancer, were more willing to vaccinate their daughters. If the vaccine was routine and cost free, acceptability rate was 97.8%. PMID:19465327

  6. The temporal dynamics of emotional acceptance: Experience, expression, and physiology.

    PubMed

    Dan-Glauser, Elise S; Gross, James J

    2015-05-01

    Emotional acceptance has begun to attract considerable attention from researchers and clinicians alike. It is not yet clear, however, what effects emotional acceptance has on early emotion response dynamics. To address this question, participants (N = 37) were shown emotional pictures and cued either to simply attend to them, or to accept or suppress their emotional responses. Continuous measures of emotion experience, expressive behavior, and autonomic responses were obtained. Results indicated that, compared to no regulation, acceptance led to more positive emotions, transiently enhanced expressivity, and lowered respiratory rate. Compared to suppression, acceptance led to more positive emotions, stronger expressivity, and smaller changes in heart rate, blood pressure, and pulse amplitude, as well as greater oxygenation. Acceptance and suppression thus have opposite effects on emotional response dynamics. Because acceptance enhances positive emotion experience and expression, this strategy may be particularly useful in facilitating social interactions. PMID:25782407

  7. Error-Compensated Telescope

    NASA Technical Reports Server (NTRS)

    Meinel, Aden B.; Meinel, Marjorie P.; Stacy, John E.

    1989-01-01

    Proposed reflecting telescope includes large, low-precision primary mirror stage and small, precise correcting mirror. Correcting mirror machined under computer control to compensate for error in primary mirror. Correcting mirror machined by diamond cutting tool. Computer analyzes interferometric measurements of primary mirror to determine shape of surface of correcting mirror needed to compensate for errors in wave front reflected from primary mirror and commands position and movement of cutting tool accordingly.

  8. Dose error analysis for a scanned proton beam delivery system

    NASA Astrophysics Data System (ADS)

    Coutrakon, G.; Wang, N.; Miller, D. W.; Yang, Y.

    2010-12-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm^3 target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
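
    A one-dimensional toy version of the paper's simulation approach, with invented beam parameters: Gaussian pencil-beam spots are re-delivered many times with random position and intensity errors, and the per-voxel rms dose deviation is reported as a percentage of the nominal dose.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D toy of the paper's approach: a uniform target painted by Gaussian
# pencil-beam spots, re-delivered many times with random spot position
# and intensity errors. All numbers are illustrative.
x = np.linspace(0, 80, 161)                 # mm, voxel centers
spots = np.arange(5, 76, 5.0)               # planned spot positions, mm
sigma = 5.0                                 # pencil-beam width, mm

def deliver(pos_sd=0.5, int_sd=0.01):
    dose = np.zeros_like(x)
    for s in spots:
        s_err = s + rng.normal(0, pos_sd)          # spot position error
        w = 1.0 + rng.normal(0, int_sd)            # intensity fluctuation
        dose += w * np.exp(-0.5 * ((x - s_err) / sigma) ** 2)
    return dose

nominal = deliver(pos_sd=0.0, int_sd=0.0)
runs = np.array([deliver() for _ in range(200)])

# rms dose deviation per voxel, as a % of the local nominal dose,
# evaluated inside the flat central region of the target.
central = (x > 20) & (x < 60)
rms_pct = 100 * np.sqrt(np.mean((runs - nominal) ** 2, axis=0)) / nominal
print(f"max rms dose error in target: {rms_pct[central].max():.2f}%")
```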

  9. Determination and Modeling of Error Densities in Ephemeris Prediction

    SciTech Connect

    Jones, J.P.; Beckerman, M.

    1999-02-07

    The authors determined error densities of ephemeris predictions for 14 LEO satellites. The empirical distributions are not inconsistent with the hypothesis of a Gaussian distribution. The growth rate of radial errors is most highly correlated with eccentricity (|r| = 0.63, α < 0.05). The growth rate of along-track errors is most highly correlated with the decay rate of the semimajor axis (|r| = 0.97; α < 0.01).

  10. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Nagel, David C.

    1988-01-01

    The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.

  11. Children acceptance of laser dental treatment

    NASA Astrophysics Data System (ADS)

    Lazea, Andreea; Todea, Carmen

    2016-03-01

    Objectives: To evaluate the dental anxiety level and the degree of acceptance of laser assisted pedodontic treatments on the part of the children. Also, we want to underline the advantages of laser use in pediatric dentistry, to make this technology widely used in treating dental problems of our children patients. Methods: Thirty pediatric dental patients seen in the Department of Pedodontics, University of Medicine and Pharmacy "Victor Babeş", Timişoara were evaluated using the Wong-Baker pain rating scale, which was administered postoperatively to all patients to assess their level of laser therapy acceptance. Results: The Wong-Baker faces pain rating scale (WBFPS) has good validity and high specificity; generally it is easy for children to use, easy to compare and has good feasibility. Laser treatment has been accepted and tolerated by pediatric patients for its ability to reduce or eliminate pain. Around 70% of the total sample showed an excellent acceptance of laser dental treatment. Conclusions: Laser technology is useful and effective in many clinical situations encountered in pediatric dentistry, and a good level of patient acceptance is reported during all laser procedures on hard and soft tissues.

  12. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix is designed using visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
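
    The quantize/dequantize step that the patent optimizes can be sketched as follows; the standard JPEG luminance quantization matrix is used here purely as a stand-in for the patent's perceptually derived matrix.

```python
import numpy as np
from scipy.fft import dctn, idctn

# One 8x8 block through the DCT -> quantize -> dequantize -> inverse DCT
# pipeline. The standard JPEG luminance matrix below is a stand-in for
# the patent's perceptually optimized quantization matrix.
Q = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float) - 128   # centered pixels

coeffs = dctn(block, norm="ortho")
quantized = np.round(coeffs / Q)            # coarse where vision is dull
reconstructed = idctn(quantized * Q, norm="ortho") + 128

err = np.sqrt(np.mean((reconstructed - (block + 128)) ** 2))
print(f"rms reconstruction error: {err:.1f} gray levels")
# Larger Q entries at high frequencies discard the components the eye is
# least sensitive to, trading bit rate against perceptual error.
```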

  13. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  14. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
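
    The free distance that underlies these bounds can be computed by a shortest-path search over the encoder's state diagram. The sketch below does this for the standard rate-1/2 (7,5) encoder (an illustrative code, not one from the paper); the paper's per-input effective free distances generalize this quantity.

```python
import heapq

# Free distance of the rate-1/2 (7,5)_octal convolutional code via a
# Dijkstra search over the state diagram: the minimum output weight of a
# path that diverges from the all-zero state and later remerges with it.
G1, G2 = 0b111, 0b101        # generators acting on [input, s1, s0]

def step(state, bit):
    """Return (next_state, output_weight) for a 2-bit state register."""
    reg = (bit << 2) | state
    out1 = bin(reg & G1).count("1") & 1
    out2 = bin(reg & G2).count("1") & 1
    return (reg >> 1), out1 + out2

# Diverge from state 0 with input 1, then find the min-weight return path.
start, w0 = step(0, 1)
dist = {start: w0}
heap = [(w0, start)]
while heap:
    w, s = heapq.heappop(heap)
    if s == 0:
        print(f"free distance = {w}")     # expected: 5 for (7,5)
        break
    if w > dist.get(s, float("inf")):
        continue                          # stale queue entry
    for bit in (0, 1):
        ns, dw = step(s, bit)
        if w + dw < dist.get(ns, float("inf")):
            dist[ns] = w + dw
            heapq.heappush(heap, (w + dw, ns))
```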

  15. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
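
    For context, the conventional smoothing-error covariance being critiqued is, in the Rodgers retrieval formalism, S_s = (A − I) S_a (A − I)^T. The sketch below evaluates it on a toy grid with an assumed a priori covariance, which is exactly the grid-dependent step the paper cautions about.

```python
import numpy as np

# Conventional "smoothing error" covariance in the Rodgers formalism:
# S_s = (A - I) S_a (A - I)^T, with A the averaging-kernel matrix and
# S_a the a priori (here: assumed climatological) covariance of the
# fine-grid state. All numbers are toy values.
n = 20
levels = np.arange(n)

# Assumed exponentially correlated a priori covariance.
corr_len, sigma = 3.0, 2.0
S_a = sigma**2 * np.exp(-np.abs(levels[:, None] - levels[None, :]) / corr_len)

# Toy averaging kernel: each retrieved level is a smooth weighted mean of
# neighboring true levels (rows normalized to sum to one).
A = np.exp(-0.5 * ((levels[:, None] - levels[None, :]) / 1.5) ** 2)
A /= A.sum(axis=1, keepdims=True)

S_s = (A - np.eye(n)) @ S_a @ (A - np.eye(n)).T
print("smoothing-error std dev per level:")
print(np.sqrt(np.diag(S_s)).round(3))
# The paper's caveat: S_s quantifies deviation from the state *sampled on
# this grid*, and its value depends on how S_a was evaluated on that grid.
```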

  16. Recognition errors by honey bee (Apis mellifera) guards demonstrate overlapping cues in conspecific recognition

    PubMed Central

    Couvillon, Margaret J; Roy, Gabrielle G F; Ratnieks, Francis L W

    2015-01-01

    Honey bee (Apis mellifera) entrance guards discriminate nestmates from intruders. We tested the hypothesis that the recognition cues of nestmate and intruder bees overlap by comparing guards' acceptance of bees with their acceptance of worker common wasps, Vespula vulgaris. If the recognition cues of nestmate and non-nestmate bees overlap, we would expect recognition errors. Conversely, we hypothesised that guards would not make errors in recognizing wasps, because wasps and bees should have distinct, non-overlapping cues. We found both to be true. There was a negative correlation between errors in recognizing nestmate bees (error: reject nestmate) and non-nestmate bees (error: accept non-nestmate), such that when guards were likely to reject nestmates, they were less likely to accept a non-nestmate; conversely, when guards were likely to accept a non-nestmate, they were less likely to reject a nestmate. There was, however, no correlation between errors in the recognition of nestmate bees (error: reject nestmate) and wasps (error: accept wasp), demonstrating that guards were able to reject wasps categorically. Our results strongly support the idea that overlapping cue distributions occur, resulting in errors and leading to adaptive shifts in guard acceptance thresholds. PMID:26005220

  17. The Nature of Error in Adolescent Student Writing

    ERIC Educational Resources Information Center

    Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang

    2014-01-01

    This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…

  18. Using Errors by Guard Honeybees (Apis mellifera) to Gain New Insights into Nestmate Recognition Signals.

    PubMed

    Pradella, Duccio; Martin, Stephen J; Dani, Francesca R

    2015-11-01

    Although the honeybee (Apis mellifera) is one of the world's most studied insects, the chemical compounds used in nestmate recognition remain an open question. By exploiting the error-prone recognition system of the honeybee, coupled with genotyping, we studied the correlation between the cuticular hydrocarbon (CHC) profile of returning foragers and acceptance or rejection behavior by guards. We revealed an average recognition error rate of 14% across 3 study colonies, that is, allowing a non-nestmate colony entry or preventing a nestmate from entry, which is lower than reported in previous studies. By analyzing CHCs, we found that the CHC profile of returning foragers correlates with acceptance or rejection by guarding bees. Although several CHCs were identified as potential recognition cues, only a subset of 4 differed consistently in their relative amounts between accepted and rejected individuals in the 3 studied colonies. These include a unique group of 2 positional alkene isomers (Z-8 and Z-10), which are almost exclusively produced by Bombus and Apis spp., and may be candidate compounds for further study. PMID:26385960

  19. Experimental Quantum Error Detection

    PubMed Central

    Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi

    2012-01-01

    Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047

  20. An error control system with multiple-stage forward error corrections

    NASA Technical Reports Server (NTRS)

    Takata, Toyoo; Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1990-01-01

    A robust error-control coding system is presented. This system is a cascaded FEC (forward error control) scheme supported by parity retransmissions for further error correction in the erroneous data words. The error performance and throughput efficiency of the system are analyzed. Two specific examples of the error-control system are studied. The first example does not use an inner code, and the outer code, which is not interleaved, is a shortened code of the NASA standard RS code over GF(2^8). The second example, as proposed for NASA, uses the same shortened RS code as the base outer code C2, except that it is interleaved to a depth of 2. It is shown that both examples provide high reliability and throughput efficiency even for high channel bit-error rates in the range of 0.01.

  1. RANDOM AND SYSTEMATIC FIELD ERRORS IN THE SNS RING: A STUDY OF THEIR EFFECTS AND COMPENSATION

    SciTech Connect

    GARDNER,C.J.; LEE,Y.Y.; WENG,W.T.

    1998-06-22

    The Accumulator Ring for the proposed Spallation Neutron Source (SNS) [1] is to accept a 1 ms beam pulse from a 1 GeV Proton Linac at a repetition rate of 60 Hz. For each beam pulse, 10^14 protons (some 1,000 turns) are to be accumulated via charge-exchange injection and then promptly extracted to an external target for the production of neutrons by spallation. At this very high intensity, stringent limits (less than two parts in 10,000 per pulse) on beam loss during accumulation must be imposed in order to keep activation of ring components at an acceptable level. To stay within the desired limit, the effects of random and systematic field errors in the ring require careful attention. This paper describes the authors' studies of these effects and the magnetic corrector schemes for their compensation.

  2. Surprise beyond prediction error

    PubMed Central

    Chumbley, Justin R; Burke, Christopher J; Stephan, Klaas E; Friston, Karl J; Tobler, Philippe N; Fehr, Ernst

    2014-01-01

    Surprise drives learning. Various neural “prediction error” signals are believed to underpin surprise-based reinforcement learning. Here, we report a surprise signal that reflects reinforcement learning but is neither un/signed reward prediction error (RPE) nor un/signed state prediction error (SPE). To exclude these alternatives, we measured surprise responses in the absence of RPE and accounted for a host of potential SPE confounds. This new surprise signal was evident in ventral striatum, primary sensory cortex, frontal poles, and amygdala. We interpret these findings via a normative model of surprise. PMID:24700400

  3. Evolution of error diffusion

    NASA Astrophysics Data System (ADS)

    Knox, Keith T.

    1999-10-01

    As we approach the new millennium, error diffusion is approaching the 25th anniversary of its invention. Because of its exceptionally high image quality, it continues to be a popular choice among digital halftoning algorithms. Over the last 24 years, many attempts have been made to modify and improve the algorithm--to eliminate unwanted textures and to extend it to printing media and color. Some of these modifications have been very successful and are in use today. This paper will review the history of the algorithm and its modifications. Three watershed events in the development of error diffusion will be described, together with the lessons learned along the way.

  4. Evolution of error diffusion

    NASA Astrophysics Data System (ADS)

    Knox, Keith T.

    1998-12-01

    As we approach the new millennium, error diffusion is approaching the 25th anniversary of its invention. Because of its exceptionally high image quality, it continues to be a popular choice among digital halftoning algorithms. Over the last 24 years, many attempts have been made to modify and improve the algorithm - to eliminate unwanted textures and to extend it to printing media and color. Some of these modifications have been very successful and are in use today. This paper will review the history of the algorithm and its modifications. Three watershed events in the development of error diffusion will be described, together with the lessons learned along the way.
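
    The original Floyd-Steinberg form of the algorithm is compact enough to state in full; the sketch below binarizes a grayscale ramp, pushing each pixel's quantization error onto unprocessed neighbors with the classic 7/16, 3/16, 5/16, 1/16 weights.

```python
import numpy as np

def floyd_steinberg(img):
    """Classic Floyd-Steinberg error diffusion: binarize a grayscale
    image in [0, 1], diffusing each pixel's quantization error onto
    unprocessed neighbors with the 7/16, 3/16, 5/16, 1/16 weights."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out

# A horizontal gray ramp halftones to a pattern whose local black/white
# density tracks the input level.
ramp = np.tile(np.linspace(0, 1, 64), (16, 1))
halftone = floyd_steinberg(ramp)
print(f"mean input {ramp.mean():.3f} vs mean halftone {halftone.mean():.3f}")
```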

  5. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  6. Sonic boom acceptability studies

    NASA Astrophysics Data System (ADS)

    Shepherd, Kevin P.; Sullivan, Brenda M.; Leatherwood, Jack D.; McCurdy, David A.

    1992-04-01

    The determination of the magnitude of sonic boom exposure which would be acceptable to the general population requires, as a starting point, a method to assess and compare individual sonic booms. There is no consensus within the scientific and regulatory communities regarding an appropriate sonic boom assessment metric. Loudness, being a fundamental and well-understood attribute of human hearing was chosen as a means of comparing sonic booms of differing shapes and amplitudes. The figure illustrates the basic steps which yield a calculated value of loudness. Based upon the aircraft configuration and its operating conditions, the sonic boom pressure signature which reaches the ground is calculated. This pressure-time history is transformed to the frequency domain and converted into a one-third octave band spectrum. The essence of the loudness method is to account for the frequency response and integration characteristics of the auditory system. The result of the calculation procedure is a numerical description (perceived level, dB) which represents the loudness of the sonic boom waveform.
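
    The middle steps of the described pipeline, from a pressure-time history to one-third-octave band levels, can be sketched as below with an idealized N-wave. Amplitudes and durations are illustrative, band levels are relative (absolute calibration omitted), and the final auditory weighting that yields perceived level in dB is not reproduced.

```python
import numpy as np

# From a pressure-time history to one-third-octave band levels -- the
# intermediate step of the loudness pipeline described above.
fs = 20000.0                            # sample rate, Hz
t = np.arange(0, 0.4, 1 / fs)

# Idealized N-wave sonic boom: 50 Pa peak, 300 ms duration (illustrative).
dur, peak = 0.3, 50.0
p = np.where(t < dur, peak * (1 - 2 * t / dur), 0.0)

spec = np.fft.rfft(p) / len(p)
freqs = np.fft.rfftfreq(len(p), 1 / fs)
power = 2 * np.abs(spec) ** 2           # one-sided power spectrum

p_ref = 20e-6                           # reference pressure, Pa
print(" band center (Hz)   level (dB, relative)")
for k in range(10, 34):                 # nominal bands ~10 Hz .. ~2 kHz
    fc = 10 ** (0.1 * k)                # base-10 third-octave centers
    lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)
    band = power[(freqs >= lo) & (freqs < hi)].sum()
    if band > 0:
        print(f"{fc:16.1f}  {10 * np.log10(band / p_ref**2):10.1f}")
```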

  7. Sonic boom acceptability studies

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P.; Sullivan, Brenda M.; Leatherwood, Jack D.; Mccurdy, David A.

    1992-01-01

    The determination of the magnitude of sonic boom exposure which would be acceptable to the general population requires, as a starting point, a method to assess and compare individual sonic booms. There is no consensus within the scientific and regulatory communities regarding an appropriate sonic boom assessment metric. Loudness, being a fundamental and well-understood attribute of human hearing was chosen as a means of comparing sonic booms of differing shapes and amplitudes. The figure illustrates the basic steps which yield a calculated value of loudness. Based upon the aircraft configuration and its operating conditions, the sonic boom pressure signature which reaches the ground is calculated. This pressure-time history is transformed to the frequency domain and converted into a one-third octave band spectrum. The essence of the loudness method is to account for the frequency response and integration characteristics of the auditory system. The result of the calculation procedure is a numerical description (perceived level, dB) which represents the loudness of the sonic boom waveform.

  8. Effect of channel errors on delta modulation transmission

    NASA Technical Reports Server (NTRS)

    Rosenberg, W. J.

    1973-01-01

    We have considered the response of a variable step size delta modulator communication system, to errors caused by a noisy channel. For the particular adaptive delta modulation scheme proposed by Song, Garodnick, and Schilling (1971), we have a simple analytic formulation of the output error propagation due to a single channel error. It is shown that single channel errors cause a change in the amplitude and dc level of the output, but do not otherwise affect the shape of the output waveform. At low channel error rates, these effects do not cause any degradation in audio transmission. Higher channel error rates cause overflow or saturation of the step size register. We present relationships between channel error rate, register size, and the probability of register overflow.
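
    A minimal sketch of the single-error effect described above, using plain fixed-step (non-adaptive) delta modulation rather than the Song-Garodnick-Schilling scheme: one flipped channel bit shifts the reconstructed output by a constant dc offset of twice the step size while leaving the waveform shape intact.

        import numpy as np

        STEP = 0.1

        def dm_encode(signal):
            bits, est = [], 0.0
            for s in signal:
                bit = 1 if s >= est else 0
                est += STEP if bit else -STEP
                bits.append(bit)
            return bits

        def dm_decode(bits):
            est, out = 0.0, []
            for bit in bits:
                est += STEP if bit else -STEP
                out.append(est)
            return np.array(out)

        t = np.linspace(0, 1, 400)
        x = np.sin(2 * np.pi * 3 * t)
        bits = dm_encode(x)
        clean = dm_decode(bits)
        bits[100] ^= 1                       # inject a single channel error
        corrupted = dm_decode(bits)
        diff = corrupted - clean
        print(f"offset after the error: {diff[150]:+.2f} "
              f"(constant from the flip onward: {bool(np.allclose(diff[101:], diff[101]))})")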

  9. Relationship between Recent Flight Experience and Pilot Error General Aviation Accidents

    NASA Astrophysics Data System (ADS)

    Nilsson, Sarah J.

    Aviation insurance agents and fixed-base operation (FBO) owners use recent flight experience, as implied by the 90-day rule, to measure pilot proficiency in physical airplane skills, and to assess the likelihood of a pilot error accident. The generally accepted premise is that more experience in a recent timeframe predicts less of a propensity for an accident, all other factors excluded. Some of these aviation industry stakeholders measure pilot proficiency solely by using time flown within the past 90, 60, or even 30 days, not accounting for extensive research showing aeronautical decision-making and situational awareness training decrease the likelihood of a pilot error accident. In an effort to reduce the pilot error accident rate, the Federal Aviation Administration (FAA) has seen the need to shift pilot training emphasis from proficiency in physical airplane skills to aeronautical decision-making and situational awareness skills. However, current pilot training standards still focus more on the former than on the latter. The relationship between pilot error accidents and recent flight experience implied by the FAA's 90-day rule has not been rigorously assessed using empirical data. The intent of this research was to relate recent flight experience, in terms of time flown in the past 90 days, to pilot error accidents. A quantitative ex post facto approach, focusing on private pilots of single-engine general aviation (GA) fixed-wing aircraft, was used to analyze National Transportation Safety Board (NTSB) accident investigation archival data. The data were analyzed using t-tests and binary logistic regression. T-tests between the mean number of hours of recent flight experience of tricycle gear pilots involved in pilot error accidents (TPE) and non-pilot error accidents (TNPE), t(202) = -.200, p = .842, and conventional gear pilots involved in pilot error accidents (CPE) and non-pilot error accidents (CNPE), t(111) = -.271, p = .787, indicate there is no

  10. Automatically generated acceptance test: A software reliability experiment

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.

    1988-01-01

    This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.

  11. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define system; Identify human-machine interfaces; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.

  12. Help prevent hospital errors

    MedlinePlus


  13. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  14. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  15. Spin glasses and error-correcting codes

    NASA Technical Reports Server (NTRS)

    Belongie, M. L.

    1994-01-01

    In this article, we study a model for error-correcting codes that comes from spin glass theory and leads to both new codes and a new decoding technique. Using the theory of spin glasses, it has been proven that a simple construction yields a family of binary codes whose performance asymptotically approaches the Shannon bound for the Gaussian channel. The limit is approached as the number of information bits per codeword approaches infinity while the rate of the code approaches zero. Thus, the codes rapidly become impractical. We present simulation results that show the performance of a few manageable examples of these codes. In the correspondence that exists between spin glasses and error-correcting codes, the concept of a thermal average leads to a method of decoding that differs from the standard method of finding the most likely information sequence for a given received codeword. Whereas the standard method corresponds to calculating the thermal average at temperature zero, calculating the thermal average at a certain optimum temperature results instead in the sequence of most likely information bits. Since linear block codes and convolutional codes can be viewed as examples of spin glasses, this new decoding method can be used to decode these codes in a way that minimizes the bit error rate instead of the codeword error rate. We present simulation results that show a small improvement in bit error rate by using the thermal average technique.
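
    A toy sketch of the decoding distinction drawn above, for an invented [5,2] linear block code on a binary symmetric channel: zero-temperature decoding selects the single most likely codeword, while the thermal-average (bitwise) rule marginalizes the posterior over all codewords and selects each bit separately, which is what minimizes bit error rate. For a code this small the two answers often coincide.

        import itertools
        import numpy as np

        G = np.array([[1, 0, 1, 1, 0],
                      [0, 1, 0, 1, 1]])             # generator matrix of a toy [5,2] code
        P_FLIP = 0.2                                # binary symmetric channel error rate

        codebook = np.array([(np.array(m) @ G) % 2
                             for m in itertools.product([0, 1], repeat=2)])

        def decode(received):
            # Posterior over codewords: p(c|r) proportional to p^d (1-p)^(n-d), d = Hamming distance.
            d = (codebook != received).sum(axis=1)
            post = P_FLIP ** d * (1 - P_FLIP) ** (len(received) - d)
            post /= post.sum()
            ml_word = codebook[post.argmax()]               # "temperature zero": best codeword
            bitwise = (post @ codebook >= 0.5).astype(int)  # "thermal average": best bits
            return ml_word, bitwise

        r = np.array([1, 1, 1, 0, 0])
        ml, marginal = decode(r)
        print("most likely codeword:", ml, "| bitwise most likely bits:", marginal)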

  16. [The error, source of learning].

    PubMed

    Joyeux, Stéphanie; Bohic, Valérie

    2016-05-01

    The error itself is not recognised as a fault. It is the intentionality which differentiates between an error and a fault. An error is unintentional while a fault is a failure to respect known rules. The risk of error is omnipresent in health institutions. Public authorities have therefore set out a series of measures to reduce this risk. PMID:27155272

  17. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  18. Neural Correlates of Reach Errors

    PubMed Central

    Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza

    2005-01-01

    Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showed strong trial-by-trial adaptation in response to random execution errors but not in response to random target errors. We used fMRI and a compatible robot to study brain regions involved in processing each kind of error. Both kinematic and dynamic execution errors activated regions along the central and the post-central sulci and in lobules V, VI, and VIII of the cerebellum, making these areas possible sites of plastic changes in internal models for reaching. Only activity related to kinematic errors extended into parietal area 5. These results are inconsistent with the idea that kinematics and dynamics of reaching are computed in separate neural entities. In contrast, only target errors caused increased activity in the striatum and the posterior superior parietal lobule. The cerebellum and motor cortex were as strongly activated as with execution errors. These findings indicate a neural and behavioral dissociation between errors that lead to switching of behavioral goals, and errors that lead to adaptation of internal models of limb dynamics and kinematics. PMID:16251440

  19. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
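
    A minimal sketch of the interval idea, with invented operand uncertainties: each quantity carries [lo, hi] bounds, arithmetic propagates them, and the width of the result bounds the numerical error without term-by-term propagation formulas. (Packages such as INTLAB automate this.)

        class Interval:
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi

            def __add__(self, other):
                return Interval(self.lo + other.lo, self.hi + other.hi)

            def __mul__(self, other):
                # Product bounds come from the four endpoint products.
                p = [self.lo * other.lo, self.lo * other.hi,
                     self.hi * other.lo, self.hi * other.hi]
                return Interval(min(p), max(p))

            def __repr__(self):
                return f"[{self.lo:.4f}, {self.hi:.4f}]"

        # z = x*y + x, with x and y each known only to within +/-0.01
        x = Interval(1.99, 2.01)
        y = Interval(2.99, 3.01)
        z = x * y + x
        print(f"z = {z}, width {z.hi - z.lo:.4f}")   # guaranteed enclosure of the true z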

  20. The Insufficiency of Error Analysis

    ERIC Educational Resources Information Center

    Hammarberg, B.

    1974-01-01

    The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…

  1. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  2. Foliated Quantum Error-Correcting Codes

    NASA Astrophysics Data System (ADS)

    Bolt, A.; Duclos-Cianci, G.; Poulin, D.; Stace, T. M.

    2016-08-01

    We show how to construct a large class of quantum error-correcting codes, known as Calderbank-Steane-Shor codes, from highly entangled cluster states. This becomes a primitive in a protocol that foliates a series of such cluster states into a much larger cluster state, implementing foliated quantum error correction. We exemplify this construction with several familiar quantum error-correction codes and propose a generic method for decoding foliated codes. We numerically evaluate the error-correction performance of a family of finite-rate Calderbank-Steane-Shor codes known as turbo codes, finding that they perform well over moderate depth foliations. Foliated codes have applications for quantum repeaters and fault-tolerant measurement-based quantum computation.

  3. Foliated Quantum Error-Correcting Codes.

    PubMed

    Bolt, A; Duclos-Cianci, G; Poulin, D; Stace, T M

    2016-08-12

    We show how to construct a large class of quantum error-correcting codes, known as Calderbank-Steane-Shor codes, from highly entangled cluster states. This becomes a primitive in a protocol that foliates a series of such cluster states into a much larger cluster state, implementing foliated quantum error correction. We exemplify this construction with several familiar quantum error-correction codes and propose a generic method for decoding foliated codes. We numerically evaluate the error-correction performance of a family of finite-rate Calderbank-Steane-Shor codes known as turbo codes, finding that they perform well over moderate depth foliations. Foliated codes have applications for quantum repeaters and fault-tolerant measurement-based quantum computation. PMID:27563942

  4. Manson's triple error.

    PubMed

    Delaporte, F

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  5. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  6. Using Bit Errors To Diagnose Fiber-Optic Links

    NASA Technical Reports Server (NTRS)

    Bergman, L. A.; Hartmayer, R.; Marelid, S.

    1989-01-01

    Technique for diagnosis of fiber-optic digital communication link in local-area network of computers based on measurement of bit-error rates. Variable optical attenuator inserted in optical fiber to vary power of received signal. Bit-error rate depends on ratio of peak signal power to root-mean-square noise in receiver. For optimum measurements, one selects bit-error rate between 10 to negative 8th power and 10 to negative 4th power. Greater rates result in low accuracy in determination of signal-to-noise ratios, while lesser rates require impractically long measurement times.
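
    A short sketch of the trade-off described above, assuming the standard Gaussian-noise model in which BER = 0.5 erfc(Q/sqrt(2)) for peak-signal to rms-noise ratio Q; the 100 Mb/s bit rate and 100-error count are invented. Low error rates determine Q sharply but take long to measure.

        import math

        def q_from_ber(ber, lo=0.0, hi=10.0):
            """Invert BER = 0.5*erfc(Q/sqrt(2)) for Q by bisection."""
            for _ in range(60):
                mid = (lo + hi) / 2
                if 0.5 * math.erfc(mid / math.sqrt(2)) > ber:
                    lo = mid           # computed BER too high: Q must be larger
                else:
                    hi = mid
            return (lo + hi) / 2

        BIT_RATE = 100e6          # 100 Mb/s link (illustrative)
        ERRORS_NEEDED = 100       # errors to count for a stable rate estimate

        for ber in (1e-4, 1e-6, 1e-8):
            seconds = ERRORS_NEEDED / (ber * BIT_RATE)
            print(f"BER {ber:.0e}: Q = {q_from_ber(ber):.2f}, "
                  f"~{seconds:.1f} s to count {ERRORS_NEEDED} errors")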

  7. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  8. 5 CFR 531.409 - Acceptable level of competence determinations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... REGULATIONS PAY UNDER THE GENERAL SCHEDULE Within-Grade Increases § 531.409 Acceptable level of competence... competence in his or her current position, and the employee has not been given a performance rating in any... acceptable level of competence, the within-grade increase will be granted retroactively to the beginning...

  9. The Impact of Logistical Resources on Prereferral Team Acceptability

    ERIC Educational Resources Information Center

    Yetter, Georgette; Doll, Beth

    2007-01-01

    This study investigated the impact of logistical resources on the acceptability of student assistance team consultation to school staff. Elementary and middle school staff (N=113) completed a measure of the acceptability of prereferral intervention team procedures while also rating the importance of five logistical supports for effective team…

  10. Human Error In Complex Systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1991-01-01

    Report presents results of research aimed at understanding causes of human error in such complex systems as aircraft, nuclear powerplants, and chemical processing plants. Research considered both slips (errors of action) and mistakes (errors of intention), and influence of workload on them. Results indicated that humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors, and that adaptation to conditions is a potent influence on human behavior in discretionary situations.

  11. Encoding of Sensory Prediction Errors in the Human Cerebellum

    PubMed Central

    Schlerf, John; Ivry, Richard B.; Diedrichsen, Jörn

    2015-01-01

    A central tenet of motor neuroscience is that the cerebellum learns from sensory prediction errors. Surprisingly, neuroimaging studies have not revealed definitive signatures of error processing in the cerebellum. Furthermore, neurophysiologic studies suggest an asymmetry, such that the cerebellum may encode errors arising from unexpected sensory events, but not errors reflecting the omission of expected stimuli. We conducted an imaging study to compare the cerebellar response to these two types of errors. Participants made fast out-and-back reaching movements, aiming either for an object that delivered a force pulse if intersected or for a gap between two objects, either of which delivered a force pulse if intersected. Errors (missing the target) could therefore be signaled either through the presence or absence of a force pulse. In an initial analysis, the cerebellar BOLD response was smaller on trials with errors compared with trials without errors. However, we also observed an error-related decrease in heart rate. After correcting for variation in heart rate, increased activation during error trials was observed in the hand area of lobules V and VI. This effect was similar for the two error types. The results provide evidence for the encoding of errors resulting from either the unexpected presence or unexpected absence of sensory stimulation in the human cerebellum. PMID:22492047

  12. Risk comparisons, conflict, and risk acceptability claims.

    PubMed

    Johnson, Branden B

    2004-02-01

    Despite many claims for and against the use of risk comparisons in risk communication, few empirical studies have explored their effect. Even fewer have examined the public's relative preferences among different kinds of risk comparisons. Two studies, published in this journal in 1990 and 2003, used seven measures of "acceptability" to examine public reaction to 14 examples of risk comparisons, as used by a hypothetical factory manager to explain risks of his ethylene oxide plant. This study examined the effect on preferences of scenarios involving low or high conflict between the factory manager and residents of the hypothetical town (as had the 2003 study), and inclusion of a claim that the comparison demonstrated the risks' acceptability. It also tested the Finucane et al. (2000) affect hypothesis that information emphasizing low risks-as in these risk comparisons-would raise benefits estimates without changing risk estimates. Using similar but revised scenarios, risk comparison examples (10 instead of 14), and evaluation measures, an opportunity sample of 303 New Jersey residents rated the comparisons, and the risks and benefits of the factory. On average, all comparisons received positive ratings on all evaluation measures in all conditions. Direct and indirect measures showed that the conflict manipulation worked; overall, No-Conflict and Conflict scenarios evoked scores that were not significantly different. The attachment to each risk comparison of a risk acceptability claim ("So our factory's risks should be acceptable to you.") did not worsen ratings relative to conditions lacking this claim. Readers who did or did not see this claim were equally likely to infer an attempt to persuade them to accept the risk from the comparison. As in the 2003 article, there was great individual variability in inferred rankings of the risk comparisons. However, exposure to the risk comparisons did not reduce risk estimates significantly (while raising benefit estimates

  13. [The notion and classification of expert errors].

    PubMed

    Klevno, V A

    2012-01-01

    The author presents the analysis of the legal and forensic medical literature concerning currently accepted concepts and classification of expert malpractice. He proposes a new easy-to-remember definition of the expert error and considers the classification of such mistakes. The analysis of the cases of erroneous application of the medical criteria for estimation of the harm to health made it possible to reveal and systematize the causes accounting for the cases of expert malpractice committed by forensic medical experts and health providers when determining the degree of harm to human health. PMID:22686055

  14. Accepters and Rejecters of Counseling.

    ERIC Educational Resources Information Center

    Rose, Harriett A.; Elton, Charles F.

    Personality differences between students who accept or reject proffered counseling assistance were investigated by comparing personality traits of 116 male students at the University of Kentucky who accepted or rejected letters of invitation to group counseling. Factor analysis of Omnibus Personality Inventory (OPI) scores to two groups of 60 and…

  15. Cone penetrometer acceptance test report

    SciTech Connect

    Boechler, G.N.

    1996-09-19

    This Acceptance Test Report (ATR) documents the results of acceptance test procedure WHC-SD-WM-ATR-151. Included in this report is a summary of the tests, the results and issues, the signature and sign-off ATP pages, and a summary table mapping each specification to the ATP section that satisfied it.

  16. Freeform solar concentrator with a highly asymmetric acceptance cone

    NASA Astrophysics Data System (ADS)

    Wheelwright, Brian; Angel, J. Roger P.; Coughenour, Blake; Hammer, Kimberly

    2014-10-01

    A solar concentrator with a highly asymmetric acceptance cone is investigated. Concentrating photovoltaic systems require dual-axis sun tracking to maintain nominal concentration throughout the day. In addition to collecting direct rays from the solar disk, which subtends ~0.53 degrees, concentrating optics must allow for in-field tracking errors due to mechanical misalignment of the module, wind loading, and control loop biases. The angular range over which the concentrator maintains >90% of on-axis throughput is defined as the optical acceptance angle. Concentrators with substantial rotational symmetry likewise exhibit rotationally symmetric acceptance angles. In the field, this is sometimes a poor match with azimuth-elevation trackers, which have inherently asymmetric tracking performance. Pedestal-mounted trackers with low torsional stiffness about the vertical axis have better elevation tracking than azimuthal tracking. Conversely, trackers which rotate on large-footprint circular tracks are often limited by elevation tracking performance. We show that a line-focus concentrator, composed of a parabolic trough primary reflector and freeform refractive secondary, can be tailored to have a highly asymmetric acceptance angle. The design is suitable for a tracker with excellent tracking accuracy in the elevation direction, and poor accuracy in the azimuthal direction. In the 1000X design given, when trough optical errors (2 mrad rms slope deviation) are accounted for, the azimuthal acceptance angle is +/-1.65°, while the elevation acceptance angle is only +/-0.29°. This acceptance angle does not include the angular width of the sun, which consumes nearly all of the elevation tolerance at this concentration level. By decreasing the average concentration, the elevation acceptance angle can be increased. This is well-suited for a pedestal alt-azimuth tracker with a low cost slew bearing (without anti-backlash features).

  17. Error Threshold of Fully Random Eigen Model

    NASA Astrophysics Data System (ADS)

    Li, Duo-Fang; Cao, Tian-Guang; Geng, Jin-Peng; Qiao, Li-Hua; Gu, Jian-Zhong; Zhan, Yong

    2015-01-01

    Species evolution is essentially a random process of interaction between biological populations and their environments. As a result, some physical parameters in evolution models are subject to statistical fluctuations. In this work, two important parameters in the Eigen model, the fitness and mutation rate, are treated as Gaussian distributed random variables simultaneously to examine the property of the error threshold. Numerical simulation results show that the error threshold in the fully random model appears as a crossover region instead of a phase transition point, and as the fluctuation strength increases the crossover region becomes smoother and smoother. Furthermore, it is shown that the randomization of the mutation rate plays a dominant role in changing the error threshold in the fully random model, which is consistent with the existing experimental data. The implication of the threshold change due to the randomization for antiviral strategies is discussed.
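
    A numerical sketch of the reported crossover, using the standard single-peak approximation in which the master-sequence fraction is x = (sigma*Q - 1)/(sigma - 1) with Q = (1 - mu)^L; treating the fitness advantage sigma and the mutation rate as Gaussian random variables and averaging smooths the sharp threshold. All parameter values are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        L, SIGMA, NOISE = 50, 10.0, 0.15     # genome length, fitness peak, fluctuation strength
        mutation_rates = np.linspace(0.0, 0.1, 200)

        def master_fraction(mu, sigma):
            Q = (1.0 - mu) ** L              # probability of copying all L sites correctly
            return np.clip((sigma * Q - 1) / (sigma - 1), 0.0, None)

        deterministic = master_fraction(mutation_rates, SIGMA)
        samples = []
        for _ in range(2000):                # draw random landscapes and average
            sigma = max(1.01, rng.normal(SIGMA, NOISE * SIGMA))
            mu_scale = max(0.0, rng.normal(1.0, NOISE))
            samples.append(master_fraction(mutation_rates * mu_scale, sigma))
        averaged = np.mean(samples, axis=0)

        threshold = np.log(SIGMA) / L        # deterministic error threshold ~ ln(sigma)/L
        idx = np.searchsorted(mutation_rates, threshold)
        print(f"at mu = {threshold:.4f}: deterministic x = {deterministic[idx]:.3f}, "
              f"fluctuating average x = {averaged[idx]:.3f}")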

  18. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled

  19. Human decision error (HUMDEE) trees

    SciTech Connect

    Ostrom, L.T.

    1993-08-01

    Graphical presentations of human actions in incident and accident sequences have been used for many years. However, for the most part, human decision making has been underrepresented in these trees. This paper presents a method of incorporating the human decision process into graphical presentations of incident/accident sequences. This presentation is in the form of logic trees. These trees are called Human Decision Error Trees or HUMDEE for short. The primary benefit of HUMDEE trees is that they graphically illustrate what else the individuals involved in the event could have done to prevent either the initiation or continuation of the event. HUMDEE trees also present the alternate paths available at the operator decision points in the incident/accident sequence. This is different from the Technique for Human Error Rate Prediction (THERP) event trees. There are many uses of these trees. They can be used for incident/accident investigations to show what other courses of action were available and for training operators. The trees also have a consequence component so that not only the decision but also its consequence can be explored.

  20. Evaluating a medical error taxonomy.

    PubMed Central

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication error to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MedWatch because of the focus on the medical device and the format of reporting. PMID:12463789

  1. Acceptance and confidence of central and peripheral misinformation.

    PubMed

    Luna, Karlos; Migueles, Malen

    2009-11-01

    We examined the memory for central and peripheral information concerning a crime and the acceptance of false information. We also studied eyewitnesses' confidence in their memory. Participants were shown a video depicting a bank robbery and a questionnaire was used to introduce false central and peripheral information. The next day the participants completed a recognition task in which they rated the confidence of their responses. Performance was better for central information and participants registered more false alarms for peripheral contents. The cognitive system's limited attentional capacity and the greater information capacity of central elements may facilitate processing the more important information. The presentation of misinformation seriously impaired eyewitness memory by prompting a more lenient response criterion. Participants were more confident with central than with peripheral information. Eyewitness memory is easily distorted in peripheral aspects but it is more difficult to make mistakes with central information. However, when false information is introduced, errors in central information can be accompanied by high confidence, thus rendering them credible and legally serious. PMID:19899643

  2. Data Analysis & Statistical Methods for Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors, the number of files radiated, including the number commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis, and maximum likelihood estimation to see how much of the variability in the error rates can be explained with these. We have also used goodness of fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on the what these statistics bore out as critical drivers to the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
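
    A minimal sketch of this style of model fitting, on synthetic data rather than the JPL dataset: monthly error counts are drawn from a Poisson model with a log link in workload covariates, then the coefficients are re-estimated by maximum likelihood using Newton iterations.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 120                                   # months of synthetic history
        X = np.column_stack([
            np.ones(n),
            rng.normal(size=n),                   # standardized files radiated
            rng.normal(size=n),                   # subjective workload
            rng.normal(size=n),                   # operational novelty
        ])
        true_beta = np.array([0.5, 0.4, 0.3, 0.2])
        y = rng.poisson(np.exp(X @ true_beta))    # observed command file errors

        beta = np.zeros(4)
        for _ in range(25):                       # Newton-Raphson for the Poisson MLE
            mu = np.exp(X @ beta)
            grad = X.T @ (y - mu)                 # score vector
            hess = X.T @ (X * mu[:, None])        # Fisher information
            beta += np.linalg.solve(hess, grad)

        print("true coefficients:  ", true_beta)
        print("estimated from data:", beta.round(2))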

  3. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    EPA Science Inventory

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  4. Acceptability of alternative treatments for deviant child behavior.

    PubMed Central

    Kazdin, A E

    1980-01-01

    The acceptability of alternative treatments for deviant child behavior was evaluated in two experiments. In each experiment, clinical cases were described to undergraduate students along with four different treatments in a Replicated Latin Square Design. The treatments included reinforcement of incompatible behavior, time out from reinforcement, drug therapy, and electric shock, and the treatments were described as they were applied to children with problem behaviors. Experiment 1 developed an assessment device to evaluate treatment acceptability and examined whether treatments were rated as differentially acceptable. Experiment 2 replicated the first experiment and examined whether the severity of the presenting clinical problem influenced ratings of acceptability. The results indicated that treatments were sharply distinguished in overall acceptability. Reinforcement of incompatible behavior was more acceptable than other treatments which followed, in order, time out from reinforcement, drug therapy, and electric shock. Case severity influenced acceptability of alternative treatments with all treatments being rated as more acceptable with more severe cases. However, the strength of case severity was relatively small in relation to the different treatment conditions themselves which accounted for large portions of variance. PMID:7380752

  5. Systematic lossy forward error protection for error-resilient digital video broadcasting

    NASA Astrophysics Data System (ADS)

    Rane, Shantanu D.; Aaron, Anne; Girod, Bernd

    2004-01-01

    We present a novel scheme for error-resilient digital video broadcasting, using the Wyner-Ziv coding paradigm. We apply the general framework of systematic lossy source-channel coding to generate a supplementary bitstream that can correct transmission errors in the decoded video waveform up to a certain residual distortion. The systematic portion consists of a conventional MPEG-coded bitstream, which is transmitted over the error-prone channel without forward error correction. The supplementary bitstream is a low rate representation of the transmitted video sequence generated using Wyner-Ziv encoding. We use the conventionally decoded error-concealed MPEG video sequence as side information to decode the Wyner-Ziv bits. The decoder combines the error-prone side information and the Wyner-Ziv description to yield an improved decoded video signal. Our results indicate that, over a large range of channel error probabilities, this scheme yields superior video quality when compared with traditional forward error correction techniques employed in digital video broadcasting.

  6. Acetaminophen attenuates error evaluation in cortex.

    PubMed

    Randles, Daniel; Kam, Julia W Y; Heine, Steven J; Inzlicht, Michael; Handy, Todd C

    2016-06-01

    Acetaminophen has recently been recognized as having impacts that extend into the affective domain. In particular, double blind placebo controlled trials have revealed that acetaminophen reduces the magnitude of reactivity to social rejection, frustration, dissonance and to both negatively and positively valenced attitude objects. Given this diversity of consequences, it has been proposed that the psychological effects of acetaminophen may reflect a widespread blunting of evaluative processing. We tested this hypothesis using event-related potentials (ERPs). Sixty-two participants received acetaminophen or a placebo in a double-blind protocol and completed the Go/NoGo task. Participants' ERPs were observed following errors on the Go/NoGo task, in particular the error-related negativity (ERN; measured at FCz) and error-related positivity (Pe; measured at Pz and CPz). Results show that acetaminophen inhibits the Pe, but not the ERN, and the magnitude of an individual's Pe correlates positively with omission errors, partially mediating the effects of acetaminophen on the error rate. These results suggest that recently documented affective blunting caused by acetaminophen may best be described as an inhibition of evaluative processing. They also contribute to the growing work suggesting that the Pe is more strongly associated with conscious awareness of errors relative to the ERN. PMID:26892161

  7. Horizon sensor errors calculated by computer models compared with errors measured in orbit

    NASA Technical Reports Server (NTRS)

    Ward, K. A.; Hogan, R.; Andary, J.

    1982-01-01

    Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.

  8. Horizon Sensor Errors Calculated By Computer Models Compared With Errors Measured In Orbit

    NASA Astrophysics Data System (ADS)

    Ward, Kenneth A.; Hogan, Roger; Andary, James

    1982-06-01

    Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-7). The predicted performance is compared with actual flight history.

  9. Speech Errors, Error Correction, and the Construction of Discourse.

    ERIC Educational Resources Information Center

    Linde, Charlotte

    Speech errors have been used in the construction of production models of the phonological and semantic components of language, and for a model of interactional processes. Errors also provide insight into how speakers plan discourse and syntactic structure,. Different types of discourse exhibit different types of error. The present data are taken…

  10. Triple-Error-Correcting Codec ASIC

    NASA Technical Reports Server (NTRS)

    Jones, Robert E.; Segallis, Greg P.; Boyd, Robert

    1994-01-01

    Coder/decoder constructed on single integrated-circuit chip. Handles data in variety of formats at rates up to 300 Mbps, correcting up to 3 errors per data block of 256 to 512 bits. Helps reduce cost of transmitting data. Useful in many high-data-rate, bandwidth-limited communication systems such as; personal communication networks, cellular telephone networks, satellite communication systems, high-speed computing networks, broadcasting, and high-reliability data-communication links.

  11. Online Error Reporting for Managing Quality Control Within Radiology.

    PubMed

    Golnari, Pedram; Forsberg, Daniel; Rosipko, Beverly; Sunshine, Jeffrey L

    2016-06-01

    Information technology systems within health care, such as picture archiving and communication system (PACS) in radiology, can have a positive impact on production but can also risk compromising quality. The widespread use of PACS has removed the previous feedback loop between radiologists and technologists. Instead of direct communication of quality discrepancies found for an examination, the radiologist submitted a paper-based quality-control report. A web-based issue-reporting tool can help restore some of the feedback loop and also provide possibilities for more detailed analysis of submitted errors. The purpose of this study was to evaluate the hypothesis that data from use of an online error reporting software for quality control can focus our efforts within our department. For the 372,258 radiologic examinations conducted during the 6-month period study, 930 errors (390 exam protocol, 390 exam validation, and 150 exam technique) were submitted, corresponding to an error rate of 0.25 %. Within the category exam protocol, technologist documentation had the highest number of submitted errors in ultrasonography (77 errors [44 %]), while imaging protocol errors were the highest subtype error for computed tomography modality (35 errors [18 %]). Positioning and incorrect accession had the highest errors in the exam technique and exam validation error category, respectively, for nearly all of the modalities. An error rate less than 1 % could signify a system with a very high quality; however, a more likely explanation is that not all errors were detected or reported. Furthermore, staff reception of the error reporting system could also affect the reporting rate. PMID:26510753

  12. Final Report for Dynamic Models for Causal Analysis of Panel Data. The Impact of Measurement Error in the Analysis of Log-Linear Rate Models: Monte Carlo Findings. Part III, Chapter 4.

    ERIC Educational Resources Information Center

    Carroll, Glenn R.; And Others

    This document is part of a series of chapters described in SO 011 759. The chapter advocates the analysis of event-histories (data giving the number, timing, and sequence of changes in a categorical dependent variable) with maximum likelihood estimators (MLE) applied to log-linear rate models. Results from a Monte Carlo investigation of the impact…

  13. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
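
    A small sketch of the propagation-of-error arithmetic summarized above, with invented variances: the variance of a sum is the sum of all entries of the component covariance matrix, and a dominant body-mass term reproduces the reported pattern (covariances under 10% of the total).

        import numpy as np

        # Net water balance = intake - output + metabolic water - mass-change term.
        # Standard deviations (arbitrary units) for each measured component:
        sd = np.array([0.05, 0.06, 0.02, 0.20])        # body-mass term deliberately largest
        cov = np.zeros((4, 4))
        np.fill_diagonal(cov, sd ** 2)
        cov[0, 1] = cov[1, 0] = 0.0005                 # small interaction (covariance) term

        total_var = cov.sum()                          # variance of the sum of all terms
        mass_share = sd[3] ** 2 / total_var
        cov_share = (cov.sum() - np.trace(cov)) / total_var
        print(f"total sd = {np.sqrt(total_var):.3f}")
        print(f"body-mass term accounts for {mass_share:.0%}, covariances for {cov_share:.0%}")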

  14. Standard Errors for Matrix Correlations.

    ERIC Educational Resources Information Center

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  15. Preventing medication errors in cancer chemotherapy.

    PubMed

    Cohen, M R; Anderson, R W; Attilio, R M; Green, L; Muller, R J; Pruemer, J M

    1996-04-01

    Recommendations for preventing medication errors in cancer chemotherapy are made. Before a health care provider is granted privileges to prescribe, dispense, or administer antineoplastic agents, he or she should undergo a tailored educational program and possibly testing or certification. Appropriate reference materials should be developed. Each institution should develop a dose-verification process with as many independent checks as possible. A detailed checklist covering prescribing, transcribing, dispensing, and administration should be used. Oral orders are not acceptable. All doses should be calculated independently by the physician, the pharmacist, and the nurse. Dosage limits should be established and a review process set up for doses that exceed the limits. These limits should be entered into pharmacy computer systems, listed on preprinted order forms, stated on the product packaging, placed in strategic locations in the institution, and communicated to employees. The prescribing vocabulary must be standardized. Acronyms, abbreviations, and brand names must be avoided and steps taken to avoid other sources of confusion in the written orders, such as trailing zeros. Preprinted antineoplastic drug order forms containing checklists can help avoid errors. Manufacturers should be encouraged to avoid or eliminate ambiguities in drug names and dosing information. Patients must be educated about all aspects of their cancer chemotherapy, as patients represent a last line of defense against errors. An interdisciplinary team at each practice site should review every medication error reported. Pharmacists should be involved at all sites where antineoplastic agents are dispensed. Although it may not be possible to eliminate all medication errors in cancer chemotherapy, the risk can be minimized through specific steps. Because of their training and experience, pharmacists should take the lead in this effort. PMID:8697025

  16. Non-acceptance of Technology Education by Teachers in the Field.

    ERIC Educational Resources Information Center

    Rogers, George E.; Mahler, Marty

    1994-01-01

    The Stages of Concern Questionnaire was completed by 45 Nebraska and 35 Idaho industrial technology teachers. Most Nebraska teachers failed to accept technology education. Although Idaho teachers had a higher acceptance rate, nearly 69% had not adopted it. (SK)

  17. Accepted scientific research works (abstracts).

    PubMed

    2014-01-01

    These are the 39 accepted abstracts for IAYT's Symposium on Yoga Research (SYR) September 24-24, 2014 at the Kripalu Center for Yoga & Health and published in the Final Program Guide and Abstracts. PMID:25645134

  18. L-286 Acceptance Test Record

    SciTech Connect

    HARMON, B.C.

    2000-01-14

    This document provides a detailed account of how the acceptance testing was conducted for Project L-286, ''200E Area Sanitary Water Plant Effluent Stream Reduction''. The testing of the L-286 instrumentation system was conducted under the direct supervision

  19. At least some errors are randomly generated (Freud was wrong)

    NASA Technical Reports Server (NTRS)

    Sellen, A. J.; Senders, J. W.

    1986-01-01

    An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three digital numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness of fit tests for a Poisson distribution for the number of errors per 50 trial interval and for an exponential distribution of the length of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task driven factors producing exogenous errors. Some errors, at least, are the result of constant probability generating mechanisms with error rate idiosyncratically determined for each subject.
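
    A quick sketch of the statistical test on simulated data: if errors come from a constant-probability generator, counts per 50-trial block should be near-Poisson (mean equal to variance) and inter-error gaps near-geometric/exponential. The error probability and trial count below are invented.

        import math
        import numpy as np

        rng = np.random.default_rng(2)
        P_ERR, N_TRIALS, BLOCK = 0.05, 100_000, 50
        errors = rng.random(N_TRIALS) < P_ERR          # constant-probability error process

        counts = errors.reshape(-1, BLOCK).sum(axis=1)
        lam = counts.mean()
        print(f"block counts: mean {lam:.2f}, variance {counts.var():.2f} (Poisson predicts equal)")

        for k in range(6):                             # compare empirical pmf to Poisson pmf
            observed = np.mean(counts == k)
            poisson = math.exp(-lam) * lam ** k / math.factorial(k)
            print(f"  P(count={k}): observed {observed:.3f} vs Poisson {poisson:.3f}")

        gaps = np.diff(np.flatnonzero(errors))         # intervals between successive errors
        print(f"inter-error gaps: mean {gaps.mean():.1f}, sd {gaps.std():.1f} "
              f"(exponential predicts equal)")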

  20. Dose error analysis for a scanned proton beam delivery system.

    PubMed

    Coutrakon, G; Wang, N; Miller, D W; Yang, Y

    2010-12-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm³ target of uniform water equivalent density with 8 cm spread-out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian-shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy. PMID:21076200
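
    A stripped-down, one-dimensional sketch of the Monte Carlo procedure described above, with invented beam parameters: a row of Gaussian pencil-beam spots is delivered repeatedly with random position and intensity errors, and the rms dose deviation is computed per voxel.

        import numpy as np

        rng = np.random.default_rng(3)
        voxels = np.linspace(0, 80, 321)           # 80 mm target, 0.25 mm grid
        spot_centers = np.arange(5, 80, 5.0)       # spot spacing 5 mm
        SIGMA = 4.0                                # pencil beam sigma, mm
        POS_SD, INT_SD = 0.5, 0.01                 # delivery error magnitudes

        def deliver(pos_err=0.0, int_err=0.0):
            centers = spot_centers + rng.normal(0, pos_err, spot_centers.size)
            weights = 1.0 + rng.normal(0, int_err, spot_centers.size)
            return (weights[:, None] *
                    np.exp(-0.5 * ((voxels - centers[:, None]) / SIGMA) ** 2)).sum(axis=0)

        nominal = deliver()                        # error-free reference delivery
        doses = np.array([deliver(POS_SD, INT_SD) for _ in range(200)])
        rms_error = np.sqrt(((doses - nominal) ** 2).mean(axis=0))

        core = (voxels > 10) & (voxels < 70)       # ignore the penumbra at the edges
        print(f"max rms dose error in target core: "
              f"{100 * rms_error[core].max() / nominal[core].mean():.1f}% of mean core dose")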

  1. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  2. Grammatical Errors and Communication Breakdown.

    ERIC Educational Resources Information Center

    Tomiyama, Machiko

    This study investigated the relationship between grammatical errors and communication breakdown by examining native speakers' ability to correct grammatical errors. The assumption was that communication breakdown exists to a certain degree if a native speaker cannot correct the error or if the correction distorts the information intended to be…

  3. Statistical mechanics of error-correcting codes

    NASA Astrophysics Data System (ADS)

    Kabashima, Y.; Saad, D.

    1999-01-01

    We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.

  4. Errors inducing radiation overdoses.

    PubMed

    Grammaticos, Philip C

    2013-01-01

    There is no doubt that equipment that emits radiation for therapeutic purposes should be checked often for the possibility of administering radiation overdoses to patients. Technologists, radiation safety officers, radiologists, medical physicists, healthcare providers, and administrators should take proper care on this issue. "We must be beneficial and not harmful to the patients," according to the Hippocratic doctrine. Cases of radiation overdose are reported often; in a recently reported series of such cases, the doctors who were responsible received heavy punishments. It is much better to prevent an error or a disease than to treat it. A personal smart card or score card has been suggested for every patient undergoing therapeutic and/or diagnostic procedures that use radiation. Taxonomy may also help. PMID:24251304

  5. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was employed simultaneously. The results of the two methods differed by slightly less than an order of magnitude. The measurement uncertainties and other errors in the results of the two methods are examined.

  6. Medical device error.

    PubMed

    Goodman, Gerald R

    2002-12-01

    This article discusses principal concepts for the analysis, classification, and reporting of problems involving medical device technology. We define a medical device in regulatory terminology and define and discuss concepts and terminology used to distinguish the causes and sources of medical device problems. Database classification systems for medical device failure tracking are presented, as are sources of information on medical device failures. The importance of near-accident reporting is discussed to alert users that reported medical device errors are typically limited to those that have caused an injury or death. This can represent only a fraction of the true number of device problems. This article concludes with a summary of the most frequently reported medical device failures by technology type, clinical application, and clinical setting. PMID:12400632

  7. Workload and environmental factors in hospital medication errors.

    PubMed

    Roseman, C; Booker, J M

    1995-01-01

    Nine hospital workload factors and seasonal changes in daylight and darkness were examined over a 5-year period in relation to nurse medication errors at a medical center in Anchorage, Alaska. Three workload factors, along with darkness, were found to be significant predictors of the risk of medication error. Errors increased with the number of patient days per month (OR/250 patient days = 1.61) and the number of shifts worked by temporary nursing staff (OR/10 shifts = 1.15); errors decreased with more overtime worked by permanent nursing staff members (OR/10 shifts = .85). Medication errors were 95% more likely in midwinter than in the fall, but the effect of increasing darkness was strongest; a 2-month delay was found between the level of darkness and the rate of errors. More than half of all medication errors occurred during the first 3 months of the year. PMID:7624233

  8. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  9. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
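
    The three steps map naturally onto a small least-squares problem. In the sketch below, the linear (symmetric) volumetric error model, the random point-to-point length measurements and all coefficient values are invented for illustration; only the overall model-measure-optimize structure follows the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Step 1 -- model: assume the volumetric error is linear in position,
    # err(p) = E @ p. Length measurements only constrain the symmetric part
    # of E (scale and squareness terms), so E is taken symmetric here.
    E_true = np.array([[ 50e-6, 10e-6,  0.0],
                       [ 10e-6,-30e-6,  5e-6],
                       [ 0.0,    5e-6, 40e-6]])   # illustrative coefficients

    # Step 2 -- acquire: lengths between random point pairs in a 0.5 m cube,
    # with 0.5 um noise standing in for the artifact measurements.
    def measured_length(a, b):
        return np.linalg.norm((b + E_true @ b) - (a + E_true @ a)) \
               + rng.normal(0.0, 0.5e-6)

    pairs = [(rng.uniform(0, 0.5, 3), rng.uniform(0, 0.5, 3)) for _ in range(200)]

    # Step 3 -- optimize: linearised model, L_meas - L_nom ~ u . (E d),
    # with d = b - a and u = d / |d|, solved for E by least squares.
    A, y = [], []
    for a, b in pairs:
        d = b - a
        L = np.linalg.norm(d)
        u = d / L
        A.append(np.outer(u, d).ravel())   # d(length)/dE_ij = u_i d_j
        y.append(measured_length(a, b) - L)

    E_fit = np.linalg.lstsq(np.array(A), np.array(y), rcond=None)[0].reshape(3, 3)
    print(np.round(E_fit * 1e6, 1))        # recovered coefficients (ppm)
    ```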

  10. 2013 SYR Accepted Poster Abstracts.

    PubMed

    2013-01-01

    Promote Health and Well-being Among Middle School Educators. 20. A Systematic Review of Yoga-based Interventions for Objective and Subjective Balance Measures. 21. Disparities in Yoga Use: A Multivariate Analysis of 2007 National Health Interview Survey Data. 22. Implementing Yoga Therapy Adapted for Older Veterans Who Are Cancer Survivors. 23. Randomized, Controlled Trial of Yoga for Women With Major Depressive Disorder: Decreased Ruminations as Potential Mechanism for Effects on Depression? 24. Yoga Beyond the Metropolis: A Yoga Telehealth Program for Veterans. 25. Yoga Practice Frequency, Relationship Maintenance Behaviors, and the Potential Mediating Role of Relationally Interdependent Cognition. 26. Effects of Medical Yoga in Quality of Life, Blood Pressure, and Heart Rate in Patients With Paroxysmal Atrial Fibrillation. 27. Yoga During School May Promote Emotion Regulation Capacity in Adolescents: A Group Randomized, Controlled Study. 28. Integrated Yoga Therapy in a Single Session as a Stress Management Technique in Comparison With Other Techniques. 29. Effects of a Classroom-based Yoga Intervention on Stress and Attention in Second and Third Grade Students. 30. Improving Memory, Attention, and Executive Function in Older Adults with Yoga Therapy. 31. Reasons for Starting and Continuing Yoga. 32. Yoga and Stress Management May Buffer Against Sexual Risk-Taking Behavior Increases in College Freshmen. 33. Whole-systems Ayurveda and Yoga Therapy for Obesity: Outcomes of a Pilot Study. 34. Women's Phenomenological Experiences of Exercise, Breathing, and the Body During Yoga for Smoking Cessation Treatment. 35. Mindfulness as a Tool for Trauma Recovery: Examination of a Gender-responsive Trauma-informed Integrative Mindfulness Program for Female Inmates. 36. Yoga After Stroke Leads to Multiple Physical Improvements. 37. Tele-Yoga in Patients With Chronic Obstructive Pulmonary Disease and Heart Failure: A Mixed-methods Study of Feasibility, Acceptability, and Safety

  11. Holography optical memory recorded with error correcting bits

    NASA Astrophysics Data System (ADS)

    Song, J. H.; Moon, I.; Lee, Y. H.

    2014-06-01

    A novel error correction method is proposed for volume holographic memory systems. In this method, the information of two adjacent binary bits is recorded in the space between the two bits and is used to correct errors in the data bits. The new method is compared with a (15, 5) Reed-Solomon code at the same redundancy of 200%. It is shown that the new method achieves a bit error rate similar to that of the RS code.

  12. Social aspects of clinical errors.

    PubMed

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors. PMID:19201405

  13. Experimental quantum error correction with high fidelity

    NASA Astrophysics Data System (ADS)

    Zhang, Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-09-01

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ~ε². In the current work we reproduce a similar experiment using control techniques that have since been developed, such as the pulses generated by the gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.
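
    The ε to ~ε² improvement is the signature of a distance-3 code. A toy illustration with classical bits (a 3-bit repetition code under independent flips, not the NMR experiment itself):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def logical_error_rate(eps, trials=200_000):
        """Majority-vote decoding of a 3-bit repetition code under
        independent bit flips with probability eps per bit."""
        flips = rng.random((trials, 3)) < eps
        return np.mean(flips.sum(axis=1) >= 2)   # decoding fails on >= 2 flips

    for eps in (0.01, 0.05, 0.10):
        print(f"eps={eps:.2f}: simulated {logical_error_rate(eps):.5f}, "
              f"analytic {3 * eps**2 - 2 * eps**3:.5f}")   # ~eps^2 for small eps
    ```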

  14. Experimental quantum error correction with high fidelity

    SciTech Connect

    Zhang Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-09-15

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ~ε². In the current work we reproduce a similar experiment using control techniques that have since been developed, such as the pulses generated by the gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.

  15. Superdense coding interleaved with forward error correction

    DOE PAGESBeta

    Humble, Travis S.; Sadlier, Ronald J.

    2016-05-12

    Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
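
    The role of interleaving can be seen with a toy block interleaver: codewords are written into rows and transmitted column by column, so a burst of consecutive channel errors is spread across many codewords. The depth and sizes below are arbitrary.

    ```python
    import numpy as np

    def interleave(symbols, depth):
        """Write codewords into rows, read out columns: adjacent channel
        symbols then belong to different codewords."""
        return symbols.reshape(depth, -1).T.ravel()

    def deinterleave(symbols, depth):
        n = symbols.size // depth
        return symbols.reshape(n, depth).T.ravel()

    depth, n = 4, 8                       # 4 codewords of 8 symbols each
    data = np.arange(depth * n)
    tx = interleave(data, depth)
    corrupted = tx.copy()
    corrupted[10:14] = -1                 # a burst of 4 consecutive errors
    rx = deinterleave(corrupted, depth)
    for k in range(depth):
        errs = np.sum(rx[k * n:(k + 1) * n] == -1)
        print(f"codeword {k}: {errs} symbol error(s)")   # burst spread out
    ```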

  16. Effects of Correlated Errors on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, Andres; Jacobs, C. S.

    2011-01-01

    As thermal errors are reduced, instrumental and troposphere correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.

  17. Scaling prediction errors to reward variability benefits error-driven learning in humans

    PubMed Central

    Schultz, Wolfram

    2015-01-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease “adapters'” accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. PMID:26180123
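
    A minimal delta-rule sketch of the two mechanisms the models above incorporate, a decaying learning rate and prediction errors divided by the reward standard deviation; the reward distributions are invented, and the point is only that scaled prediction errors have comparable magnitude across variability levels.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def delta_rule(rewards, sigma, scale_by_sigma):
        """Trial-by-trial prediction of reward with a decaying learning rate.
        Optionally the prediction error is divided by the reward SD before
        it drives the update (the 'adapter' behaviour)."""
        v, deltas = 0.0, []
        for t, r in enumerate(rewards, start=1):
            delta = r - v                   # reward prediction error
            if scale_by_sigma:
                delta /= sigma              # rescale error to variability
            deltas.append(delta)
            v += (1.0 / t) * delta          # decaying learning rate
        return v, np.mean(np.abs(deltas))

    for sigma in (1.0, 5.0):
        rewards = rng.normal(10.0, sigma, 200)
        _, raw = delta_rule(rewards, sigma, scale_by_sigma=False)
        _, scaled = delta_rule(rewards, sigma, scale_by_sigma=True)
        print(f"sigma={sigma}: mean |PE| raw={raw:.2f}, scaled={scaled:.2f}")
    ```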

  18. Analysis of the "naming game" with learning errors in communications.

    PubMed

    Lou, Yang; Chen, Guanrong

    2015-01-01

    The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but markedly increase the memory required of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning error rate above which convergence is impaired. The new findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective. PMID:26178457
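
    A minimal version of such a model is easy to simulate. The sketch below implements a basic naming game on a complete graph with a per-interaction learning-error probability (a mis-heard word is stored as a brand-new word); the topology and parameter values are placeholders, not those of the NGLE model.

    ```python
    import random

    random.seed(4)

    N, ERR, MAX_STEPS = 100, 0.05, 300_000   # agents, learning-error rate
    vocab = [set() for _ in range(N)]
    fresh = iter(range(10**9))               # supply of brand-new words

    for step in range(1, MAX_STEPS + 1):
        s, h = random.sample(range(N), 2)    # speaker, hearer (complete graph)
        if not vocab[s]:
            vocab[s].add(next(fresh))        # speaker invents a word
        word = random.choice(tuple(vocab[s]))
        if random.random() < ERR:
            vocab[h].add(next(fresh))        # learning error: distorted word stored
        elif word in vocab[h]:
            vocab[s] = {word}                # success: both collapse to the word
            vocab[h] = {word}
        else:
            vocab[h].add(word)               # failure: hearer memorizes the word
        if step % 50_000 == 0:
            total_words = len(set().union(*vocab))
            max_memory = max(len(v) for v in vocab)
            print(step, total_words, max_memory)   # lexicon and memory load
    ```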

  19. Errors Affect Hypothetical Intertemporal Food Choice in Women

    PubMed Central

    Sellitto, Manuela; di Pellegrino, Giuseppe

    2014-01-01

    Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534

  20. The 13 errors.

    PubMed

    Flower, J

    1998-01-01

    The reality is that most change efforts fail. McKinsey & Company carried out a fascinating research project on change to "crack the code" on creating and managing change in large organizations. One of the questions they asked--and answered--is why most organizations fail in their efforts to manage change. They found that 80 percent of these failures could be traced to 13 common errors. They are: (1) No winning strategy; (2) failure to make a compelling and urgent case for change; (3) failure to distinguish between decision-driven and behavior-dependent change; (4) over-reliance on structure and systems to change behavior; (5) lack of skills and resources; (6) failure to experiment; (7) leaders' inability or unwillingness to confront how they and their roles must change; (8) failure to mobilize and engage pivotal groups; (9) failure to understand and shape the informal organization; (10) inability to integrate and align all the initiatives; (11) no performance focus; (12) excessively open-ended process; and (13) failure to make the whole process transparent and meaningful to individuals. PMID:10351717

  1. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
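
    A toy stand-in for the gridded comparison: two binarized fields D(i,j;n) and d(i,j;n), a crude west-to-east boundary extraction, and per-time agreement scores. The synthetic "sea-breeze front" and grid sizes are invented, and CEM's actual boundary-verification and image-erosion subalgorithms are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy stand-ins for the gridded, binarized wind fields: D(i, j; n) from
    # the forecast and d(i, j; n) from observations; 1 = onshore, 0 = offshore.
    ny, nx, nt = 40, 40, 12
    D = np.zeros((nt, ny, nx), dtype=int)
    d = np.zeros_like(D)
    for n in range(nt):
        front = 5 + 2 * n                          # synthetic sea-breeze front
        D[n, :, :front] = 1
        d[n, :, :front + rng.integers(-2, 3)] = 1  # observed front, displaced

    def boundary(field):
        """Cells where the binary field changes along west-to-east; a crude
        stand-in for a sea-breeze boundary."""
        return np.diff(field, axis=1) != 0

    for n in range(nt):
        agree = np.mean(D[n] == d[n])
        overlap = np.mean(boundary(D[n]) & boundary(d[n]))
        print(f"t={n:2d}: cell agreement {agree:.2f}, boundary overlap {overlap:.3f}")
    ```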

  2. 5 CFR 846.724 - Belated elections and correction of administrative errors.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... administrative errors. 846.724 Section 846.724 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED... Open Enrollment Elections Election Procedures § 846.724 Belated elections and correction of administrative errors. (a) Belated elections. The employing office may accept a belated election of FERS...

  3. 5 CFR 846.724 - Belated elections and correction of administrative errors.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... administrative errors. 846.724 Section 846.724 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED... Open Enrollment Elections Election Procedures § 846.724 Belated elections and correction of administrative errors. (a) Belated elections. The employing office may accept a belated election of FERS...

  4. Refractive Error, Axial Length, and Relative Peripheral Refractive Error before and after the Onset of Myopia

    PubMed Central

    Mutti, Donald O.; Hayes, John R.; Mitchell, G. Lynn; Jones, Lisa A.; Moeschberger, Melvin L.; Cotter, Susan A.; Kleinstein, Robert N.; Manny, Ruth E.; Twelker, J. Daniel; Zadnik, Karla

    2009-01-01

    Purpose To evaluate refractive error, axial length, and relative peripheral refractive error before, during the year of, and after the onset of myopia in children who became myopic compared with emmetropes. Methods Subjects were 605 children 6 to 14 years of age who became myopic (at least −0.75 D in each meridian) and 374 emmetropic (between −0.25 D and + 1.00 D in each meridian at all visits) children participating between 1995 and 2003 in the Collaborative Longitudinal Evaluation of Ethnicity and Refractive Error (CLEERE) Study. Axial length was measured annually by A-scan ultrasonography. Relative peripheral refractive error (the difference between the spherical equivalent cycloplegic autorefraction 30° in the nasal visual field and in primary gaze) was measured using either of two autorefractors (R-1; Canon, Lake Success, NY [no longer manufactured] or WR 5100-K; Grand Seiko, Hiroshima, Japan). Refractive error was measured with the same autorefractor with the subjects under cycloplegia. Each variable in children who became myopic was compared to age-, gender-, and ethnicity-matched model estimates of emmetrope values for each annual visit from 5 years before through 5 years after the onset of myopia. Results In the sample as a whole, children who became myopic had less hyperopia and longer axial lengths than did emmetropes before and after the onset of myopia (4 years before through 5 years after for refractive error and 3 years before through 5 years after for axial length; P < 0.0001 for each year). Children who became myopic had more hyperopic relative peripheral refractive errors than did emmetropes from 2 years before onset through 5 years after onset of myopia (P < 0.002 for each year). The fastest rate of change in refractive error, axial length, and relative peripheral refractive error occurred during the year before onset rather than in any year after onset. Relative peripheral refractive error remained at a consistent level of hyperopia each

  5. Understanding the acceptance factors of an Hospital Information System: evidence from a French University Hospital

    PubMed Central

    Ologeanu-Taddei, R.; Morquin, D.; Domingo, H.; Bourret, R.

    2015-01-01

    The goal of this study was to examine the perceived usefulness, perceived ease of use and perceived behavioral control of a Hospital Information System (HIS) for the care staff. We administered a questionnaire composed of open-ended and closed questions, based on the main concepts of the Technology Acceptance Model. As a result, perceived usefulness (PU), ease of use and behavioral control (PBC; self-efficacy and organizational support) are correlated with medical occupation. As an example, we found that half of the medical secretaries consider the HIS easy to use, in contrast to the anesthesiologists, surgeons and physicians. Medical secretaries also reported the highest rate of PBC and a high rate of PU. Pharmacists reported the highest rate of PU but a low rate of PBC, similar to that of the surgeons and physicians. Content analysis of the open questions highlights factors influencing these constructs: ergonomics, errors in the documenting process, and insufficient compatibility with the medical department or occupational group. Consequently, we suggest that the gap between the perceptions of the different occupational groups may be explained by the use of different modules and by the interdependency of the care staff. PMID:26958237

  6. Defining acceptable conditions in wilderness

    NASA Astrophysics Data System (ADS)

    Roggenbuck, J. W.; Williams, D. R.; Watson, A. E.

    1993-03-01

    The limits of acceptable change (LAC) planning framework recognizes that forest managers must decide what indicators of wilderness conditions best represent resource naturalness and high-quality visitor experiences and how much change from the pristine is acceptable for each indicator. Visitor opinions on the aspects of the wilderness that have great impact on their experience can provide valuable input to selection of indicators. Cohutta, Georgia; Caney Creek, Arkansas; Upland Island, Texas; and Rattlesnake, Montana, wilderness visitors have high shared agreement that littering and damage to trees in campsites, noise, and seeing wildlife are very important influences on wilderness experiences. Camping within sight or sound of other people influences experience quality more than do encounters on the trails. Visitors’ standards of acceptable conditions within wilderness vary considerably, suggesting a potential need to manage different zones within wilderness for different clientele groups and experiences. Standards across wildernesses, however, are remarkably similar.

  7. From requirements to acceptance tests

    NASA Technical Reports Server (NTRS)

    Baize, Lionel; Pasquier, Helene

    1993-01-01

    From user requirements definition to the accepted software system, software project management wants to be sure that the system will meet the requirements. For the development of a telecommunications satellite Control Centre, C.N.E.S. has used new rules to make the use of a tracing matrix easier. From Requirements to Acceptance Tests, each item of a document must have an identifier. A unique matrix traces the system and allows the tracking of the consequences of a change in the requirements. A tool has been developed to import documents into a relational database. Each record of the database corresponds to an item of a document; the access key is the item identifier. The tracing matrix is also processed, automatically providing links between the different documents. It enables traced items to be read on the same screen. For example, one can read simultaneously the User Requirements items, the corresponding Software Requirements items and the Acceptance Tests.

  8. Avoiding Evaluation Errors: Fairness in Appraising Employee Performance.

    ERIC Educational Resources Information Center

    Hartzell, Gary N.

    1995-01-01

    Rating errors in personnel evaluation are universal but may be minimized if administrators are aware of them. Research identifies seven types of error: unwarranted strictness, unwarranted leniency, central tendency, halo effect, recency, contrast, and attribution. Countermeasures include developing clear term definitions, using multiple raters,…

  9. 7 CFR 6.35 - Correction of errors.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 1 2013-01-01 2013-01-01 false Correction of errors. 6.35 Section 6.35 Agriculture Office of the Secretary of Agriculture IMPORT QUOTAS AND FEES Dairy Tariff-Rate Import Quota Licensing § 6.35 Correction of errors. (a) If a person demonstrates, to the satisfaction of the...

  10. 7 CFR 6.35 - Correction of errors.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 1 2010-01-01 2010-01-01 false Correction of errors. 6.35 Section 6.35 Agriculture Office of the Secretary of Agriculture IMPORT QUOTAS AND FEES Dairy Tariff-Rate Import Quota Licensing § 6.35 Correction of errors. (a) If a person demonstrates, to the satisfaction of the...

  11. 7 CFR 6.35 - Correction of errors.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 1 2011-01-01 2011-01-01 false Correction of errors. 6.35 Section 6.35 Agriculture Office of the Secretary of Agriculture IMPORT QUOTAS AND FEES Dairy Tariff-Rate Import Quota Licensing § 6.35 Correction of errors. (a) If a person demonstrates, to the satisfaction of the...

  12. Results of software error-data experiments

    NASA Technical Reports Server (NTRS)

    Finelli, George B.

    1988-01-01

    In order to evaluate existing software reliability models and proposed modeling approaches, a search was conducted for data on the software failure process. This search revealed that the data necessary for this evaluation were not available. As a result, a research effort was initiated by NASA to generate data on which to base the development of credible methods for assessing the reliability of software targeted for flight-crucial applications. Two sets of software error-data experiments were conducted by different research groups. The results of the experiments were consistent: errors caused by different faults in a program occurred at widely varying rates; program failure rates exhibited a log-linear trend with respect to the number of faults corrected; some faults were found to interact in either concealing or revealing ways; and contiguous regions of the input space which cause a program to generate errors, called error crystals, were found and characterized for some faults. Collectively, these experiments have produced information on software failure which must be accounted for in software reliability modeling approaches.
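
    The reported log-linear trend means log(failure rate) falls linearly with the number of faults corrected, which is a one-line fit; the data below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Illustrative data: failure rate observed after correcting k faults.
    k = np.arange(10)
    rate = 4.0 * 0.7 ** k * np.exp(rng.normal(0.0, 0.05, k.size))

    # A log-linear trend means log(rate) is linear in k; fit by least squares.
    slope, intercept = np.polyfit(k, np.log(rate), 1)
    print(f"fitted trend: rate(k) ~ {np.exp(intercept):.2f} * exp({slope:.3f} k)")
    ```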

  13. The MAGNEX large acceptance spectrometer

    SciTech Connect

    Cavallaro, M.; Cappuzzello, F.; Cunsolo, A.; Carbone, D.; Foti, A.

    2010-03-01

    The main features of the MAGNEX large acceptance magnetic spectrometer are described. It has a quadrupole + dipole layout and a hybrid detector located at the focal plane. The aberrations due to the large angular (50 msr) and momentum (±13%) acceptance are reduced by an accurate hardware design and then compensated by an innovative software ray-reconstruction technique. The resolutions obtained in energy, angle and mass are presented in the paper. MAGNEX has so far been used for different experiments in nuclear physics and astrophysics, confirming it to be a multipurpose device.

  14. Error, signal, and the placement of Ctenophora sister to all other animals

    PubMed Central

    Whelan, Nathan V.; Kocot, Kevin M.; Moroz, Leonid L.

    2015-01-01

    Elucidating relationships among early animal lineages has been difficult, and recent phylogenomic analyses place Ctenophora sister to all other extant animals, contrary to the traditional view of Porifera as the earliest-branching animal lineage. To date, phylogenetic support for either ctenophores or sponges as sister to other animals has been limited and inconsistent among studies. Lack of agreement among phylogenomic analyses using different data and methods obscures how complex traits, such as epithelia, neurons, and muscles evolved. A consensus view of animal evolution will not be accepted until datasets and methods converge on a single hypothesis of early metazoan relationships and putative sources of systematic error (e.g., long-branch attraction, compositional bias, poor model choice) are assessed. Here, we investigate possible causes of systematic error by expanding taxon sampling with eight novel transcriptomes, strictly enforcing orthology inference criteria, and progressively examining potential causes of systematic error while using both maximum-likelihood with robust data partitioning and Bayesian inference with a site-heterogeneous model. We identified ribosomal protein genes as possessing a conflicting signal compared with other genes, which caused some past studies to infer ctenophores and cnidarians as sister. Importantly, biases resulting from elevated compositional heterogeneity or elevated substitution rates are ruled out. Placement of ctenophores as sister to all other animals, and sponge monophyly, are strongly supported under multiple analyses, herein. PMID:25902535

  15. Error, signal, and the placement of Ctenophora sister to all other animals.

    PubMed

    Whelan, Nathan V; Kocot, Kevin M; Moroz, Leonid L; Halanych, Kenneth M

    2015-05-01

    Elucidating relationships among early animal lineages has been difficult, and recent phylogenomic analyses place Ctenophora sister to all other extant animals, contrary to the traditional view of Porifera as the earliest-branching animal lineage. To date, phylogenetic support for either ctenophores or sponges as sister to other animals has been limited and inconsistent among studies. Lack of agreement among phylogenomic analyses using different data and methods obscures how complex traits, such as epithelia, neurons, and muscles evolved. A consensus view of animal evolution will not be accepted until datasets and methods converge on a single hypothesis of early metazoan relationships and putative sources of systematic error (e.g., long-branch attraction, compositional bias, poor model choice) are assessed. Here, we investigate possible causes of systematic error by expanding taxon sampling with eight novel transcriptomes, strictly enforcing orthology inference criteria, and progressively examining potential causes of systematic error while using both maximum-likelihood with robust data partitioning and Bayesian inference with a site-heterogeneous model. We identified ribosomal protein genes as possessing a conflicting signal compared with other genes, which caused some past studies to infer ctenophores and cnidarians as sister. Importantly, biases resulting from elevated compositional heterogeneity or elevated substitution rates are ruled out. Placement of ctenophores as sister to all other animals, and sponge monophyly, are strongly supported under multiple analyses, herein. PMID:25902535

  16. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  17. [Error factors in spirometry].

    PubMed

    Quadrelli, S A; Montiel, G C; Roncoroni, A J

    1994-01-01

    Spirometry is the method most frequently used to estimate pulmonary function in the clinical laboratory. Complying with technical requisites is important in order to approximate the real values sought, as is adequate interpretation of results. Recommendations are made to establish: 1) quality control; 2) a definition of abnormality; 3) a classification of the change from normal and its degree; 4) a definition of reversibility. In relation to quality control, several criteria are pointed out, such as end of the test, back-extrapolation and extrapolated volume, in order to delineate the most common errors. Daily calibration is advised. Inspection of graphical records of the test is mandatory. The limitations of the common use of 80% of predicted values to establish abnormality are stressed, and the reasons for employing 95% confidence limits are detailed. It is important to select the reference-values equation (in view of the differences in predicted values), and it is advisable to validate the selection against normal values from the local population. In relation to defining the defect as restrictive or obstructive, the limitations of vital capacity (VC) in establishing restriction when obstruction is also present are described, as are the limitations of maximal mid-expiratory flow 25-75 (FMF 25-75) as an isolated marker of obstruction. Finally, the qualities of forced expiratory volume in 1 sec (VEF1) and the difficulties with other indicators (CVF, FMF 25-75, VEF1/CVF) in estimating reversibility after bronchodilators are evaluated, and the value of the different methods used to define reversibility (% of change from the initial value, absolute change, or % of predicted) is discussed. Clinical spirometric studies, in order to be valuable, should be performed with the same technical rigour as other, more complex studies. PMID:7990690

  18. Statistical errors in Monte Carlo estimates of systematic errors

    NASA Astrophysics Data System (ADS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
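
    The two procedures are easy to contrast on a toy observable (Python sketch; the linear dependence, parameter count and noise level are invented). Note how the MC statistical noise enters the two estimates differently, which is the trade-off the note quantifies.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Toy observable: one data bin whose content depends linearly on two
    # systematic parameters; each "MC run" carries its own statistical noise.
    def mc_run(theta, mc_noise=1.0):
        return 100.0 + 3.0 * theta[0] - 2.0 * theta[1] + rng.normal(0.0, mc_noise)

    def unisim_variance():
        """Vary one parameter at a time by one standard deviation."""
        base = mc_run(np.zeros(2))
        shifts = [mc_run(np.eye(2)[k]) - base for k in range(2)]
        return sum(s ** 2 for s in shifts)   # biased upward by the MC noise

    def multisim_variance(n_runs=100):
        """Every run draws all parameters from their expected distribution."""
        return np.var([mc_run(rng.normal(size=2)) for _ in range(n_runs)])

    print("true systematic variance:", 3.0 ** 2 + 2.0 ** 2)
    print("unisim estimate:  ", np.mean([unisim_variance() for _ in range(200)]))
    print("multisim estimate:", np.mean([multisim_variance() for _ in range(200)]))
    ```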

  19. Teratogenic inborn errors of metabolism.

    PubMed Central

    Leonard, J. V.

    1986-01-01

    Most children with inborn errors of metabolism are born healthy without malformations as the fetus is protected by the metabolic activity of the placenta. However, certain inborn errors of the fetus have teratogenic effects although the mechanisms responsible for the malformations are not generally understood. Inborn errors in the mother may also be teratogenic. The adverse effects of these may be reduced by improved metabolic control of the biochemical disorder. PMID:3540927

  20. Forward error correction for an atmospheric noise channel

    NASA Astrophysics Data System (ADS)

    Olson, Katharyn E.; Enge, Per K.

    1992-05-01

    Two Markov chains are employed to model the memory of the atmospheric noise channel. The transition probabilities for these chains are derived from atmospheric noise error processes that were recorded at 306 kHz. The models are then utilized to estimate the probability of codeword error, and these estimates are compared to codeword error rates obtained directly from the recorded error processes. These comparisons are made for the Golay code with various bit interleaving depths, and for a Reed-Solomon code with a variety of symbol interleaving depths.
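
    A common minimal form of such a model is a two-state ("good"/"bad") Markov chain, sketched below with invented transition and error probabilities rather than the recorded 306 kHz statistics; the success criterion uses the fact that the (23, 12) Golay code corrects up to 3 bit errors.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Two-state Markov channel memory: bit errors are rare in the good
    # state and frequent in the bad state. Parameters are illustrative.
    PARAMS = {"good": (0.995, 0.001),   # (P(stay in state), P(bit error))
              "bad":  (0.90,  0.30)}

    def golay_word_fails(n_bits=23, t=3):
        """True if more than t errors hit the word; the (23, 12) Golay
        code corrects up to t = 3 bit errors."""
        state, errs = "good", 0
        for _ in range(n_bits):
            stay, ber = PARAMS[state]
            errs += rng.random() < ber
            if rng.random() > stay:
                state = "bad" if state == "good" else "good"
        return errs > t

    p_fail = np.mean([golay_word_fails() for _ in range(50_000)])
    print(f"estimated codeword error probability: {p_fail:.4f}")
    ```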

  1. Error robustness evaluation of H.264/MPEG-4 AVC

    NASA Astrophysics Data System (ADS)

    Halbach, Till; Olsen, Steffen

    2004-01-01

    The robustness of the recently ratified video compression standard H.264/MPEG-4 AVC against channel errors is evaluated, with a focus on rate-distortion performance. After a brief introduction to the standard and an explanation of its error-resilience features, it is investigated how the error resilience tools of H.264 can best be deployed for packet-wise transmission as in ATM, H.323, and IP-based services. Further, the performances of two error concealment strategies for use in an H.264-conformant decoder are compared to each other.

  2. Confidence limits and their errors

    SciTech Connect

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors, in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.

  3. Compensating For GPS Ephemeris Error

    NASA Technical Reports Server (NTRS)

    Wu, Jiun-Tsong

    1992-01-01

    A method of computing the position of a user station receiving signals from the Global Positioning System (GPS) of navigational satellites compensates for most of the GPS ephemeris error. The present method enables a user station to reduce the error in its computed position substantially. The user station must have access to two or more reference stations at precisely known positions several hundred kilometers apart and must be in the neighborhood of the reference stations. The method is based on the fact that when GPS data are used to compute the baseline between a reference station and the user station, the vector error in the computed baseline is proportional to the ephemeris error and to the length of the baseline.
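
    A back-of-the-envelope illustration of that proportionality (the nominal satellite range used for scaling is an assumption, and real geometry factors are ignored):

    ```python
    # Baseline error scales roughly as
    # (baseline length / satellite range) * ephemeris error.
    SAT_RANGE_KM = 20_200    # nominal GPS orbit radius used for scaling (assumed)

    for baseline_km in (10, 100, 500):
        for eph_err_m in (2.0, 10.0):
            err_mm = baseline_km / SAT_RANGE_KM * eph_err_m * 1_000
            print(f"baseline {baseline_km:3d} km, ephemeris error {eph_err_m:4.1f} m"
                  f" -> ~{err_mm:5.1f} mm baseline error")
    ```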

  4. Retransmission error control with memory

    NASA Technical Reports Server (NTRS)

    Sindhu, P. S.

    1977-01-01

    In this paper, an error control technique that is a basic improvement over automatic repeat request (ARQ) is presented. Erroneously received blocks in an ARQ system are used for error control. The technique is termed ARQ-with-memory (MRQ). The general MRQ system is described, and simple upper and lower bounds are derived on the throughput achievable by MRQ. The performance of MRQ with respect to throughput, message delay and probability of error is compared to that of ARQ by simulating both systems using error data from a VHF satellite channel operated in the ALOHA packet broadcasting mode.
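
    The essential idea, keeping erroneous copies and combining them instead of discarding them, can be sketched with bit-wise majority voting over retransmissions. The block size, bit error rate and the majority-vote combiner below are illustrative stand-ins, not the MRQ scheme's actual combining rule.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def transmissions_needed(n_bits=256, ber=0.01, memory=True, max_tries=50):
        """Count transmissions until the receiver decides the block correctly.
        Plain ARQ discards bad copies; the memory variant majority-votes
        over an odd number of received copies."""
        tx = rng.integers(0, 2, n_bits)
        copies = []
        for t in range(1, max_tries + 1):
            rx = tx ^ (rng.random(n_bits) < ber)
            copies.append(rx)
            if memory and len(copies) >= 3 and len(copies) % 2 == 1:
                decision = (np.sum(copies, axis=0) * 2 > len(copies)).astype(int)
            else:
                decision = rx
            if np.array_equal(decision, tx):   # stand-in for an error-detecting code
                return t
        return max_tries

    arq = np.mean([transmissions_needed(memory=False) for _ in range(300)])
    mrq = np.mean([transmissions_needed(memory=True) for _ in range(300)])
    print(f"mean transmissions: ARQ {arq:.2f}, with memory {mrq:.2f}")
    ```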

  5. Medication Errors in Outpatient Pediatrics.

    PubMed

    Berrier, Kyla

    2016-01-01

    Medication errors may occur during parental administration of prescription and over-the-counter medications in the outpatient pediatric setting. Misinterpretation of medication labels and dosing errors are two types of errors in medication administration. Health literacy may play an important role in parents' ability to safely manage their child's medication regimen. There are several proposed strategies for decreasing these medication administration errors, including using standardized dosing instruments, using strictly metric units for medication dosing, and providing parents and caregivers with picture-based dosing instructions. Pediatric healthcare providers should be aware of these strategies and seek to implement many of them into their practices. PMID:27537086

  6. Physical examination. Frequently observed errors.

    PubMed

    Wiener, S; Nathanson, M

    1976-08-16

    A method allowing for direct observation of intern and resident physicians while interviewing and examining patients has been in use on our medical wards for the last five years. A large number of errors in the performance of the medical examination by young physicians were noted and a classification of these errors into those of technique, omission, detection, interpretation, and recording was made. An approach to detection and correction of each of these kinds of errors is presented, as well as a discussion of possible reasons for the occurrence of these errors in physician performance. PMID:947266

  7. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct the underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing the propagation of human error.

  8. Mimicking Aphasic Semantic Errors in Normal Speech Production: Evidence from a Novel Experimental Paradigm

    ERIC Educational Resources Information Center

    Hodgson, Catherine; Lambon Ralph, Matthew A.

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…

  9. False Positives in Multiple Regression: Unanticipated Consequences of Measurement Error in the Predictor Variables

    ERIC Educational Resources Information Center

    Shear, Benjamin R.; Zumbo, Bruno D.

    2013-01-01

    Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
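
    The inflation is easy to demonstrate by simulation: give the outcome a true dependence on an error-prone predictor, include a second, correlated predictor with no true effect, and count how often the null predictor tests significant. All parameter values below are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def one_study(n=200, reliability=0.6):
        t1 = rng.normal(size=n)                           # true predictor
        x2 = 0.5 * t1 + rng.normal(0, np.sqrt(0.75), n)   # correlated, no effect
        y = 1.0 * t1 + rng.normal(size=n)                 # outcome depends on t1 only
        err_var = (1.0 - reliability) / reliability
        x1 = t1 + rng.normal(0, np.sqrt(err_var), n)      # t1 measured with error

        # OLS of y on [1, x1, x2]; test H0: coefficient on x2 is zero.
        X = np.column_stack([np.ones(n), x1, x2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n - 3)
        cov = s2 * np.linalg.inv(X.T @ X)
        t_stat = beta[2] / np.sqrt(cov[2, 2])
        return abs(t_stat) > 1.97          # ~5% two-sided critical value, df=197

    rate = np.mean([one_study() for _ in range(2000)])
    print(f"false-positive rate for the null predictor: {rate:.3f} (nominal 0.05)")
    ```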

  10. Knowledge of healthcare professionals about medication errors in hospitals

    PubMed Central

    Abdel-Latif, Mohamed M. M.

    2016-01-01

    Context: Medication errors are the most common type of medical error in hospitals and a leading cause of morbidity and mortality among patients. Aims: The aim of the present study was to assess the knowledge of healthcare professionals about medication errors in hospitals. Settings and Design: A self-administered questionnaire was distributed to randomly selected healthcare professionals in eight hospitals in Madinah, Saudi Arabia. Subjects and Methods: An 18-item survey was designed and comprised questions on demographic data, knowledge of medication errors, availability of reporting systems in hospitals, attitudes toward error reporting, and causes of medication errors. Statistical Analysis Used: Data were analyzed with Statistical Package for the Social Sciences software Version 17. Results: A total of 323 healthcare professionals completed the questionnaire (a 64.6% response rate): 138 (42.72%) physicians, 34 (10.53%) pharmacists, and 151 (46.75%) nurses. A majority of the participants had good knowledge of the medication error concept and its dangers to patients. Only 68.7% of them were aware of reporting systems in hospitals. Healthcare professionals revealed that there was no clear mechanism available for reporting errors in most hospitals. Prescribing (46.5%) and administration (29%) errors were the main causes of errors. The drugs most frequently involved in medication errors were antihypertensives, antidiabetics, antibiotics, digoxin, and insulin. Conclusions: This study revealed differences in awareness among healthcare professionals toward medication errors in hospitals. The poor knowledge about medication errors emphasizes the urgent necessity to adopt appropriate measures to raise awareness about medication errors in Saudi hospitals. PMID:27330261

  11. Acceptance test procedure for High Pressure Water Jet System

    SciTech Connect

    Crystal, J.B.

    1995-05-30

    The overall objective of the acceptance test is to demonstrate a combined system, including the associated tools and equipment necessary to perform cleaning in the 105 K East Basin (KE), achieving optimum reduction in the level of contamination/dose rate on canisters prior to their removal from the KE Basin and subsequent packaging for disposal. Acceptance tests shall include the hardware necessary to achieve acceptance of the canister-cleaning phase. This acceptance test procedure defines the acceptance testing criteria for the high-pressure water jet cleaning fixture. The focus of this procedure is to provide guidelines and instructions to control, evaluate and document the acceptance testing for cleaning effectiveness and for the method(s) of removing the contaminated surface layer from the canisters presently identified in the KE Basin. Additionally, the desired result of the acceptance test is to deliver to K Basins a thoroughly tested and proven system for underwater decontamination and dose reduction. This report discusses the acceptance test procedure for the High Pressure Water Jet.

  12. Nitrogen trailer acceptance test report

    SciTech Connect

    Kostelnik, A.J.

    1996-02-12

    This Acceptance Test Report documents compliance with the requirements of specification WHC-S-0249. The equipment was tested according to WHC-SD-WM-ATP-108 Rev. 0. The equipment being tested is a portable contained nitrogen supply. The test was conducted at Norco's facility.

  13. Helping Our Children Accept Themselves.

    ERIC Educational Resources Information Center

    Gamble, Mae

    1984-01-01

    Parents of a child with muscular dystrophy recount their reactions to learning of the diagnosis, their gradual acceptance, and their son's resistance, which was gradually lessened when he was provided with more information and treated more normally as a member of the family. (CL)

  14. Euthanasia Acceptance: An Attitudinal Inquiry.

    ERIC Educational Resources Information Center

    Klopfer, Fredrick J.; Price, William F.

    The study presented was conducted to examine potential relationships between attitudes regarding the dying process, including acceptance of euthanasia, and other attitudinal or demographic attributes. The survey data comprised responses given by 331 respondents to a door-to-door interview. Results are discussed in terms of preferred…

  15. Critical evidence for the prediction error theory in associative learning

    PubMed Central

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-01-01

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an “auto-blocking”, which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning. PMID:25754125

  16. Critical evidence for the prediction error theory in associative learning.

    PubMed

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-01-01

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning. PMID:25754125

  17. Error evaluation for difference approximations to ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Hammond, J. L., Jr.

    1971-01-01

    The method involves relationships between the errors introduced by using finite sampling rates and the parameters describing the specific numerical method used. The procedure is used in the design and analysis of digital filters and simulators.

  18. A posteriori error estimator and error control for contact problems

    NASA Astrophysics Data System (ADS)

    Weiss, Alexander; Wohlmuth, Barbara I.

    2009-09-01

    In this paper, we consider two error estimators for one-body contact problems. The first error estimator is defined in terms of H(div)-conforming stress approximations and equilibrated fluxes while the second is a standard edge-based residual error estimator without any modification with respect to the contact. We show reliability and efficiency for both estimators. Moreover, the error is bounded by the first estimator with a constant one plus a higher order data oscillation term plus a term arising from the contact that is shown numerically to be of higher order. The second estimator is used in a control-based AFEM refinement strategy, and the decay of the error in the energy is shown. Several numerical tests demonstrate the performance of both estimators.

  19. Mapping DNA polymerase errors by single-molecule sequencing.

    PubMed

    Lee, David F; Lu, Jenny; Chang, Seungwoo; Loparo, Joseph J; Xie, Xiaoliang S

    2016-07-27

    Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. This allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases. PMID:27185891
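
    The barcoding logic reduces to grouping reads by barcode and taking a per-position majority vote, so that sequencing errors (which differ between reads) are voted out while a polymerase error (shared by all reads of one product) survives. The toy reads below are invented:

    ```python
    from collections import Counter

    def consensus(reads):
        """Per-position majority vote across reads sharing one barcode."""
        return "".join(Counter(col).most_common(1)[0][0] for col in zip(*reads))

    # Two replication products, each tagged with a unique barcode and
    # sequenced three times. Sequencing errors (lowercase, for the reader
    # only) differ between reads, so the vote removes them; a polymerase
    # error is shared by all reads of a product and survives the vote.
    template = "ACGTACGT"
    reads_by_barcode = {
        "BC01": ["ACGTACGT", "ACcTACGT", "ACGTACaT"],   # sequencing errors only
        "BC02": ["ACGTTCGT", "ACGTTCGT", "ACGTTCtT"],   # A->T polymerase error at pos 4
    }

    for bc, reads in reads_by_barcode.items():
        cons = consensus(r.upper() for r in reads)
        errs = [(i, t, c) for i, (t, c) in enumerate(zip(template, cons)) if t != c]
        print(bc, cons, "polymerase errors:", errs)
    ```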

  20. Error control coding for meteor burst channels

    NASA Astrophysics Data System (ADS)

    Frederick, T. J.; Belkerdid, M. A.; Georgiopoulos, M.

    The performance of several error control coding schemes for a meteor burst channel is studied via analysis and simulation. These coding strategies are compared using the probability of successful transmission of a fixed-size packet through a single burst as a performance measure. The coding methods are compared via simulation for several realizations of meteor bursts. It is found that, based on complexity and probability of success, fixed-rate convolutional codes with soft-decision Viterbi decoding provide better performance.