Science.gov

Sample records for acceptable error rate

  1. Accepting error to make less error.

    PubMed

    Einhorn, H J

    1986-01-01

    In this article I argue that the clinical and statistical approaches rest on different assumptions about the nature of random error and the appropriate level of accuracy to be expected in prediction. To examine this, a case is made for each approach. The clinical approach is characterized as being deterministic, causal, and less concerned with prediction than with diagnosis and treatment. The statistical approach accepts error as inevitable and in so doing makes less error in prediction. This is illustrated using examples from probability learning and equal weighting in linear models. Thereafter, a decision analysis of the two approaches is proposed. Of particular importance are the errors that characterize each approach: myths, magic, and illusions of control in the clinical; lost opportunities and illusions of the lack of control in the statistical. Each approach represents a gamble with corresponding risks and benefits.
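
    A worked example of the probability-learning point referenced above (an illustration consistent with the abstract, not material taken from the article): accepting a fixed error rate by always predicting the more frequent outcome makes fewer errors than matching prediction frequencies to outcome frequencies.

      # Illustration (not from the article): in a probability-learning task where the
      # more frequent outcome occurs with probability p, always predicting that outcome
      # accepts a fixed error rate of 1 - p, whereas "probability matching" (predicting
      # each outcome in proportion to its frequency) makes more errors on average.
      p = 0.70                                     # frequency of the more common outcome
      always_majority_error = 1 - p                # accept error: 0.30
      matching_error = p * (1 - p) + (1 - p) * p   # match frequencies: 0.42
      print(f"always predict majority: {always_majority_error:.2f} error rate")
      print(f"probability matching:    {matching_error:.2f} error rate")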

  2. Instantaneous bit-error-rate meter

    NASA Astrophysics Data System (ADS)

    Slack, Robert A.

    1995-06-01

    An instantaneous bit error rate meter provides an instantaneous, real time reading of bit error rate for digital communications data. Bit error pulses are input into the meter and are first filtered in a buffer stage to provide input impedance matching and desensitization to pulse variations in amplitude, rise time and pulse width. The bit error pulses are transformed into trigger signals for a timing pulse generator. The timing pulse generator generates timing pulses for each transformed bit error pulse, and is calibrated to generate timing pulses having a preselected pulse width corresponding to the baud rate of the communications data. An integrator generates a voltage from the timing pulses that is representative of the bit error rate as a function of the data transmission rate. The integrated voltage is then displayed on a meter to indicate the bit error rate.
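
    A minimal sketch of the arithmetic the analog meter realizes (illustrative values; the instrument itself is analog hardware, and the variable names here are assumptions): stretching each error pulse to one bit period makes the integrator output proportional to errors per transmitted bit, i.e., the bit error rate.

      # Digital restatement of what the analog meter computes (illustrative values):
      # each error pulse is stretched to one bit period (1 / baud_rate), so the
      # integrator's duty cycle equals errors per bit, i.e. the bit error rate.
      baud_rate = 1.0e6          # bits per second (assumed example value)
      observation_time = 10.0    # seconds
      error_pulse_count = 250    # error pulses counted in the observation window

      bits_transmitted = baud_rate * observation_time
      ber = error_pulse_count / bits_transmitted
      print(f"BER = {ber:.1e}")  # 2.5e-05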

  3. Irreducible error rate in aeronautical satellite channels

    NASA Technical Reports Server (NTRS)

    Davarian, F.

    1988-01-01

    The irreducible error rate in aeronautical satellite systems is experimentally investigated. It is shown that the introduction of a delay in the multipath component of a Rician channel increases the channel irreducible error rate. However, since the carrier/multipath ratio is usually large for aeronautical applications, this rise in the irreducible error rate should not be interpreted as a practical limitation of aeronautical satellite communications.

  4. Controlling type-1 error rates in whole effluent toxicity testing

    SciTech Connect

    Smith, R.; Johnson, S.C.

    1995-12-31

    A form of variability, called the dose x test interaction, has been found to affect the variability of the mean differences from control in the statistical tests used to evaluate Whole Effluent Toxicity Tests for compliance purposes. Since the dose x test interaction is not included in these statistical tests, the assumed type-1 and type-2 error rates can be incorrect. The accepted type-1 error rate for these tests is 5%. Analysis of over 100 Ceriodaphnia, fathead minnow and sea urchin fertilization tests showed that when the test x dose interaction term was not included in the calculations the type-1 error rate was inflated to as high as 20%. In a compliance setting, this problem may lead to incorrect regulatory decisions. Statistical tests are proposed that properly incorporate the dose x test interaction variance.
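
    A hedged Monte Carlo sketch of the effect described above (not the authors' data or exact statistical test): when a random dose x test interaction shifts group means but the analysis relies only on within-test replicate error, a nominally 5% test rejects a true null noticeably more often. The variance components are assumed example values.

      # Monte Carlo sketch (not the authors' analysis): under a true null of no dose
      # effect, add a random dose-x-test interaction to each group's mean and then run
      # an ordinary two-sample t-test that ignores it.  The rejection rate exceeds the
      # nominal 5% level, illustrating the inflation described in the abstract.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_tests, n_reps = 10000, 10              # simulated tests, replicates per group
      sd_interaction, sd_replicate = 0.5, 1.0  # assumed variance components

      rejections = 0
      for _ in range(n_tests):
          # each group's mean gets its own dose-x-test offset (no true dose effect)
          control = rng.normal(rng.normal(0, sd_interaction), sd_replicate, n_reps)
          dose = rng.normal(rng.normal(0, sd_interaction), sd_replicate, n_reps)
          if stats.ttest_ind(control, dose).pvalue < 0.05:
              rejections += 1

      print(f"empirical type-1 error rate: {rejections / n_tests:.3f} (nominal 0.05)")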

  5. Customization of user interfaces to reduce errors and enhance user acceptance.

    PubMed

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration.

  6. The nearest neighbor and the bayes error rates.

    PubMed

    Loizou, G; Maybank, S J

    1987-02-01

    The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities E_{k,l+1} ≤ E*(λ) ≤ E_{k,l} ≤ dE*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions E_{k,l} and dE*(λ) are equal. PMID:21869395

  7. Multicenter Assessment of Gram Stain Error Rates.

    PubMed

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. PMID:26888900

  9. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill-based decisions require the least cognitive effort and knowledge-based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  10. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  11. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  12. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  13. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  14. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart apply to the fifty States, the District...

  15. Monitoring Error Rates In Illumina Sequencing

    PubMed Central

    Manley, Leigh J.; Ma, Duanduan; Levine, Stuart S.

    2016-01-01

    Guaranteeing high-quality next-generation sequencing data in a rapidly changing environment is an ongoing challenge. The introduction of the Illumina NextSeq 500 and the deprecation of specific metrics from Illumina's Sequencing Analysis Viewer (SAV; Illumina, San Diego, CA, USA) have made it more difficult to determine directly the baseline error rate of sequencing runs. To improve our ability to measure base quality, we have created an open-source tool to construct the Percent Perfect Reads (PPR) plot, previously provided by the Illumina sequencers. The PPR program is compatible with HiSeq 2000/2500, MiSeq, and NextSeq 500 instruments and provides an alternative to Illumina's quality value (Q) scores for determining run quality. Whereas Q scores are representative of run quality, they are often overestimated and are sourced from different look-up tables for each platform. The PPR’s unique capabilities as a cross-instrument comparison device, as a troubleshooting tool, and as a tool for monitoring instrument performance can provide an increase in clarity over SAV metrics that is often crucial for maintaining instrument health. These capabilities are highlighted. PMID:27672352
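
    A simplified sketch of the Percent Perfect Reads idea (not the published tool, which works from Illumina run output): given each aligned read's mismatch positions, report the fraction of reads that remain error-free through each sequencing cycle.

      # Simplified sketch of the Percent Perfect Reads (PPR) idea (not the published
      # tool): given each aligned read's mismatch positions (0-based cycle numbers),
      # report the percentage of reads that are still error-free through cycle c.
      def percent_perfect_reads(mismatch_positions, read_length):
          n_reads = len(mismatch_positions)
          ppr = []
          for c in range(1, read_length + 1):
              perfect = sum(1 for mm in mismatch_positions if all(p >= c for p in mm))
              ppr.append(100.0 * perfect / n_reads)
          return ppr

      # toy data: three reads of length 5; read 1 is perfect, read 2 errs at cycle 3,
      # read 3 errs at cycle 1
      print(percent_perfect_reads([[], [2], [0]], 5))
      # -> [66.7, 66.7, 33.3, 33.3, 33.3] (approximately)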

  17. Logical error rate in the Pauli twirling approximation.

    PubMed

    Katabarwa, Amara; Geller, Michael R

    2015-09-30

    Estimating the performance of error correction protocols is necessary for understanding the operation of potential quantum computers, but this requires physical error models that can be simulated efficiently with classical computers. The Gottesman-Knill theorem guarantees a class of such error models. Of these, one of the simplest is the Pauli twirling approximation (PTA), which is obtained by twirling an arbitrary completely positive error channel over the Pauli basis, resulting in a Pauli channel. In this work, we test the PTA's accuracy at predicting the logical error rate by simulating the 5-qubit code using a 9-qubit circuit with realistic decoherence and unitary gate errors. We find evidence for good agreement with exact simulation, with the PTA overestimating the logical error rate by a factor of 2 to 3. Our results suggest that the PTA is a reliable predictor of the logical error rate, at least for low-distance codes.
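
    A small numpy sketch of the Pauli twirling construction itself, applied to a single-qubit amplitude-damping channel (an illustration of the PTA, not the authors' 5-qubit-code simulation); the damping parameter gamma is an assumed example value.

      # Pauli twirling approximation (PTA) for one qubit: twirling a channel with
      # Kraus operators {K_j} over the Pauli basis yields a Pauli channel with
      # probabilities p_i = sum_j |Tr(P_i K_j)|^2 / d^2 (the chi-matrix diagonal).
      import numpy as np

      I = np.eye(2, dtype=complex)
      X = np.array([[0, 1], [1, 0]], dtype=complex)
      Y = np.array([[0, -1j], [1j, 0]])
      Z = np.array([[1, 0], [0, -1]], dtype=complex)

      def pauli_twirl(kraus_ops, d=2):
          return [sum(abs(np.trace(P @ K)) ** 2 for K in kraus_ops) / d ** 2
                  for P in (I, X, Y, Z)]

      gamma = 0.1  # amplitude-damping probability (assumed example value)
      K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
      K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)

      p_I, p_X, p_Y, p_Z = pauli_twirl([K0, K1])
      print(f"p_I={p_I:.4f} p_X={p_X:.4f} p_Y={p_Y:.4f} p_Z={p_Z:.4f}")
      # probabilities sum to 1 for a trace-preserving channel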

  18. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  19. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  20. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  1. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  2. Error Rates of Multiple F Tests in Factorial ANOVA Designs.

    ERIC Educational Resources Information Center

    Halderson, Judith S.; Glasnapp, Douglas R.

    The primary purpose of the present study was to investigate empirically the effect of multiple hypothesis testing on error rates in factorial ANOVA designs under a variety of controlled conditions. The per comparison, per experiment, and experimentwise error rates were investigated for three hypothesis testing procedures. The specific conditions…

  3. Technological Advancements and Error Rates in Radiation Therapy Delivery

    SciTech Connect

    Margalit, Danielle N.

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There

  4. 105-KE Isolation Barrier Leak Rate Acceptance Test Report

    SciTech Connect

    McCracken, K.J.

    1995-06-14

    This Acceptance Test Report (ATR) contains the completed and signed Acceptance Test Procedure (ATP) for the 105-KE Isolation Barrier Leak Rate Test. The Test Engineer's log, the completed sections of the ATP in the Appendix for Repeat Testing (Appendix K), the approved WHC J-7s (Appendix H), the data logger files (Appendices T and U), and the post test calibration checks (Appendix V) are included.

  5. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  6. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase.

    PubMed

    McInerney, Peter; Adams, Paul; Hadi, Masood Z

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition. PMID:25197572
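
    A back-of-envelope sketch of how a PCR fidelity figure of this kind is typically expressed (illustrative numbers, not data from the paper): errors per base per template doubling, with the number of doublings estimated from the fold amplification.

      # Back-of-envelope PCR error-rate arithmetic (illustrative numbers, not the
      # paper's data): errors per base per template doubling, where doublings are
      # estimated from the fold amplification achieved during PCR.
      import math

      mutations_observed = 20         # mutations found by sequencing cloned products
      bases_sequenced = 1_000_000     # total target bases read across all clones
      fold_amplification = 1.0e5      # template amplification during PCR

      doublings = math.log2(fold_amplification)   # about 16.6 doublings
      error_rate = mutations_observed / (bases_sequenced * doublings)
      print(f"error rate = {error_rate:.2e} errors / base / doubling")  # ~1.2e-06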

  7. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.

  8. Total Dose Effects on Error Rates in Linear Bipolar Systems

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent

    2007-01-01

    The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.

  9. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
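
    A sketch of the error-free-run question raised above, using standard binomial reasoning rather than the authors' full treatment: with zero codeword errors in n independent trials, the one-sided upper confidence bound on the CWER follows directly, as does the run length needed to certify a requirement.

      # Binomial reasoning behind the error-free-simulation question (a sketch, not
      # the authors' full method): with zero codeword errors in n independent trials,
      # the one-sided upper 100*(1-alpha)% confidence bound on the CWER is
      # 1 - alpha**(1/n), so certifying CWER <= p_req with confidence 1-alpha needs
      # n >= ln(alpha) / ln(1 - p_req), roughly 3/p_req at 95% confidence.
      import math

      def upper_bound_zero_errors(n, alpha=0.05):
          return 1.0 - alpha ** (1.0 / n)

      def trials_needed(p_req, alpha=0.05):
          return math.ceil(math.log(alpha) / math.log(1.0 - p_req))

      print(trials_needed(1e-6))                 # about 3.0 million error-free codewords
      print(upper_bound_zero_errors(3_000_000))  # about 1.0e-06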

  10. Aid to determining freeway metering rates and detecting loop errors

    SciTech Connect

    Nihan, N.L.

    1997-11-01

    A recent freeway congestion prediction study for the Washington Department of Transportation (WSDOT) found that the sum of storage rates over time, SumSR(t), for a freeway section was a better variable for determining the best upstream ramp metering rates than the storage rate for time interval t, SR(t), which is the current WSDOT criterion. (Use of the SumSR(t) variable for this purpose requires that the summation be started during a period of low density flows.) Another finding was that the SumSR(t) variable was a better detector of loop chattering errors than WSDOT's current criterion, which misses chattering errors that occur at normal traffic volume levels. Since calculation of SumSR(t) is easily incorporated in the current WSDOT ramp metering algorithm, the writer recommends its use in future WSDOT freeway metering schemes.

  11. PVUSA procurement, acceptance, and rating practices for photovoltaic power plants

    SciTech Connect

    Dows, R.N.; Gough, E.J.

    1995-09-01

    This report is one in a series of PVUSA reports on PVUSA experiences and lessons learned at the demonstration sites in Davis and Kerman, California, and from participating utility host sites. During the course of approximately 7 years (1988--1994), 10 PV systems have been installed ranging from 20 kW to 500 kW. Six 20-kW emerging module technology arrays, five on universal project-provided structures and one turnkey concentrator, and four turnkey utility-scale systems (200 to 500 kW) were installed. PVUSA took a very proactive approach in the procurement of these systems. In the absence of established procurement documents, the project team developed a comprehensive set of technical and commercial documents. These have been updated with each successive procurement. Working closely with vendors after the award in a two-way exchange provided designs better suited for utility applications. This report discusses the PVUSA procurement process through testing and acceptance, and rating of PV turnkey systems. Special emphasis is placed on the acceptance testing and rating methodology which completes the procurement process by verifying that PV systems meet contract requirements. Lessons learned and recommendations are provided based on PVUSA experience.

  12. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval R.; Wallman, Joel J.; Sanders, Barry C.

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates.

  14. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford and Pickel and Blandford, in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular Parallelepiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the Cosmic Ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  15. CMOS RAM cosmic-ray-induced-error-rate analysis

    NASA Technical Reports Server (NTRS)

    Pickel, J. C.; Blandford, J. T., Jr.

    1981-01-01

    A significant number of spacecraft operational anomalies are believed to be associated with cosmic-ray-induced soft errors in the LSI memories. Test programs using a cyclotron to simulate cosmic rays have established conclusively that many common commercial memory types are vulnerable to heavy-ion upset. A description is given of the methodology and the results of a detailed analysis for predicting the bit-error rate in an assumed space environment for CMOS memory devices. Results are presented for three types of commercially available CMOS 1,024-bit RAMs. It was found that the HM6508 is susceptible to single-ion induced latchup from argon and krypton ions. The HS6508 and HS6508RH and the CDP1821 apparently are not susceptible to single-ion induced latchup.

  16. Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

    This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAMs (UPD4216) and 1 Kb SRAMs (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.

  17. Controlling Rater Stringency Error in Clinical Performance Rating: Further Validation of a Performance Rating Theory.

    ERIC Educational Resources Information Center

    Cason, Gerald J.; And Others

    Prior research in a single clinical training setting has shown Cason and Cason's (1981) simplified model of their performance rating theory can improve rating reliability and validity through statistical control of rater stringency error. Here, the model was applied to clinical performance ratings of 14 cohorts (about 250 students and 200 raters)…

  18. Error-rate prediction for programmable circuits: methodology, tools and studied cases

    NASA Astrophysics Data System (ADS)

    Velazco, Raoul

    2013-05-01

    This work presents an approach to predict the error rates due to Single Event Upsets (SEU) occurring in programmable circuits as a consequence of the impact of energetic particles present in the environment in which the circuits operate. For a chosen application, the error-rate is predicted by combining the results obtained from radiation ground testing and the results of fault injection campaigns performed off-beam, during which huge numbers of SEUs are injected during the execution of the studied application. The goal of this strategy is to obtain accurate results about different applications' error rates, without using particle accelerator facilities, thus significantly reducing the cost of the sensitivity evaluation. As a case study, this methodology was applied to a complex processor, the PowerPC 7448, executing a program issued from a real space application, and to a crypto-processor application implemented in an SRAM-based FPGA and accepted to be embedded in the payload of a scientific satellite of NASA. The accuracy of predicted error rates was confirmed by comparing, for the same circuit and application, predictions with measures issued from radiation ground testing performed at the Cyclone cyclotron of the HIF (Heavy Ion Facility) of Louvain-la-Neuve (Belgium).
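
    A hedged sketch of the combination described above (the authors' exact weighting may differ): the application-level error rate is estimated as the device's static upset rate in the target environment, scaled by the fraction of injected faults that actually corrupted the application. All numeric values below are assumed examples.

      # Hedged sketch of combining ground-test and fault-injection results (the
      # authors' exact formulation may differ): application error rate = static
      # upset rate in the environment (cross-section x bits x flux) x the fraction
      # of injected SEUs that produced a wrong application result.
      sigma_static = 1.0e-8     # cm^2 per bit, from radiation ground testing (example)
      n_bits = 4.0e6            # sensitive bits used by the application (example)
      flux = 2.0e-3             # particles / cm^2 / s in the target orbit (example)
      injection_failure_fraction = 0.15  # injected SEUs that corrupted the output

      device_upset_rate = sigma_static * n_bits * flux   # upsets per second
      app_error_rate = device_upset_rate * injection_failure_fraction
      print(f"predicted application error rate = {app_error_rate:.2e} errors/s")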

  19. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    SciTech Connect

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-02-15

    conventional IMRT QA performance metrics (Gamma passing rates) and dose errors in anatomic regions-of-interest. The most common acceptance criteria and published actions levels therefore have insufficient, or at least unproven, predictive power for per-patient IMRT QA.

  20. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    SciTech Connect

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences

  1. The examination of commercial printing defects to assess common origin, batch variation, and error rate.

    PubMed

    LaPorte, Gerald M; Stephens, Joseph C; Beuchel, Amanda K

    2010-01-01

    The examination of printing defects, or imperfections, found on printed or copied documents has been recognized as a generally accepted approach for linking questioned documents to a common source. This research paper will highlight the results from two mutually exclusive studies. The first involved the examination and characterization of printing defects found in a controlled production run of 500,000 envelopes bearing text and images. It was concluded that printing defects are random occurrences and that morphological differences can be used to identify variations within the same production batch. The second part incorporated a blind study to assess the error rate of associating randomly selected envelopes from different retail locations to a known source. The examination was based on the comparison of printing defects in the security patterns found in some envelopes. The results demonstrated that it is possible to associate envelopes to a common origin with a 0% error rate.

  2. Coding gains and error rates from the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, and decompositions of the de Bruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of bit signal-to-noise ratio Eb/N0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 × 10^-8 and a BER of 1.4 × 10^-9. The (15, 1/6) code to be used by the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini Missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because these codes require higher bandwidth than the NASA (7,1/2) code, the gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.
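
    A generic illustration of how a coding gain is read from BER-versus-Eb/N0 curves (not the Galileo link budget; the operating point below is an assumed example): a gain of G dB at a target BER means the better system reaches that BER at an Eb/N0 that is G dB lower. Uncoded BPSK is shown only as a reference curve.

      # Reading a coding gain from BER-vs-Eb/N0 curves (generic illustration, not the
      # Galileo link budget): uncoded BPSK has BER = 0.5 * erfc(sqrt(Eb/N0)), and a
      # coding gain of G dB means the same BER is reached at an Eb/N0 lower by G dB.
      import math

      def bpsk_ber(ebno_db):
          ebno = 10 ** (ebno_db / 10.0)
          return 0.5 * math.erfc(math.sqrt(ebno))

      reference_ebno_db = 6.8  # assumed example operating point
      coding_gain_db = 1.5     # the BVD's quoted gain over the (7,1/2) MCD
      print(f"uncoded BPSK BER at {reference_ebno_db} dB: {bpsk_ber(reference_ebno_db):.1e}")
      print(f"with {coding_gain_db} dB of gain, the same BER is reached at "
            f"{reference_ebno_db - coding_gain_db:.1f} dB Eb/N0")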

  3. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  4. Error Rates and Channel Capacities in Multipulse PPM

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Moision, Bruce

    2007-01-01

    A method of computing channel capacities and error rates in multipulse pulse-position modulation (multipulse PPM) has been developed. The method makes it possible, when designing an optical PPM communication system, to determine whether and under what conditions a given multipulse PPM scheme would be more or less advantageous, relative to other candidate modulation schemes. In conventional M-ary PPM, each symbol is transmitted in a time frame that is divided into M time slots (where M is an integer >1), defining an M-symbol alphabet. A symbol is represented by transmitting a pulse (representing 1) during one of the time slots and no pulse (representing 0) during the other M - 1 time slots. Multipulse PPM is a generalization of PPM in which pulses are transmitted during two or more of the M time slots.
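
    The alphabet-size arithmetic implied by this definition (illustrative parameter values): choosing k pulse slots out of M gives C(M, k) distinct symbols, hence log2 C(M, k) bits per symbol, versus log2 M for conventional PPM.

      # Alphabet-size arithmetic for multipulse PPM (follows directly from the
      # definition above; M and k are illustrative values): k pulses in M slots give
      # C(M, k) symbols, hence log2(C(M, k)) bits/symbol vs log2(M) for ordinary PPM.
      from math import comb, log2

      M, k = 16, 2
      symbols = comb(M, k)  # 120 distinct symbols
      print(f"{symbols} symbols, {log2(symbols):.2f} bits/symbol "
            f"(conventional {M}-ary PPM: {log2(M):.0f} bits/symbol)")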

  5. Optical refractive synchronization: bit error rate analysis and measurement

    NASA Astrophysics Data System (ADS)

    Palmer, James R.

    1999-11-01

    The direction of this paper is to describe the analytical tools and measurement techniques used at SilkRoad to evaluate the optical and electrical signals used in Optical Refractive Synchronization for transporting SONET signals across the transmission fiber. Fundamentally, the direction of this paper is to provide an outline of how SilkRoad, Inc., transports a multiplicity of SONET signals across a distance of fiber > 100 Km without amplification or regeneration of the optical signal, i.e., one laser over one fiber. Test and measurement data are presented to reflect how the SilkRoad technique of Optical Refractive Synchronization is employed to provide a zero bit error rate for transmission of multiple OC-12 and OC-48 SONET signals that are sent over a fiber optical cable which is > 100Km. The recovery and transformation modules are described for the modification and transportation of these SONET signals.

  7. Deconstructing the "reign of error": interpersonal warmth explains the self-fulfilling prophecy of anticipated acceptance.

    PubMed

    Stinson, Danu Anthony; Cameron, Jessica J; Wood, Joanne V; Gaucher, Danielle; Holmes, John G

    2009-09-01

    People's expectations of acceptance often come to create the acceptance or rejection they anticipate. The authors tested the hypothesis that interpersonal warmth is the behavioral key to this acceptance prophecy: If people expect acceptance, they will behave warmly, which in turn will lead other people to accept them; if they expect rejection, they will behave coldly, which will lead to less acceptance. A correlational study and an experiment supported this model. Study 1 confirmed that participants' warm and friendly behavior was a robust mediator of the acceptance prophecy compared to four plausible alternative explanations. Study 2 demonstrated that situational cues that reduced the risk of rejection also increased socially pessimistic participants' warmth and thus improved their social outcomes. PMID:19571273

  8. Towards more complete specifications for acceptable analytical performance - a plea for error grid analysis.

    PubMed

    Krouwer, Jan S; Cembrowski, George S

    2011-07-01

    We examine limitations of common analytical performance specifications for quantitative assays. Specifications can be either clinical or regulatory. Problems with current specifications include specifying limits for only 95% of the results, having only one set of limits that demarcate no harm from minor harm, using incomplete models for total error, not accounting for the potential of user error, and not supplying sufficient protocol requirements. Error grids are recommended to address these problems as error grids account for 100% of the data and stratify errors into different severity categories. Total error estimation from a method comparison can be used to estimate the inner region of an error grid, but the outer region needs to be addressed using risk management techniques. The risk management steps, foreign to many in laboratory medicine, are outlined.

  9. A New Method for the Statistical Control of Rating Error in Performance Ratings.

    ERIC Educational Resources Information Center

    Bannister, Brendan D.; And Others

    1987-01-01

    To control for response bias in student ratings of college teachers, an index of rater error was used that was theoretically independent of actual performance. Partialing out the effects of this extraneous response bias enhanced validity, but partialing out overall effectiveness resulted in reduced convergent and discriminant validities.…

  10. Acceptance test procedure for the 105-KW isolation barrier leak rate

    SciTech Connect

    McCracken, K.J.

    1995-05-19

    This acceptance test procedure shall be used to: first, establish a basin water loss rate prior to installation of the two isolation barriers between the main basin and the discharge chute in K-Basin West; and second, perform an acceptance test to verify an acceptable leakage rate through the barrier seals. This Acceptance Test Procedure (ATP) has been prepared in accordance with CM-6-1 EP 4.2, Standard Engineering Practices.

  11. Type I error rates for testing genetic drift with phenotypic covariance matrices: a simulation study.

    PubMed

    Prôa, Miguel; O'Higgins, Paul; Monteiro, Leandro R

    2013-01-01

    Studies of evolutionary divergence using quantitative genetic methods are centered on the additive genetic variance-covariance matrix (G) of correlated traits. However, estimating G properly requires large samples and complicated experimental designs. Multivariate tests for neutral evolution commonly replace average G by the pooled phenotypic within-group variance-covariance matrix (W) for evolutionary inferences, but this approach has been criticized due to the lack of exact proportionality between genetic and phenotypic matrices. In this study, we examined the consequence, in terms of type I error rates, of replacing average G by W in a test of neutral evolution that measures the regression slope between among-population variances and within-population eigenvalues (the Ackermann and Cheverud [AC] test) using a simulation approach to generate random observations under genetic drift. Our results indicate that the type I error rates for the genetic drift test are acceptable when using W instead of average G when the matrix correlation between the ancestral G and P is higher than 0.6, the average character heritability is above 0.7, and the matrices share principal components. For less-similar G and P matrices, the type I error rates would still be acceptable if the ratio between the number of generations since divergence and the effective population size (t/Ne) is smaller than 0.01 (large populations that diverged recently). When G is not known in real data, a simulation approach to estimate expected slopes for the AC test under genetic drift is discussed.

  13. Testing Theories of Transfer Using Error Rate Learning Curves.

    PubMed

    Koedinger, Kenneth R; Yudelson, Michael V; Pavlik, Philip I

    2016-07-01

    We analyze naturally occurring datasets from student use of educational technologies to explore a long-standing question of the scope of transfer of learning. We contrast a faculty theory of broad transfer with a component theory of more constrained transfer. To test these theories, we develop statistical models of them. These models use latent variables to represent mental functions that are changed while learning to cause a reduction in error rates for new tasks. Strong versions of these models provide a common explanation for the variance in task difficulty and transfer. Weak versions decouple difficulty and transfer explanations by describing task difficulty with parameters for each unique task. We evaluate these models in terms of both their prediction accuracy on held-out data and their power in explaining task difficulty and learning transfer. In comparisons across eight datasets, we find that the component models provide both better predictions and better explanations than the faculty models. Weak model variations tend to improve generalization across students, but hurt generalization across items and make a sacrifice to explanatory power. More generally, the approach could be used to identify malleable components of cognitive functions, such as spatial reasoning or executive functions. PMID:27230694

  14. Effect of Repeated Evaluation and Repeated Exposure on Acceptability Ratings of Sentences

    ERIC Educational Resources Information Center

    Zervakis, Jennifer; Mazuka, Reiko

    2013-01-01

    This study investigated the effect of repeated evaluation and repeated exposure on grammatical acceptability ratings for both acceptable and unacceptable sentence types. In Experiment 1, subjects in the Experimental group rated multiple examples of two ungrammatical sentence types (ungrammatical binding and double object with dative-only verb),…

  15. The Interrelationships between Ratings of Speech and Facial Acceptability in Persons with Cleft Palate.

    ERIC Educational Resources Information Center

    Sinko, Garnet R.; Hedrick, Dona L.

    1982-01-01

    Thirty untrained young adult observers rated the speech and facial acceptability of 20 speakers with cleft palate. The observers were reliable in rating both speech and facial acceptability. Judgments of facial acceptability were generally more positive, suggesting that speech is generally judged more negatively in speakers with cleft palate.…

  16. The interrelationships between ratings of speech and facial acceptability in persons with cleft palate.

    PubMed

    Sinko, G R; Hedrick, D L

    1982-09-01

    This study was conducted to determine (a) if untrained observers could reliably rate the speech and facial acceptability of young adults with clefts of the lip and/or palate; and (b) if there were differences between the ratings of speech acceptability and facial acceptability according to sex of observer, presentation mode, or speaker effect. Thirty untrained young adult observers rated the speech and facial acceptability of 20 speakers with cleft palate using a 7-point bipolar adjective scale. Judgments of speech acceptability were made from an auditory-only stimulus and then from a combined audio-visual stimulus. Judgments of facial acceptability were made from a visual-only stimulus and then from a combined audio-visual stimulus. Multivariate analysis of variance, Pearson product-moment correlation coefficients, and a posteriori multiple range tests were used for data analyses. Results indicated that untrained observers were reliable in rating both speech and facial acceptability (r = .65-.97). The effects of speaker and interaction between speaker and presentation mode were significant at .01 levels of confidence. Judgments made of facial acceptability were generally more positive, leading to the conclusion that speech is generally judged more negatively in speakers with cleft palate, at least by untrained observers. The interaction between speech and facial acceptability was not significant.

  17. Error Rates in Users of Automatic Face Recognition Software

    PubMed Central

    White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631

  18. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  19. The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency

    ERIC Educational Resources Information Center

    Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ

    2012-01-01

    This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 second and 974 third graders. Results found a significant relationship between error rate, oral reading fluency, and reading comprehension performance, and…

  20. Conserved rates and patterns of transcription errors across bacterial growth states and lifestyles.

    PubMed

    Traverse, Charles C; Ochman, Howard

    2016-03-22

    Errors that occur during transcription have received much less attention than the mutations that occur in DNA because transcription errors are not heritable and usually result in a very limited number of altered proteins. However, transcription error rates are typically several orders of magnitude higher than the mutation rate. Also, individual transcripts can be translated multiple times, so a single error can have substantial effects on the pool of proteins. Transcription errors can also contribute to cellular noise, thereby influencing cell survival under stressful conditions, such as starvation or antibiotic stress. Implementing a method that captures transcription errors genome-wide, we measured the rates and spectra of transcription errors in Escherichia coli and in endosymbionts for which mutation and/or substitution rates are greatly elevated over those of E. coli. Under all tested conditions, across all species, and even for different categories of RNA sequences (mRNA and rRNAs), there were no significant differences in rates of transcription errors, which ranged from 2.3 × 10(-5) per nucleotide in mRNA of the endosymbiont Buchnera aphidicola to 5.2 × 10(-5) per nucleotide in rRNA of the endosymbiont Carsonella ruddii. The similarity of transcription error rates in these bacterial endosymbionts to that in E. coli (4.63 × 10(-5) per nucleotide) is all the more surprising given that genomic erosion has resulted in the loss of transcription fidelity factors in both Buchnera and Carsonella.

  1. Conserved rates and patterns of transcription errors across bacterial growth states and lifestyles

    PubMed Central

    Traverse, Charles C.; Ochman, Howard

    2016-01-01

    Errors that occur during transcription have received much less attention than the mutations that occur in DNA because transcription errors are not heritable and usually result in a very limited number of altered proteins. However, transcription error rates are typically several orders of magnitude higher than the mutation rate. Also, individual transcripts can be translated multiple times, so a single error can have substantial effects on the pool of proteins. Transcription errors can also contribute to cellular noise, thereby influencing cell survival under stressful conditions, such as starvation or antibiotic stress. Implementing a method that captures transcription errors genome-wide, we measured the rates and spectra of transcription errors in Escherichia coli and in endosymbionts for which mutation and/or substitution rates are greatly elevated over those of E. coli. Under all tested conditions, across all species, and even for different categories of RNA sequences (mRNA and rRNAs), there were no significant differences in rates of transcription errors, which ranged from 2.3 × 10−5 per nucleotide in mRNA of the endosymbiont Buchnera aphidicola to 5.2 × 10−5 per nucleotide in rRNA of the endosymbiont Carsonella ruddii. The similarity of transcription error rates in these bacterial endosymbionts to that in E. coli (4.63 × 10−5 per nucleotide) is all the more surprising given that genomic erosion has resulted in the loss of transcription fidelity factors in both Buchnera and Carsonella. PMID:26884158

  2. Effect of repeated evaluation and repeated exposure on acceptability ratings of sentences.

    PubMed

    Zervakis, Jennifer; Mazuka, Reiko

    2013-12-01

    This study investigated the effect of repeated evaluation and repeated exposure on grammatical acceptability ratings for both acceptable and unacceptable sentence types. In Experiment 1, subjects in the Experimental group rated multiple examples of two ungrammatical sentence types (ungrammatical binding and double object with dative-only verb), and two difficult to process sentence types [center-embedded (2) and garden path ambiguous relative], along with matched grammatical/non-difficult sentences, before rating a final set of experimental sentences. Subjects in the control group rated unrelated sentences during the exposure period before rating the experimental sentences. Subjects in the Experimental group rated both grammatical and ungrammatical sentences as more acceptable after repeated evaluation than subjects in the Control group. In Experiment 2, subjects answered a comprehension question after reading each sentence during the exposure period. Subjects in the experimental group rated garden path and center-embedded (1) sentences as higher in acceptability after comprehension exposure than subjects in the control group. The results are consistent with increased fluency of comprehension being misattributed as a change in acceptability.

  4. Study of bit error rate (BER) for multicarrier OFDM

    NASA Astrophysics Data System (ADS)

    Alshammari, Ahmed; Albdran, Saleh; Matin, Mohammad

    2012-10-01

    Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier technique that is being used more and more in recent wideband digital communications. It is known for its ability to handle severe channel conditions, its efficient use of spectrum and its high data rate. Therefore, it has been used in many wired and wireless communication systems such as DSL, wireless networks and 4G mobile communications. Data streams are modulated and sent over multiple subcarriers using either M-QAM or M-PSK. OFDM has lower inter-symbol interference (ISI) levels because of the low data rates of the individual carriers, which result in long symbol periods. In this paper, the BER performance of OFDM with respect to signal-to-noise ratio (SNR) is evaluated. BPSK modulation is used in a simulation-based system in order to obtain the BER over different wireless channels. These channels include the additive white Gaussian noise (AWGN) channel and fading channels based on Doppler spread and delay spread. Plots of the results are compared with each other after varying some of the key parameters of the system, such as the IFFT size, the number of carriers, and the SNR. The simulation results give a visualization of the BER to expect when the signal goes through those channels.
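
    A minimal simulation of this kind is easy to reproduce. The sketch below (Python; the FFT size, symbol count, and SNR grid are assumptions chosen for brevity, not the paper's configuration) maps BPSK bits onto OFDM subcarriers with an IFFT, adds white Gaussian noise, demodulates with an FFT, and counts bit errors.

```python
# Minimal BPSK-OFDM bit-error-rate simulation over AWGN (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
n_fft, n_sym = 64, 2000          # subcarriers per OFDM symbol, OFDM symbols simulated

def ofdm_ber(snr_db):
    bits = rng.integers(0, 2, size=(n_sym, n_fft))
    x = 2 * bits - 1                              # BPSK mapping: 0/1 -> -1/+1
    tx = np.fft.ifft(x, axis=1) * np.sqrt(n_fft)  # time-domain symbols, unit average power
    snr = 10 ** (snr_db / 10)
    noise = (rng.normal(size=tx.shape) + 1j * rng.normal(size=tx.shape)) / np.sqrt(2 * snr)
    rx = np.fft.fft(tx + noise, axis=1) / np.sqrt(n_fft)
    bits_hat = (rx.real > 0).astype(int)
    return np.mean(bits_hat != bits)

for snr_db in (0, 4, 8):
    print(snr_db, "dB ->", ofdm_ber(snr_db))      # BER falls with SNR, as on the AWGN curve
```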

  5. Estimating genotype error rates from high-coverage next-generation sequence data.

    PubMed

    Wall, Jeffrey D; Tang, Ling Fung; Zerbe, Brandon; Kvale, Mark N; Kwok, Pui-Yan; Schaefer, Catherine; Risch, Neil

    2014-11-01

    Exome and whole-genome sequencing studies are becoming increasingly common, but little is known about the accuracy of the genotype calls made by the commonly used platforms. Here we use replicate high-coverage sequencing of blood and saliva DNA samples from four European-American individuals to estimate lower bounds on the error rates of Complete Genomics and Illumina HiSeq whole-genome and whole-exome sequencing. Error rates for nonreference genotype calls range from 0.1% to 0.6%, depending on the platform and the depth of coverage. Additionally, we found (1) no difference in the error profiles or rates between blood and saliva samples; (2) Complete Genomics sequences had substantially higher error rates than Illumina sequences had; (3) error rates were higher (up to 6%) for rare or unique variants; (4) error rates generally declined with genotype quality (GQ) score, but in a nonlinear fashion for the Illumina data, likely due to loss of specificity of GQ scores greater than 60; and (5) error rates increased with increasing depth of coverage for the Illumina data. These findings, especially (3)-(5), suggest that caution should be taken in interpreting the results of next-generation sequencing-based association studies, and even more so in clinical application of this technology in the absence of validation by other more robust sequencing or genotyping methods.
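
    The replicate-based lower bound described above can be illustrated with a toy calculation. In the sketch below (Python; the sites and genotype strings are invented for illustration), sites that are non-reference in either replicate are compared, and because each discordant site implies at least one wrong call, half the discordance rate serves as a lower bound on the per-replicate genotype error rate.

```python
# Toy lower bound on genotype error rates from replicate calls (invented data).
rep1 = {"chr1:100": "0/1", "chr1:200": "1/1", "chr1:300": "0/0", "chr1:400": "0/1"}
rep2 = {"chr1:100": "0/1", "chr1:200": "0/1", "chr1:300": "0/0", "chr1:400": "0/1"}

# consider sites called non-reference in at least one replicate
nonref = [s for s in rep1 if rep1[s] != "0/0" or rep2[s] != "0/0"]
discordant = [s for s in nonref if rep1[s] != rep2[s]]

# each discordant site implies at least one wrong call among the two replicates,
# so half the discordance rate bounds the per-replicate error rate from below
print(len(discordant) / (2 * len(nonref)))
```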

  6. 18 CFR 300.20 - Interim acceptance and review of Bonneville Power Administration rates.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... review of Bonneville Power Administration rates. 300.20 Section 300.20 Conservation of Power and Water... Review and Approval § 300.20 Interim acceptance and review of Bonneville Power Administration rates. (a) Opportunity to comment. The Commission will publish in the Federal Register notice of any filing made...

  7. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    PubMed

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  8. Topological quantum computing with a very noisy network and local error rates approaching one percent

    PubMed Central

    Nickerson, Naomi H.; Li, Ying; Benjamin, Simon C.

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems. PMID:23612297

  9. Bursty channel errors and the Viterbi decoder. [for high rate digit data channels

    NASA Technical Reports Server (NTRS)

    Ingels, F.

    1978-01-01

    Applications have recently been developed for spread spectrum communications, hardware data transfer, high-rate digital systems, etc., that use channels on which errors tend to occur in short bursts in addition to occurring at random, i.e., compound channels. Viterbi decoding algorithms are generally very good for random error channels but are not as efficient for burst errors or for compound channels. This paper presents the results of a computer simulation study of the performance of various Viterbi decoders when receiving data corrupted with burst and random errors on the same channel. Simulations were performed using hard-decision CPSK.

  10. Increasing Redundancy Exponentially Reduces Error Rates during Algorithmic Self-Assembly.

    PubMed

    Schulman, Rebecca; Wright, Christina; Winfree, Erik

    2015-06-23

    While biology demonstrates that molecules can reliably transfer information and compute, design principles for implementing complex molecular computations in vitro are still being developed. In electronic computers, large-scale computation is made possible by redundancy, which allows errors to be detected and corrected. Increasing the amount of redundancy can exponentially reduce errors. Here, we use algorithmic self-assembly, a generalization of crystal growth in which the self-assembly process executes a program for growing an object, to examine experimentally whether redundancy can analogously reduce the rate at which errors occur during molecular self-assembly. We designed DNA double-crossover molecules to algorithmically self-assemble ribbon crystals that repeatedly copy a short bitstring, and we measured the error rate when each bit is encoded by 1 molecule, or redundantly encoded by 2, 3, or 4 molecules. Under our experimental conditions, each additional level of redundancy decreases the bitwise error rate by a factor of roughly 3, with the 4-redundant encoding yielding an error rate less than 0.1%. While theory and simulation predict that larger improvements in error rates are possible, our results already suggest that by using sufficient redundancy it may be possible to algorithmically self-assemble micrometer-sized objects with programmable, nanometer-scale features.
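
    The reported trend lends itself to a back-of-envelope check. Assuming, for illustration only, a 1-redundant bitwise error rate of about 2.7% and the observed factor-of-three reduction per added level of redundancy, the 4-redundant error rate lands near 0.1%, in line with the abstract:

```python
# Back-of-envelope check of the redundancy trend (e1 and the factor 3 are assumptions
# taken from the abstract's qualitative description, not fitted values).
e1 = 0.027                        # assumed error rate with 1-redundant encoding
for k in range(1, 5):
    print(k, e1 / 3 ** (k - 1))   # k = 4 gives about 0.001, i.e. ~0.1%
```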

  11. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  12. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  13. Construct and Predictive Validity of Social Acceptability: Scores From High School Teacher Ratings on the School Intervention Rating Form

    ERIC Educational Resources Information Center

    Harrison, Judith R.; State, Talida M.; Evans, Steven W.; Schamberg, Terah

    2016-01-01

    The purpose of this study was to evaluate the construct and predictive validity of scores on a measure of social acceptability of class-wide and individual student intervention, the School Intervention Rating Form (SIRF), with high school teachers. Utilizing scores from 158 teachers, exploratory factor analysis revealed a three-factor (i.e.,…

  14. Error rate of the Kane quantum computer controlled-NOT gate in the presence of dephasing

    SciTech Connect

    Fowler, Austin G.; Wellard, Cameron J.; Hollenberg, Lloyd C. L.

    2003-01-01

    We study the error rate of controlled-NOT (CNOT) operations in the Kane solid-state quantum computer architecture [B. Kane, Nature 393, 133 (1998)]. A spin Hamiltonian is used to describe the system. Dephasing is included as exponential decay of the off-diagonal elements of the system's density matrix. Using available spin-echo decay data, the CNOT error rate is estimated at approximately 10⁻³.

  15. Total dose effect on soft error rate for dynamic metal-oxide-semiconductor memory cells

    NASA Technical Reports Server (NTRS)

    Benumof, Reuben

    1989-01-01

    A simple model for the soft error rate for dynamic metal-oxide-semiconductor random access memories due to normal galactic radiation was devised and then used to calculate the rate of decrease of the single-event-upset rate with total radiation dose. The computation shows that the decrease in the soft error rate is less than 10 percent per day if the shielding is 0.5 g/sq cm and the spacecraft is in a geosynchronous orbit. The decrease is considerably less in a polar orbiting device.

  16. 18 CFR 300.20 - Interim acceptance and review of Bonneville Power Administration rates.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Interim acceptance and review of Bonneville Power Administration rates. 300.20 Section 300.20 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER...

  17. Mean and Random Errors of Visual Roll Rate Perception from Central and Peripheral Visual Displays

    NASA Technical Reports Server (NTRS)

    Vandervaart, J. C.; Hosman, R. J. A. W.

    1984-01-01

    A large number of roll rate stimuli, covering rates from zero to plus or minus 25 deg/sec, were presented to subjects in random order at 2 sec intervals. Subjects were to make estimates of magnitude of perceived roll rate stimuli presented on either a central display, on displays in the peripheral field of vision, or on all displays simultaneously. Response was by way of a digital keyboard device; stimulus exposition times were varied. The present experiment differs from earlier perception tasks by the same authors in that mean rate perception error (and standard deviation) was obtained as a function of rate stimulus magnitude, whereas the earlier experiments only yielded mean absolute error magnitude. Moreover, in the present experiment, all stimulus rates had an equal probability of occurrence, whereas the earlier tests featured a Gaussian stimulus probability density function. Results yield a good illustration of the nonlinear functions relating rate presented to rate perceived by human observers or operators.

  18. On Kolmogorov Asymptotics of Estimators of the Misclassification Error Rate in Linear Discriminant Analysis.

    PubMed

    Zollanvari, Amin; Genton, Marc G

    2013-08-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  19. Optimized filtering reduces the error rate in detecting genomic variants by short-read sequencing.

    PubMed

    Reumers, Joke; De Rijk, Peter; Zhao, Hui; Liekens, Anthony; Smeets, Dominiek; Cleary, John; Van Loo, Peter; Van Den Bossche, Maarten; Catthoor, Kirsten; Sabbe, Bernard; Despierre, Evelyn; Vergote, Ignace; Hilbush, Brian; Lambrechts, Diether; Del-Favero, Jurgen

    2012-01-01

    Distinguishing single-nucleotide variants (SNVs) from errors in whole-genome sequences remains challenging. Here we describe a set of filters, together with a freely accessible software tool, that selectively reduce error rates and thereby facilitate variant detection in data from two short-read sequencing technologies, Complete Genomics and Illumina. By sequencing the nearly identical genomes from monozygotic twins and considering shared SNVs as 'true variants' and discordant SNVs as 'errors', we optimized thresholds for 12 individual filters and assessed which of the 1,048 filter combinations were effective in terms of sensitivity and specificity. Cumulative application of all effective filters reduced the error rate by 290-fold, facilitating the identification of genetic differences between monozygotic twins. We also applied an adapted, less stringent set of filters to reliably identify somatic mutations in a highly rearranged tumor and to identify variants in the NA19240 HapMap genome relative to a reference set of SNVs. PMID:22178994

  20. Optimal joint power-rate adaptation for error resilient video coding

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Gürses, Eren; Kim, Anna N.; Perkis, Andrew

    2008-01-01

    In recent years digital imaging devices have become an integral part of our daily lives due to the advancements in imaging, storage and wireless communication technologies. Power-Rate-Distortion efficiency is the key factor common to all resource constrained portable devices. In addition, especially in real-time wireless multimedia applications, channel adaptive and error resilient source coding techniques should be considered in conjunction with the P-R-D efficiency, since most of the time Automatic Repeat-reQuest (ARQ) and Forward Error Correction (FEC) are either not feasible or costly in terms of bandwidth efficiency and delay. In this work, we focus on the scenarios of real-time video communication for resource constrained devices over bandwidth limited and lossy channels, and propose an analytic Power-channel Error-Rate-Distortion (P-E-R-D) model. In particular, probabilities of macroblock coding modes are intelligently controlled through an optimization process according to their distinct rate-distortion-complexity performance for a given channel error rate. The framework provides theoretical guidelines for the joint analysis of error resilient source coding and resource allocation. Experimental results show that our optimal framework provides consistent rate-distortion performance gain under different power constraints.

  1. A stochastic node-failure network with individual tolerable error rate at multiple sinks

    NASA Astrophysics Data System (ADS)

    Huang, Cheng-Fu; Lin, Yi-Kuei

    2014-05-01

    Many enterprises consider several criteria during data transmission such as availability, delay, loss, and out-of-order packets from the service level agreements (SLAs) point of view. Hence internet service providers and customers are gradually focusing on the tolerable error rate in the transmission process. The internet service provider should provide the specific demand and keep a certain transmission error rate by their SLAs to each customer. This paper mainly evaluates the system reliability, defined as the probability that the demand can be fulfilled under the tolerable error rate at all sinks, by addressing a stochastic node-failure network (SNFN) in which each component (edge or node) has several capacities and a transmission error rate. An efficient algorithm is first proposed to generate all lower boundary points, the minimal capacity vectors satisfying demand and tolerable error rate for all sinks. Then the system reliability can be computed in terms of such points by applying a recursive sum of disjoint products. A benchmark network and a practical network in the United States are demonstrated to illustrate the utility of the proposed algorithm. The computational complexity of the proposed algorithm is also analyzed.

  2. Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers

    NASA Technical Reports Server (NTRS)

    Ha, Eunho; North, Gerald R.

    1995-01-01

    Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
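
    The origin of the bias is Jensen's inequality: averaging a nonlinear function of rain rate over the footprint is not the same as applying the function to the average. The toy sketch below (Python; the saturating forward model and the mixed-lognormal field are assumptions, not the paper's radiative-transfer model) shows the retrieved rain rate falling below the true areal mean.

```python
# Toy beam-filling demonstration (the forward model f and the rain field are assumptions).
import numpy as np

rng = np.random.default_rng(2)

def f(R):                          # toy saturating brightness-temperature response
    return 1.0 - np.exp(-0.2 * R)

def f_inv(T):
    return -np.log(1.0 - T) / 0.2

# mixed lognormal rain field inside one footprint: 70% of pixels are rain-free
wet = rng.random(100000) >= 0.7
rain = np.where(wet, rng.lognormal(mean=1.0, sigma=1.0, size=100000), 0.0)

true_mean = rain.mean()
retrieved = f_inv(f(rain).mean())  # invert the footprint-averaged signal
print(true_mean, retrieved)        # retrieved < true mean: the beam-filling bias
```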

  3. High speed and adaptable error correction for megabit/s rate quantum key distribution

    PubMed Central

    Dixon, A. R.; Sato, H.

    2014-01-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90–94% of the ideal secure key rate over all fibre distances from 0–80 km. PMID:25450416

  4. Reducing bit-error rate with optical phase regeneration in multilevel modulation formats.

    PubMed

    Hesketh, Graham; Horak, Peter

    2013-12-15

    We investigate theoretically the benefits of using all-optical phase regeneration in a long-haul fiber optic link. We also introduce a design for a device capable of phase regeneration without phase-to-amplitude noise conversion. We simulate numerically the bit-error rate of a wavelength division multiplexed optical communication system over many fiber spans with periodic reamplification and compare the results obtained with and without phase regeneration at half the transmission distance when using the new design or an existing design. Depending on the modulation format, our results suggest that all-optical phase regeneration can reduce the bit-error rate by up to two orders of magnitude and that the amplitude preserving design offers a 50% reduction in bit-error rate relative to existing technology.

  5. Type-II generalized family-wise error rate formulas with application to sample size determination.

    PubMed

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26914402
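
    The r-power concept can also be approximated by direct simulation when closed-form evaluation is not needed. The sketch below (Python; the single-step Bonferroni rule, effect sizes, and endpoint counts are illustrative assumptions and do not reproduce the paper's formulas or the rPowerSampleSize package) estimates the probability of rejecting at least r of the false null hypotheses.

```python
# Monte Carlo estimate of r-power under a single-step Bonferroni rule (assumptions only).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
m, r, alpha = 10, 3, 0.05                        # m endpoints; success = at least r rejections
effect = np.r_[np.full(4, 0.2), np.zeros(6)]     # 4 false nulls, 6 true nulls (standardized effects)
n = 200                                          # sample size per endpoint (one-sample z test)

def r_power(n_sim=20000):
    z = rng.normal(loc=effect * np.sqrt(n), size=(n_sim, m))
    pvals = 1 - norm.cdf(z)
    reject = pvals < alpha / m                   # single-step Bonferroni
    hits = reject[:, :4].sum(axis=1)             # rejections among the false nulls
    return np.mean(hits >= r)

print(r_power())   # probability of rejecting at least r = 3 of the 4 false nulls
```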

  6. Estimation of the minimum mRNA splicing error rate in vertebrates.

    PubMed

    Skandalis, A

    2016-01-01

    The majority of protein coding genes in vertebrates contain several introns that are removed by the mRNA splicing machinery. Errors during splicing can generate aberrant transcripts and degrade the transmission of genetic information thus contributing to genomic instability and disease. However, estimating the error rate of constitutive splicing is complicated by the process of alternative splicing which can generate multiple alternative transcripts per locus and is particularly active in humans. In order to estimate the error frequency of constitutive mRNA splicing and avoid bias by alternative splicing we have characterized the frequency of splice variants at three loci, HPRT, POLB, and TRPV1 in multiple tissues of six vertebrate species. Our analysis revealed that the frequency of splice variants varied widely among loci, tissues, and species. However, the lowest observed frequency is quite constant among loci and approximately 0.1% aberrant transcripts per intron. Arguably this reflects the "irreducible" error rate of splicing, which consists primarily of the combination of replication errors by RNA polymerase II in splice consensus sequences and spliceosome errors in correctly pairing exons. PMID:26811995

  7. Minimum attainable RMS attitude error using co-located rate sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1989-01-01

    A closed form analytical expression for the minimum attainable attitude error (as well as the error rate) in a flexible beam by feedback control using co-located rate sensors is announced. For simplicity, researchers consider a beam clamped at one end with an offset mass (antenna) at the other end where the controls and sensors are located. Both control moment generators and force actuators are provided. The results apply to any beam-like lattice-type truss, and provide the kind of performance criteria needed under CSI - Controls-Stuctures-Integrated optimization.

  8. Error Rate Reduction of Super-Resolution Near-Field Structure Disc

    NASA Astrophysics Data System (ADS)

    Kim, Jooho; Bae, Jaecheol; Hwang, Inoh; Lee, Jinkyung; Park, Hyunsoo; Chung, Chongsam; Kim, Hyunki; Park, Insik; Tominaga, Junji

    2007-06-01

    We report the error rate improvement of super-resolution near-field structure (super-RENS) write-once read-many (WORM) and read-only-memory (ROM) discs in a blue laser optical system [laser wavelength (λ), 405 nm; numerical aperture (NA), 0.85]. We prepared samples of higher carrier level WORM discs and wider pit width ROM discs. Using controlled equalization (EQ) characteristics, an adaptive write strategy, and an advanced adaptive partial response maximum likelihood (PRML) technique, we obtained a bit error rate (bER) at the 10⁻⁴ level. This result shows the high feasibility of super-RENS technology for practical use.

  9. Invariance of the bit error rate in the ancilla-assisted homodyne detection

    SciTech Connect

    Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide

    2010-11-15

    We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome is referred to the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss the possible generalization of the homodyne detection scheme.

  10. Error rate performance of pulse position modulation schemes for indoor wireless optical communication

    NASA Astrophysics Data System (ADS)

    Azzam, Nazmy; Aly, Moustafa H.; AboulSeoud, A. K.

    2009-06-01

    The error rate performance of pulse position modulation (PPM) schemes for indoor wireless optical communication (WOC) applications is investigated. These schemes include traditional PPM and multiple PPM (MPPM). The study is unique in presenting and evaluating symbol error behaviour under a wide range of design parameters, such as the symbol length (L), the number of chips per symbol (n), and the number of chips forming the optical pulse (w). The effect of signal-to-noise ratio levels and operating bit rates on symbol error performance is also discussed. A comparison between the studied modulation schemes is made, and the relation to IrDA and IEEE 802.11 indoor WOC standardization is also investigated. Results indicate that PPM achieves good symbol error performance at reasonable signal-to-noise ratios and high bit rates with large symbol lengths.
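
    Symbol error behaviour of this kind is straightforward to estimate by Monte Carlo. The sketch below (Python; the slot count, SNR definition, and sample size are assumptions, not the paper's receiver model) detects L-PPM symbols by picking the largest slot in additive Gaussian noise and counts symbol errors.

```python
# Monte Carlo symbol error rate for hard-detected L-PPM in Gaussian noise (illustrative).
import numpy as np

rng = np.random.default_rng(4)

def ppm_ser(L=16, snr_db=10, n_sym=50000):
    amp = 10 ** (snr_db / 20)                    # pulse amplitude relative to unit-variance noise
    pos = rng.integers(0, L, size=n_sym)         # transmitted pulse positions
    slots = rng.normal(size=(n_sym, L))          # noise in every slot
    slots[np.arange(n_sym), pos] += amp          # add the optical pulse
    return np.mean(slots.argmax(axis=1) != pos)  # maximum-slot detection

for snr_db in (8, 10, 12):
    print(snr_db, "dB ->", ppm_ser(snr_db=snr_db))
```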

  11. Acceptable bit-rates for human face identification from CCTV imagery

    NASA Astrophysics Data System (ADS)

    Tsifouti, Anastasia; Triantaphillidou, Sophie; Bilissi, Efthimia; Larabi, Mohamed-Chaker

    2013-01-01

    The objective of this investigation is to produce recommendations for acceptable bit-rates of CCTV footage of people onboard London buses. The majority of CCTV recorders on buses use a proprietary format based on the H.264/AVC video coding standard, exploiting both spatial and temporal redundancy. Low bit-rates are favored in the CCTV industry but they compromise the image usefulness of the recorded imagery. In this context usefulness is defined by the presence of enough facial information remaining in the compressed image to allow a specialist to identify a person. The investigation includes four steps: 1) Collection of representative video footage. 2) The grouping of video scenes based on content attributes. 3) Psychophysical investigations to identify key scenes, which are most affected by compression. 4) Testing of recording systems using the key scenes and further psychophysical investigations. The results are highly dependent upon scene content. For example, very dark and very bright scenes were the most challenging to compress, requiring higher bit-rates to maintain useful information. The acceptable bit-rates are also found to be dependent upon the specific CCTV system used to compress the footage, presenting challenges in drawing conclusions about universal 'average' bit-rates.

  12. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    PubMed

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed

  13. Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for sub-130 nm Technologies

    NASA Technical Reports Server (NTRS)

    Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Michael M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.

    2010-01-01

    We use ray tracing software to model various levels of spacecraft shielding complexity and energy deposition pulse height analysis to study how it affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments as well as the October 1989 and July 2000 solar particle events.

  14. Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.

    2001-01-01

    Submitted the Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and identify characteristics predictive of score reliability variations. Results for 67 analyses generally support the internal…

  15. Bit error rate testing of a proof-of-concept model baseband processor

    NASA Technical Reports Server (NTRS)

    Stover, J. B.; Fujikawa, G.

    1986-01-01

    Bit-error-rate tests were performed on a proof-of-concept baseband processor. The BBP, which operates at an intermediate frequency in the C-Band, demodulates, demultiplexes, routes, remultiplexes, and remodulates digital message segments received from one ground station for retransmission to another. Test methods are discussed and test results are compared with the Contractor's test results.

  16. The Impact of Statistically Adjusting for Rater Effects on Conditional Standard Errors of Performance Ratings

    ERIC Educational Resources Information Center

    Raymond, Mark R.; Harik, Polina; Clauser, Brian E.

    2011-01-01

    Prior research indicates that the overall reliability of performance ratings can be improved by using ordinary least squares (OLS) regression to adjust for rater effects. The present investigation extends previous work by evaluating the impact of OLS adjustment on standard errors of measurement ("SEM") at specific score levels. In addition, a…

  17. Advanced Communications Technology Satellite (ACTS) Fade Compensation Protocol Impact on Very Small-Aperture Terminal Bit Error Rate Performance

    NASA Technical Reports Server (NTRS)

    Cox, Christina B.; Coney, Thom A.

    1999-01-01

    The Advanced Communications Technology Satellite (ACTS) communications system operates at Ka band. ACTS uses an adaptive rain fade compensation protocol to reduce the impact of signal attenuation resulting from propagation effects. The purpose of this paper is to present the results of an analysis characterizing the improvement in VSAT performance provided by this protocol. The metric for performance is VSAT bit error rate (BER) availability. The acceptable availability defined by communication system design specifications is 99.5% for a BER of 5E-7 or better. VSAT BER availabilities with and without rain fade compensation are presented. A comparison shows the improvement in BER availability realized with rain fade compensation. Results are presented for an eight-month period and for 24 months spread over a three-year period. The two time periods represent two different configurations of the fade compensation protocol. Index Terms: Adaptive coding, attenuation, propagation, rain, satellite communication, satellites.

  18. Pages from a Sociometric Notebook: An Analysis of Nomination and Rating Scale Measures of Acceptance, Rejection, and Social Preference.

    ERIC Educational Resources Information Center

    Bukowski, William M.; Sippola, Lorrie; Hoza, Betsy; Newcomb, Andrew F.

    2000-01-01

    Provides a conceptual and empirical analysis of the associations between the fundamental sociometric dimensions of acceptance, rejection, and social preference. Examines whether nomination and rating scale measures index the same constructs. Notes that sociometric ratings measure social preference, but can also yield indicators of acceptance and…

  19. Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates

    PubMed Central

    Bartroff, Jay; Song, Jinlin

    2014-01-01

    This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948
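
    For readers unfamiliar with the fixed-sample procedure that inspired the method, a minimal implementation of Holm's (1979) step-down test is sketched below (Python; this is the classical fixed-sample procedure only, not the authors' sequential variant).

```python
# Classical Holm (1979) step-down procedure controlling the FWER at level alpha.
def holm(pvals, alpha=0.05):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvals[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break                    # step-down stops at the first non-rejection
    return reject

print(holm([0.001, 0.04, 0.03, 0.2]))   # -> [True, False, False, False]
```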

  20. Putative extremely high rate of proteome innovation in lancelets might be explained by high rate of gene prediction errors.

    PubMed

    Bányai, László; Patthy, László

    2016-08-01

    A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since morphologically Chinese and Florida lancelets are highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation.

  1. Putative extremely high rate of proteome innovation in lancelets might be explained by high rate of gene prediction errors

    PubMed Central

    Bányai, László; Patthy, László

    2016-01-01

    A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since morphologically Chinese and Florida lancelets are highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation. PMID:27476717

  3. Tissue pattern recognition error rates and tumor heterogeneity in gastric cancer.

    PubMed

    Potts, Steven J; Huff, Sarah E; Lange, Holger; Zakharov, Vladislav; Eberhard, David A; Krueger, Joseph S; Hicks, David G; Young, George David; Johnson, Trevor; Whitney-Miller, Christa L

    2013-01-01

    The anatomic pathology discipline is slowly moving toward a digital workflow, where pathologists will evaluate whole-slide images on a computer monitor rather than glass slides through a microscope. One of the driving factors in this workflow is computer-assisted scoring, which depends on appropriate selection of regions of interest. With advances in tissue pattern recognition techniques, a more precise region of the tissue can be evaluated, no longer bound by the pathologist's patience in manually outlining target tissue areas. Pathologists use entire tissues from which to determine a score in a region of interest when making manual immunohistochemistry assessments. Tissue pattern recognition theoretically offers this same advantage; however, error rates exist in any tissue pattern recognition program, and these error rates contribute to errors in the overall score. To provide a real-world example of tissue pattern recognition, 11 HER2-stained upper gastrointestinal malignancies with high heterogeneity were evaluated. HER2 scoring of gastric cancer was chosen due to its increasing importance in gastrointestinal disease. A method is introduced for quantifying the error rates of tissue pattern recognition. The trade-off between fully sampling the tumor with a given tissue pattern recognition error rate versus randomly sampling a limited number of fields of view with higher target accuracy was modeled with a Monte-Carlo simulation. Under most scenarios, stereological methods of sampling limited fields of view outperformed whole-slide tissue pattern recognition approaches for accurate immunohistochemistry analysis. The importance of educating pathologists in the use of statistical sampling is discussed, along with the emerging role of hybrid whole-tissue imaging and stereological approaches.
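
    The trade-off modeled above can be caricatured in a few lines. The sketch below (Python; the cell counts, error rate, and field-of-view sizes are assumptions, and cell-level label flips stand in for region-level pattern-recognition errors) contrasts a whole-slide score distorted by classifier errors with an unbiased estimate from a handful of accurately read fields of view.

```python
# Toy contrast of whole-slide scoring with classifier errors vs sampled fields of view.
import numpy as np

rng = np.random.default_rng(5)
n_cells = 100000
cells = rng.random(n_cells) < 0.25           # "true" HER2-positive fraction of 25%

def whole_slide_score(error_rate=0.10):
    flips = rng.random(n_cells) < error_rate # pattern-recognition misclassifications
    return np.mean(cells ^ flips)            # biased toward 0.30 with symmetric errors

def sampled_fov_score(n_fov=10, fov_size=500):
    idx = rng.choice(n_cells, size=n_fov * fov_size, replace=False)
    return np.mean(cells[idx])               # unbiased, but with sampling variance

print(whole_slide_score(), sampled_fov_score())
```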

  4. The effects of digitizing rate and phase distortion errors on the shock response spectrum

    NASA Technical Reports Server (NTRS)

    Wise, J. H.

    1983-01-01

    Some of the methods used for acquisition and digitization of high-frequency transients in the analysis of pyrotechnic events, such as explosive bolts for spacecraft separation, are discussed with respect to the reduction of errors in the computed shock response spectrum. Equations are given for maximum error as a function of the sampling rate, phase distortion, and slew rate, and the effects of the characteristics of the filter used are analyzed. A filter noted to exhibit good passband amplitude response, phase response, and step-function response is a compromise between the flat passband of the elliptic filter and the phase response of the Bessel filter; it is suggested that it be used with a sampling rate of 10f (5 percent).

  5. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests

    PubMed Central

    Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results of different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ (errors/particle/cm²), while the MTTF is approximately 110.7 h. PMID:27583533

  6. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    PubMed

    He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results of different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (error/particle/cm^2), while the MTTF is approximately 110.7 h. PMID:27583533

  7. Error-Rate Bounds for Coded PPM on a Poisson Channel

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.

  8. Development and Validation of the Controller Acceptance Rating Scale (CARS): Results of Empirical Research

    NASA Technical Reports Server (NTRS)

    Lee, Katharine K.; Kerns, Karol; Bone, Randall

    2001-01-01

    The measurement of operational acceptability is important for the development, implementation, and evolution of air traffic management decision support tools. The Controller Acceptance Rating Scale was developed at NASA Ames Research Center for the development and evaluation of the Passive Final Approach Spacing Tool. CARS was modeled after a well-known pilot evaluation rating instrument, the Cooper-Harper Scale, and has since been used in the evaluation of the User Request Evaluation Tool, developed by MITRE's Center for Advanced Aviation System Development. In this paper, we provide a discussion of the development of CARS and an analysis of the empirical data collected with CARS to examine construct validity. Results of intraclass correlations indicated statistically significant reliability for the CARS. From the subjective workload data that were collected in conjunction with the CARS, it appears that the expected set of workload attributes was correlated with the CARS. As expected, the analysis also showed that CARS was a sensitive indicator of the impact of decision support tools on controller operations. Suggestions for future CARS development and its improvement are also provided.

  9. Safety Aspects of Pulsed Dose Rate Brachytherapy: Analysis of Errors in 1,300 Treatment Sessions

    SciTech Connect

    Koedooder, Kees Wieringen, Niek van; Grient, Hans N.B. van der; Herten, Yvonne R.J. van; Pieters, Bradley R.; Blank, Leo

    2008-03-01

    Purpose: To determine the safety of pulsed-dose-rate (PDR) brachytherapy by analyzing errors and technical failures during treatment. Methods and Materials: More than 1,300 patients underwent treatment with PDR brachytherapy, using five PDR remote afterloaders. Most patients were treated with consecutive pulse schemes, also outside regular office hours. Tumors were located in the breast, esophagus, prostate, bladder, gynecologic sites, anus/rectum, orbit, and head/neck, with a miscellaneous group of small numbers, such as the lip, nose, and bile duct. Errors and technical failures were analyzed for 1,300 treatment sessions, for which nearly 20,000 pulses were delivered. For each tumor localization, the number and type of errors were determined, and it was assessed which localizations were more error prone than others. Results: By routinely using the built-in dummy check source, only 0.2% of all pulses showed an error during the phase of the pulse when the active source was outside the afterloader. Localizations treated using flexible catheters had greater error frequencies than those treated with straight needles or rigid applicators. Disturbed pulse frequencies were in the range of 0.6% for the anus/rectum on a classic version 1 afterloader to 14.9% for orbital tumors using a version 2 afterloader. Exceeding the planned overall treatment time by >10% was observed in only 1% of all treatments. Patients received their dose as originally planned in 98% of all treatments. Conclusions: According to the experience in our institute with 1,300 PDR treatments, we found that PDR is a safe brachytherapy treatment modality, both during and outside of office hours.

  10. Reducing error rates in straintronic multiferroic nanomagnetic logic by pulse shaping

    NASA Astrophysics Data System (ADS)

    Munira, Kamaram; Xie, Yunkun; Nadri, Souheil; Forgues, Mark B.; Salehi Fashami, Mohammad; Atulasimha, Jayasimha; Bandyopadhyay, Supriyo; Ghosh, Avik W.

    2015-06-01

    Dipole-coupled nanomagnetic logic (NML), where nanomagnets (NMs) with bistable magnetization states act as binary switches and information is transferred between them via dipole-coupling and Bennett clocking, is a potential replacement for conventional transistor logic since magnets dissipate less energy than transistors when they switch in a logic circuit. Magnets are also ‘non-volatile’ and hence can store the results of a computation after the computation is over, thereby doubling as both logic and memory—a feat that transistors cannot achieve. However, dipole-coupled NML is much more error-prone than transistor logic at room temperature (>1%) because thermal noise can easily disrupt magnetization dynamics. Here, we study a particularly energy-efficient version of dipole-coupled NML known as straintronic multiferroic logic (SML) where magnets are clocked/switched with electrically generated mechanical strain. By appropriately ‘shaping’ the voltage pulse that generates strain, we show that the error rate in SML can be reduced to tolerable limits. We describe the error probabilities associated with various stress pulse shapes and discuss the trade-off between error rate and switching speed in SML. The lowest error probability is obtained when a ‘shaped’ high voltage pulse is applied to strain the output NM followed by a low voltage pulse. The high voltage pulse quickly rotates the output magnet’s magnetization by 90° and aligns it roughly along the minor (or hard) axis of the NM. Next, the low voltage pulse produces the critical strain to overcome the shape anisotropy energy barrier in the NM and produce a monostable potential energy profile in the presence of dipole coupling from the neighboring NM. The magnetization of the output NM then migrates to the global energy minimum in this monostable profile and completes a 180° rotation (magnetization flip) with high likelihood.

  11. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis

    NASA Technical Reports Server (NTRS)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, which meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the Manchester encoded bits specified. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.
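
    The core detector described above is an integrate-and-dump filter matched to Manchester-encoded bits. The following sketch only illustrates that principle (it assumes known bit boundaries and ideal gain control, and is not the brassboard's data transition tracking loop):

```python
import numpy as np

rng = np.random.default_rng(1)

def manchester_encode(bits, samples_per_bit=8):
    """Manchester coding: '1' -> +1 then -1 over the bit period, '0' -> -1 then +1."""
    half = samples_per_bit // 2
    wave = []
    for b in bits:
        first, second = (1.0, -1.0) if b else (-1.0, 1.0)
        wave.extend([first] * half + [second] * half)
    return np.array(wave)

def integrate_and_dump(rx, samples_per_bit=8):
    """Correlate each bit period with the Manchester template, then dump to a decision."""
    half = samples_per_bit // 2
    template = np.array([1.0] * half + [-1.0] * half)   # matched to a '1'
    n_bits = len(rx) // samples_per_bit
    rx = rx[: n_bits * samples_per_bit].reshape(n_bits, samples_per_bit)
    decisions = rx @ template                            # integrate (correlate) ...
    return decisions > 0                                  # ... and dump to hard bits

bits = rng.integers(0, 2, 10_000).astype(bool)
tx = manchester_encode(bits)
rx = tx + rng.normal(0.0, 1.5, tx.shape)                  # illustrative noise level
detected = integrate_and_dump(rx)
print("bit error rate:", np.mean(detected != bits))
```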

  12. Analysis of bit error rate for modified T-APPM under weak atmospheric turbulence channel

    NASA Astrophysics Data System (ADS)

    Liu, Zhe; Zhang, Qi; Wang, Yong-jun; Liu, Bo; Zhang, Li-jia; Wang, Kai-min; Xiao, Fei; Deng, Chao-gong

    2013-12-01

    T-APPM combines TCM (trellis-coded modulation) with APPM (amplitude pulse-position modulation) and has broad application prospects in space optical communication. Set partitioning in the standard T-APPM algorithm has the optimal performance in a multi-carrier system, but whether this method has the optimal performance in APPM, which is a single-carrier system, is unknown. To solve this problem, we first study the atmospheric channel model with weak turbulence; then a modified T-APPM algorithm is proposed that uses Gray-code mapping instead of set-partitioning mapping; finally, the two algorithms are simulated with the Monte-Carlo method. Simulation results showed that, at a bit error rate of 10^-4, the modified T-APPM algorithm achieved a 0.4 dB SNR gain, effectively improving the system's error performance.
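
    The key change in the modified algorithm is the bit-to-symbol labelling: Gray-code mapping makes adjacent symbols differ in a single bit, so the most likely symbol errors cost only one bit error. The snippet below is a generic illustration for an 8-ary alphabet; the actual T-APPM constellation and its set-partitioning rules are defined in the paper and are not reproduced here.

```python
def gray_map(index: int) -> int:
    """Natural-binary index -> Gray-coded label (adjacent indices differ by one bit)."""
    return index ^ (index >> 1)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

M = 8
natural = list(range(M))                 # natural-binary labels, shown for contrast
gray = [gray_map(i) for i in natural]    # Gray labels

# Bits flipped when a symbol is mistaken for its nearest neighbour:
for name, labels in [("natural", natural), ("gray", gray)]:
    flips = [hamming(labels[i], labels[i + 1]) for i in range(M - 1)]
    print(f"{name:8s} adjacent-symbol bit flips: {flips}")
# Gray labelling gives exactly one bit flip per adjacent-symbol error.
```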

  13. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  14. Performance monitoring following total sleep deprivation: effects of task type and error rate.

    PubMed

    Renn, Ryan P; Cote, Kimberly A

    2013-04-01

    There is a need to understand the neural basis of performance deficits that result from sleep deprivation. Performance monitoring tasks generate response-locked event-related potentials (ERPs), generated from the anterior cingulate cortex (ACC) located in the medial surface of the frontal lobe that reflect error processing. The outcome of previous research on performance monitoring during sleepiness has been mixed. The purpose of this study was to evaluate performance monitoring in a controlled study of experimental sleep deprivation using a traditional Flanker task, and to broaden this examination using a response inhibition task. Forty-nine young adults (24 male) were randomly assigned to a total sleep deprivation or rested control group. The sleep deprivation group was slower on the Flanker task and less accurate on a Go/NoGo task compared to controls. General attentional impairments were evident in stimulus-locked ERPs for the sleep deprived group: P300 was delayed on Flanker trials and smaller to Go-stimuli. Further, N2 was smaller to NoGo stimuli, and the response-locked ERN was smaller on both tasks, reflecting neurocognitive impairment during performance monitoring. In the Flanker task, higher error rate was associated with smaller ERN amplitudes for both groups. Examination of ERN amplitude over time showed that it attenuated in the rested control group as error rate increased, but such habituation was not apparent in the sleep deprived group. Poor performing sleep deprived individuals had a larger Pe response than controls, possibly indicating perseveration of errors. These data provide insight into the neural underpinnings of performance failure during sleepiness and have implications for workplace and driving safety.

  15. Wireless fetal heart rate monitoring in inpatient full-term pregnant women: testing functionality and acceptability.

    PubMed

    Boatin, Adeline A; Wylie, Blair; Goldfarb, Ilona; Azevedo, Robin; Pittel, Elena; Ng, Courtney; Haberer, Jessica

    2015-01-01

    We tested functionality and acceptability of a wireless fetal monitoring prototype technology in pregnant women in an inpatient labor unit in the United States. Women with full-term singleton pregnancies and no evidence of active labor were asked to wear the prototype technology for 30 minutes. We assessed functionality by evaluating the ability to successfully monitor the fetal heartbeat for 30 minutes, transmit these data to Cloud storage, and view the data on a web portal. Three obstetricians also rated fetal cardiotocographs on ease of readability. We assessed acceptability by administering closed and open-ended questions on perceived utility and likeability to pregnant women and clinicians interacting with the prototype technology. Thirty-two women were enrolled, 28 of whom (87.5%) successfully completed 30 minutes of fetal monitoring including transmission of cardiotocographs to the web portal. Four sessions, though completed, were not successfully uploaded to Cloud storage. Six non-study clinicians interacted with the prototype technology. The primary technical problem observed was a delay in data transmission between the prototype and the web portal, which ranged from 2 to 209 minutes. Delays were ascribed to Wi-Fi connectivity problems. Recorded cardiotocographs received a mean score of 4.2/5 (± 1.0) on ease of readability with an intraclass correlation of 0.81 (95% CI 0.45, 0.96). Both pregnant women and clinicians found the prototype technology likable (81.3% and 66.7% respectively), useful (96.9% and 66.7% respectively), and would either use it again or recommend its use to another pregnant woman (77.4% and 66.7% respectively). In this pilot study we found that this wireless fetal monitoring prototype technology has potential for use in a United States inpatient setting but would benefit from some technology changes. We found it to be acceptable to both pregnant women and clinicians. Further research is needed to assess feasibility of using this

  16. Wireless Fetal Heart Rate Monitoring in Inpatient Full-Term Pregnant Women: Testing Functionality and Acceptability

    PubMed Central

    Boatin, Adeline A.; Wylie, Blair; Goldfarb, Ilona; Azevedo, Robin; Pittel, Elena; Ng, Courtney; Haberer, Jessica

    2015-01-01

    We tested functionality and acceptability of a wireless fetal monitoring prototype technology in pregnant women in an inpatient labor unit in the United States. Women with full-term singleton pregnancies and no evidence of active labor were asked to wear the prototype technology for 30 minutes. We assessed functionality by evaluating the ability to successfully monitor the fetal heartbeat for 30 minutes, transmit these data to Cloud storage, and view the data on a web portal. Three obstetricians also rated fetal cardiotocographs on ease of readability. We assessed acceptability by administering closed and open-ended questions on perceived utility and likeability to pregnant women and clinicians interacting with the prototype technology. Thirty-two women were enrolled, 28 of whom (87.5%) successfully completed 30 minutes of fetal monitoring including transmission of cardiotocographs to the web portal. Four sessions, though completed, were not successfully uploaded to Cloud storage. Six non-study clinicians interacted with the prototype technology. The primary technical problem observed was a delay in data transmission between the prototype and the web portal, which ranged from 2 to 209 minutes. Delays were ascribed to Wi-Fi connectivity problems. Recorded cardiotocographs received a mean score of 4.2/5 (± 1.0) on ease of readability with an intraclass correlation of 0.81 (95% CI 0.45, 0.96). Both pregnant women and clinicians found the prototype technology likable (81.3% and 66.7% respectively), useful (96.9% and 66.7% respectively), and would either use it again or recommend its use to another pregnant woman (77.4% and 66.7% respectively). In this pilot study we found that this wireless fetal monitoring prototype technology has potential for use in a United States inpatient setting but would benefit from some technology changes. We found it to be acceptable to both pregnant women and clinicians. Further research is needed to assess feasibility of using this

  17. Effects of amplitude distortions and IF equalization on satellite communication system bit-error rate performance

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.

    1990-01-01

    Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Bit-error rate (BER) degradation resulting from several types of amplitude response distortions were measured. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory simulated satellite channel. The results of these experiments are presented.

  18. Effects of amplitude distortions and IF equalization on satellite communication system bit-error rate performance

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.

    1990-01-01

    Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Laboratory experiments performed at NASA Lewis measured the bit-error-rate (BER) degradation resulting from several types of amplitude response distortions. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory-simulated satellite channel. This paper presents the results of these experiments.

  19. Forward error correction and spatial diversity techniques for high-data-rate MILSATCOM over a slow-fading, nuclear-disturbed channel

    NASA Astrophysics Data System (ADS)

    Paul, Heywood I.; Meader, Charles B.; Lyons, Daniel A.; Ayers, David R.

    Forward error correction (FEC) and spatial diversity techniques are considered for improving the reliability of high-data-rate military satellite communication (MILSATCOM) over a slow-fading, nuclear-disturbed channel. Slow fading, which occurs when the channel decorrelation time is much greater than the transmitted symbol interval, is characterized by deep fades and, without special precautions, long bursts of errors over high-data-rate communication links. Using the widely accepted Defense Nuclear Agency (DNA) nuclear-scintillated channel model, the authors derive performance tradeoffs among required interleaver storage, FEC, spatial diversity, and link signal-to-noise ratio for differential binary phase shift keying (DBPSK) in the slow-fading environment. Spatial diversity is found to yield impressive gains without the large memory storage and transmission relay requirements associated with interleaving.
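
    The value of spatial diversity on a slow-fading channel can be seen in a small Monte-Carlo sketch. The code below is illustrative only: it uses independent Rayleigh fades held constant over a block (a stand-in for the DNA scintillation model), DBPSK with per-branch differential detection, and simple post-detection combining; all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def dbpsk_block_ber(n_branches=2, snr_db=15.0, n_blocks=2000, block_len=200):
    """Monte-Carlo BER of differentially detected DBPSK with post-detection combining.

    Each block sees one independent Rayleigh fade per branch (slow fading: the
    channel is constant over the block, i.e. decorrelation time >> symbol time).
    """
    snr = 10 ** (snr_db / 10)
    errors, total = 0, 0
    for _ in range(n_blocks):
        bits = rng.integers(0, 2, block_len)
        diff = np.cumsum(bits) % 2                  # differential encoding
        symbols = 1.0 - 2.0 * diff                  # BPSK: 0 -> +1, 1 -> -1
        # One complex Rayleigh gain per branch, fixed for the whole block
        h = (rng.normal(size=n_branches) + 1j * rng.normal(size=n_branches)) / np.sqrt(2)
        noise_sigma = np.sqrt(1.0 / (2.0 * snr))
        rx = h[:, None] * symbols[None, :] + noise_sigma * (
            rng.normal(size=(n_branches, block_len))
            + 1j * rng.normal(size=(n_branches, block_len))
        )
        # Differential detection per branch, then sum the branch metrics
        metric = np.sum(np.real(rx[:, 1:] * np.conj(rx[:, :-1])), axis=0)
        detected = (metric < 0).astype(int)          # negative metric -> bit '1'
        errors += np.count_nonzero(detected != bits[1:])
        total += block_len - 1
    return errors / total

for L in (1, 2, 4):
    print(f"{L} branch(es): BER ≈ {dbpsk_block_ber(n_branches=L):.4f}")
```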

  20. Creation and implementation of department-wide structured reports: an analysis of the impact on error rate in radiology reports.

    PubMed

    Hawkins, C Matthew; Hall, Seth; Zhang, Bin; Towbin, Alexander J

    2014-10-01

    The purpose of this study was to evaluate and compare textual error rates and subtypes in radiology reports before and after implementation of department-wide structured reports. Randomly selected radiology reports that were generated following the implementation of department-wide structured reports were evaluated for textual errors by two radiologists. For each report, the text was compared to the corresponding audio file. Errors in each report were tabulated and classified. Error rates were compared to results from a prior study performed before the implementation of structured reports. Calculated error rates included the average number of errors per report, the average number of nongrammatical errors per report, the percentage of reports with an error, and the percentage of reports with a nongrammatical error. Identical versions of voice-recognition software were used for both studies. A total of 644 radiology reports were randomly evaluated as part of this study. There was a statistically significant reduction in the percentage of reports with nongrammatical errors (33 to 26%; p = 0.024). The likelihood of at least one missense omission error (omission errors that changed the meaning of a phrase or sentence) occurring in a report was significantly reduced from 3.5 to 1.2% (p = 0.0175). A statistically significant reduction in the likelihood of at least one commission error (retained statements from a standardized report that contradict the dictated findings or impression) occurring in a report was also observed (3.9 to 0.8%; p = 0.0007). Carefully constructed structured reports can help to reduce certain error types in radiology reports.

  1. Equilibrating errors: reliable estimation of information transmission rates in biological systems with spectral analysis-based methods.

    PubMed

    Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti

    2014-06-01

    Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, possibly with a strong effect on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using, the window size at which the absolute values of these errors are equal and opposite, thus cancelling each other and allowing minimally biased measurement of neural coding.
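
    The spectral quantities involved can all be derived from the magnitude-squared coherence, with SNR(f) = γ²(f)/(1 − γ²(f)) and an information rate of ∫ −log₂(1 − γ²(f)) df. The sketch below is not the authors' window-selection algorithm; it only illustrates, on a toy delayed-filter system, how the estimate shifts with the segment (window) size used for the spectral estimates.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(3)

fs = 1000.0                                  # sampling rate (Hz)
n = int(60 * fs)                             # 60 s recording
stimulus = rng.normal(size=n)                # white-noise "light contrast"

# Toy system: low-pass filtered, delayed stimulus plus additive noise
kernel = np.exp(-np.arange(50) / 10.0)
kernel /= kernel.sum()
response = np.convolve(stimulus, kernel, mode="full")[:n]
response = np.roll(response, 20)             # 20 ms delay -> source of bias error
response += 0.5 * rng.normal(size=n)

for nperseg in (256, 1024, 4096):            # different window (segment) sizes
    f, gamma2 = coherence(stimulus, response, fs=fs, nperseg=nperseg)
    gamma2 = np.clip(gamma2, 0.0, 1.0 - 1e-12)
    df = f[1] - f[0]
    # Shannon rate: integral of log2(1 + SNR(f)) = integral of -log2(1 - coherence)
    rate = np.sum(-np.log2(1.0 - gamma2)) * df
    print(f"window {nperseg:5d} samples -> information-rate estimate ≈ {rate:8.1f} bit/s")
```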

  2. The effect of retinal image error update rate on human vestibulo-ocular reflex gain adaptation.

    PubMed

    Fadaee, Shannon B; Migliaccio, Americo A

    2016-04-01

    The primary function of the angular vestibulo-ocular reflex (VOR) is to stabilise images on the retina during head movements. Retinal image movement is the likely feedback signal that drives VOR modification/adaptation for different viewing contexts. However, it is not clear whether a retinal image position or velocity error is used primarily as the feedback signal. Recent studies examining this signal are limited because they used near viewing to modify the VOR. However, it is not known whether near viewing drives VOR adaptation or is a pre-programmed contextual cue that modifies the VOR. Our study is based on analysis of the VOR evoked by horizontal head impulses during an established adaptation task. Fourteen human subjects underwent incremental unilateral VOR adaptation training and were tested using the scleral search coil technique over three separate sessions. The update rate of the laser target position (source of the retinal image error signal) used to drive VOR adaptation was different for each session [50 (once every 20 ms), 20 and 15/35 Hz]. Our results show unilateral VOR adaptation occurred at 50 and 20 Hz for both the active (23.0 ± 9.6 and 11.9 ± 9.1% increase on adapting side, respectively) and passive VOR (13.5 ± 14.9, 10.4 ± 12.2%). At 15 Hz, unilateral adaptation no longer occurred in the subject group for both the active and passive VOR, whereas individually, 4/9 subjects tested at 15 Hz had significant adaptation. Our findings suggest that 1-2 retinal image position error signals every 100 ms (i.e. target position update rate 15-20 Hz) are sufficient to drive VOR adaptation.

  3. The effect of retinal image error update rate on human vestibulo-ocular reflex gain adaptation.

    PubMed

    Fadaee, Shannon B; Migliaccio, Americo A

    2016-04-01

    The primary function of the angular vestibulo-ocular reflex (VOR) is to stabilise images on the retina during head movements. Retinal image movement is the likely feedback signal that drives VOR modification/adaptation for different viewing contexts. However, it is not clear whether a retinal image position or velocity error is used primarily as the feedback signal. Recent studies examining this signal are limited because they used near viewing to modify the VOR. However, it is not known whether near viewing drives VOR adaptation or is a pre-programmed contextual cue that modifies the VOR. Our study is based on analysis of the VOR evoked by horizontal head impulses during an established adaptation task. Fourteen human subjects underwent incremental unilateral VOR adaptation training and were tested using the scleral search coil technique over three separate sessions. The update rate of the laser target position (source of the retinal image error signal) used to drive VOR adaptation was different for each session [50 (once every 20 ms), 20 and 15/35 Hz]. Our results show unilateral VOR adaptation occurred at 50 and 20 Hz for both the active (23.0 ± 9.6 and 11.9 ± 9.1% increase on adapting side, respectively) and passive VOR (13.5 ± 14.9, 10.4 ± 12.2%). At 15 Hz, unilateral adaptation no longer occurred in the subject group for both the active and passive VOR, whereas individually, 4/9 subjects tested at 15 Hz had significant adaptation. Our findings suggest that 1-2 retinal image position error signals every 100 ms (i.e. target position update rate 15-20 Hz) are sufficient to drive VOR adaptation. PMID:26715411

  4. Influence of UAS Pilot Communication and Execution Delay on Controller's Acceptability Ratings of UAS-ATC Interactions

    NASA Technical Reports Server (NTRS)

    Vu, Kim-Phuong L.; Morales, Gregory; Chiappe, Dan; Strybel, Thomas Z.; Battiste, Vernol; Shively, Jay; Buker, Timothy J

    2013-01-01

    Successful integration of UAS in the NAS will require that UAS interactions with the air traffic management system be similar to interactions between manned aircraft and air traffic management. For example, UAS response times to air traffic controller (ATCo) clearances should be equivalent to those that are currently found to be acceptable with manned aircraft. Prior studies have examined communication delays with manned aircraft. Unfortunately, there is no analogous body of research for UAS. The goal of the present study was to determine how UAS pilot communication and execution delays affect ATCos' acceptability ratings of UAS pilot responses when the UAS is operating in the NAS. Eight radar-certified controllers managed traffic in a modified ZLA sector with one UAS flying in it. In separate scenarios, the UAS pilot verbal communication and execution delays were either short (1.5 s) or long (5 s) and either constant or variable. The ATCo acceptability of UAS pilot communication and execution delays was measured subjectively via post-trial ratings. UAS pilot verbal communication delays were rated as acceptable 92% of the time when the delay was short. This acceptability level decreased to 64% when the delay was long. UAS pilot execution delay had less of an influence on ATCo acceptability ratings in the present simulation. Implications of these findings for the integration of UAS in the NAS are discussed.

  5. Reproduced waveform and bit error rate analysis of a patterned perpendicular medium R/W channel

    NASA Astrophysics Data System (ADS)

    Suzuki, Y.; Saito, H.; Aoi, H.; Muraoka, H.; Nakamura, Y.

    2005-05-01

    Patterned media were investigated as candidates for 1 Tb/in² recording. In the case of recording with a patterned medium, the noise due to the irregularity of the pattern has to be taken into account instead of the medium noise due to grains. The bit error rate was studied for both continuous and patterned media to evaluate the advantages of patterning. The bit aspect ratio (BPI/TPI) was set to two for the patterned media and four for the continuous medium. The bit error rate (BER), calculated with a PR(1,1) channel simulator, indicated that for both double-layered and single-layered patterned media an improvement of the BER over conventional continuous media is expected when the patterning jitter is controlled to within 8%. When the system noise is large, the BER of single-layered patterned media deteriorates more rapidly than that of double-layered media, due to the higher boost in the PR(1,1) channel. It was found that making the land-length to bit-length ratio large was quite effective at improving the BER.

  6. Analytical Evaluation of Bit Error Rate Performance of a Free-Space Optical Communication System with Receive Diversity Impaired by Pointing Error

    NASA Astrophysics Data System (ADS)

    Nazrul Islam, A. K. M.; Majumder, S. P.

    2015-06-01

    Analysis is carried out to evaluate the conditional bit error rate, conditioned on a given value of pointing error, for a Free Space Optical (FSO) link with multiple receivers using Equal Gain Combining (EGC). The probability density function (pdf) of the output signal-to-noise ratio (SNR) is also derived in the presence of pointing error with EGC. The average BERs of SISO and SIMO FSO links are analytically evaluated by averaging the conditional BER over the pdf of the output SNR. The BER performance results are evaluated for several values of pointing jitter parameters and number of IM/DD receivers. The results show that the FSO system suffers a significant power penalty due to pointing error, which can be reduced by increasing the number of receivers at a given value of pointing error. The improvement of receiver sensitivity over SISO is about 4 dB and 9 dB when the number of photodetectors is 2 and 4, respectively, at a BER of 10^-10. It is also noticed that a system with receive diversity can tolerate a higher value of pointing error at a given BER and transmit power.

  7. Soft error rate simulation and initial design considerations of neutron intercepting silicon chip (NISC)

    NASA Astrophysics Data System (ADS)

    Celik, Cihangir

    Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law, 1965, which states that the number of transistors in a given space would double every two years. The most available memory architectures today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in the micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on the device operation, and a system reset or recovery is needed to return to proper operations. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano

  8. Adaptive planning strategy for high dose rate prostate brachytherapy—a simulation study on needle positioning errors.

    PubMed

    Borot de Battisti, M; Denis de Senneville, B; Maenhout, M; Hautvast, G; Binnekamp, D; Lagendijk, J J W; van Vulpen, M; Moerland, M A

    2016-03-01

    The development of magnetic resonance (MR) guided high dose rate (HDR) brachytherapy for prostate cancer has gained increasing interest for delivering a high tumor dose safely in a single fraction. To support needle placement in the limited workspace inside the closed-bore MRI, a single-needle MR-compatible robot is currently under development at the University Medical Center Utrecht (UMCU). This robotic device taps the needle in a divergent way from a single rotation point into the prostate. With this setup, it is warranted to deliver the irradiation dose by successive insertions of the needle. Although robot-assisted needle placement is expected to be more accurate than manual template-guided insertion, needle positioning errors may occur and are likely to modify the pre-planned dose distribution.In this paper, we propose a dose plan adaptation strategy for HDR prostate brachytherapy with feedback on the needle position: a dose plan is made at the beginning of the interventional procedure and updated after each needle insertion in order to compensate for possible needle positioning errors. The introduced procedure can be used with the single needle MR-compatible robot developed at the UMCU. The proposed feedback strategy was tested by simulating complete HDR procedures with and without feedback on eight patients with different numbers of needle insertions (varying from 4 to 12). In of the cases tested, the number of clinically acceptable plans obtained at the end of the procedure was larger with feedback compared to the situation without feedback. Furthermore, the computation time of the feedback between each insertion was below 100 s which makes it eligible for intra-operative use.

  9. Finding the right coverage: the impact of coverage and sequence quality on single nucleotide polymorphism genotyping error rates.

    PubMed

    Fountain, Emily D; Pauli, Jonathan N; Reid, Brendan N; Palsbøll, Per J; Peery, M Zachariah

    2016-07-01

    Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown. Here, we estimated genotyping error rates in SNPs genotyped with double digest RAD sequencing from Mendelian incompatibilities in known mother-offspring dyads of Hoffman's two-toed sloth (Choloepus hoffmanni) across a range of coverage and sequence quality criteria, for both reference-aligned and de novo-assembled data sets. Genotyping error rates were more sensitive to coverage than sequence quality, and low coverage yielded high error rates, particularly in de novo-assembled data sets. For example, coverage ≥5 yielded median genotyping error rates of ≥0.03 and ≥0.11 in reference-aligned and de novo-assembled data sets, respectively. Genotyping error rates declined to ≤0.01 in reference-aligned data sets with a coverage ≥30, but remained ≥0.04 in the de novo-assembled data sets. We observed approximately 10- and 13-fold declines in the number of loci sampled in the reference-aligned and de novo-assembled data sets when coverage was increased from ≥5 to ≥30 at quality score ≥30, respectively. Finally, we assessed the effects of genotyping coverage on a common population genetic application, parentage assignment, and showed that the proportion of incorrectly assigned maternities was relatively high at low coverage. Overall, our results suggest that the trade-off between sample size and genotyping error rates should be considered prior to building sequencing libraries, that reporting of genotyping error rates should become standard practice, and that the effects of genotyping errors on inference should be evaluated in restriction-enzyme-based SNP studies.
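
    The clearest Mendelian incompatibility in a mother-offspring dyad at a biallelic SNP is a pair of opposing homozygotes, which cannot occur without at least one genotyping error. The sketch below is a simplified, hypothetical version of such error estimation (simulated genotypes, no coverage or quality filtering); it yields a conservative lower bound rather than the per-genotype rates reported in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def mendelian_error_rate(mothers, offspring):
    """Crude per-genotype error-rate estimate from mother-offspring incompatibilities.

    Genotypes are coded 0/1/2 (alternate-allele count), -1 = missing. At a biallelic
    locus a dyad is flagged only for opposing homozygotes (0 vs 2), which requires at
    least one genotyping error, so the estimate is a conservative lower bound.
    """
    valid = (mothers >= 0) & (offspring >= 0)
    incompatible = valid & (np.abs(mothers - offspring) == 2)
    n_genotypes = 2 * valid.sum()                 # two genotypes compared per dyad-locus
    return incompatible.sum() / n_genotypes, incompatible.sum(), valid.sum()

# Hypothetical data: offspring start as copies of the maternal genotype (always
# compatible), then errors are injected into both matrices at a known rate.
n_dyads, n_loci, true_error = 50, 5000, 0.02
mothers = rng.integers(0, 3, size=(n_dyads, n_loci))
offspring = mothers.copy()
for genotypes in (mothers, offspring):
    mask = rng.random(genotypes.shape) < true_error
    genotypes[mask] = rng.integers(0, 3, mask.sum())

rate, n_bad, n_pairs = mendelian_error_rate(mothers, offspring)
print(f"opposing homozygotes: {n_bad} of {n_pairs} dyad-loci")
print(f"estimated error rate: {rate:.4f} (true injected rate {true_error})")
```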

  10. [Can new technologies reduce the rate of medications errors in adult intensive care?].

    PubMed

    Benoit, E; Beney, J

    2011-09-01

    In the intensive care environment, technology is omnipresent to ensure the monitoring and the administration of critical drugs to unstable patients. Since the early 2000s, computerized physician order entry (CPOE), bar-code-assisted medication administration (BCMA), "smart" infusion pumps (SIP), the electronic medication administration record (eMAR), and automated dispensing systems (ADS) have been recommended to reduce medication errors. About ten years later, their implementation has risen but remains modest. The objective of this study is to determine the impact of these technologies on the rate of medication errors (ME) in adult intensive care. CPOE allows a strong and significant reduction of ME, especially the least critical ones. Only when adding a clinical decision support system (CDSS) could CPOE allow a reduction of serious errors; used alone, it could even increase them. The available studies do not have sufficient power to demonstrate the benefits of SIP or BCMA on ME. However, these devices reveal practices such as the overriding of alerts. Power or methodology problems and conflicting results do not allow determining the ability of ADS to reduce the incidence of ME in intensive care. The studies investigating these technologies are not very recent, are limited in number, and have methodological weaknesses, which does not allow determining whether they can reduce the incidence of MEs in adult intensive care. Currently, the benefits appear to be limited, which may be explained by the complexity of their integration into the care process. Special attention should be given to the communication between caregivers, the human-computer interface, and the caregivers' training.

  11. Bit error rate performance of Image Processing Facility high density tape recorders

    NASA Technical Reports Server (NTRS)

    Heffner, P.

    1981-01-01

    The Image Processing Facility at the NASA/Goddard Space Flight Center uses High Density Tape Recorders (HDTR's) to transfer high volume image data and ancillary information from one system to another. For ancillary information, it is required that very low bit error rates (BER's) accompany the transfers. The facility processes about 10 to the 11th bits of image data per day from many sensors, involving 15 independent processing systems requiring the use of HDTR's. When acquired, the 16 HDTR's offered state-of-the-art performance of 1 x 10 to the -6th BER as specified. The BER requirement was later upgraded in two steps: (1) incorporating data randomizing circuitry to yield a BER of 2 x 10 to the -7th and (2) further modifying to include a bit error correction capability to attain a BER of 2 x 10 to the -9th. The total improvement factor was 500 to 1. Attention is given here to the background, technical approach, and final results of these modifications. Also discussed are the format of the data recorded by the HDTR, the magnetic tape format, the magnetic tape dropout characteristics as experienced in the Image Processing Facility, the head life history, and the reliability of the HDTR's.

  12. Ancient documents bleed-through evaluation and its application for predicting OCR error rates

    NASA Astrophysics Data System (ADS)

    Rabeux, V.; Journet, N.; Domenger, J. P.

    2011-01-01

    This article presents a way to evaluate the bleed-through defect on very old document images. We design measures to quantify and evaluate the verso ink bleeding through the paper onto the recto side. Measuring the bleed-through defect allows us to perform statistical analyses that are able to predict the feasibility of different post-scan tasks. In this article we choose to illustrate our measures by creating two OCR error rate prediction models based on bleed-through evaluation. Two models are proposed: one for Abbyy FineReader, a very powerful commercial OCR, and one for OCRopus, which is sponsored by Google. Both prediction models appear to be very accurate when calculating various statistical indicators.
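
    A prediction model of the kind described can be as simple as a least-squares regression from a bleed-through measure to an observed OCR error rate. The sketch below uses hypothetical per-document data and an ordinary linear fit; the paper's actual measures and model forms are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical training data: one bleed-through measure per document image
# (0 = clean, 1 = heavily degraded) and the OCR character error rate measured on it.
bleed = rng.uniform(0.0, 1.0, 200)
ocr_error = np.clip(0.02 + 0.35 * bleed + rng.normal(0.0, 0.03, bleed.size), 0.0, 1.0)

# Ordinary least-squares fit of a linear prediction model: error_rate ~ a*bleed + b
A = np.column_stack([bleed, np.ones_like(bleed)])
(a, b), *_ = np.linalg.lstsq(A, ocr_error, rcond=None)

pred = A @ np.array([a, b])
r2 = 1.0 - np.sum((ocr_error - pred) ** 2) / np.sum((ocr_error - ocr_error.mean()) ** 2)
print(f"model: error_rate ≈ {a:.3f} * bleed + {b:.3f}   (R² = {r2:.3f})")

# Predict OCR feasibility for a new scan before running the OCR engine
new_measure = 0.7
print(f"predicted error rate at bleed-through {new_measure}: {a * new_measure + b:.3f}")
```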

  13. SITE project. Phase 1: Continuous data bit-error-rate testing

    NASA Technical Reports Server (NTRS)

    Fujikawa, Gene; Kerczewski, Robert J.

    1992-01-01

    The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.

  14. Bit Error Rate Performance of Partially Coherent Dual-Branch SSC Receiver over Composite Fading Channels

    NASA Astrophysics Data System (ADS)

    Milić, Dejan N.; Đorđević, Goran T.

    2013-01-01

    In this paper, we study the effects of imperfect reference signal recovery on the bit error rate (BER) performance of dual-branch switch and stay combining receiver over Nakagami-m fading/gamma shadowing channels with arbitrary parameters. The average BER of quaternary phase shift keying is evaluated under the assumption that the reference carrier signal is extracted from the received modulated signal. We compute numerical results illustrating simultaneous influence of average signal-to-noise ratio per bit, fading severity, shadowing, phase-locked loop bandwidth-bit duration (BLTb) product, and switching threshold on BER performance. The effects of BLTb on receiver performance under different channel conditions are emphasized. Optimal switching threshold is determined which minimizes BER performance under given channel and receiver parameters.

  15. Error rates in a clinical data repository: lessons from the transition to electronic data transfer—a descriptive study

    PubMed Central

    Hong, Matthew K H; Yao, Henry H I; Pedersen, John S; Peters, Justin S; Costello, Anthony J; Murphy, Declan G; Hovens, Christopher M; Corcoran, Niall M

    2013-01-01

    Objective Data errors are a well-documented part of clinical datasets as is their potential to confound downstream analysis. In this study, we explore the reliability of manually transcribed data across different pathology fields in a prostate cancer database and also measure error rates attributable to the source data. Design Descriptive study. Setting Specialist urology service at a single centre in metropolitan Victoria in Australia. Participants Between 2004 and 2011, 1471 patients underwent radical prostatectomy at our institution. In a large proportion of these cases, clinicopathological variables were recorded by manual data-entry. In 2011, we obtained electronic versions of the same printed pathology reports for our cohort. The data were electronically imported in parallel to any existing manual entry record enabling direct comparison between them. Outcome measures Error rates of manually entered data compared with electronically imported data across clinicopathological fields. Results 421 patients had at least 10 comparable pathology fields between the electronic import and manual records and were selected for study. 320 patients had concordant data between manually entered and electronically populated fields in a median of 12 pathology fields (range 10–13), indicating an outright accuracy in manually entered pathology data in 76% of patients. Across all fields, the error rate was 2.8%, while individual field error rates ranged from 0.5% to 6.4%. Fields in text formats were significantly more error-prone than those with direct measurements or involving numerical figures (p<0.001). 971 cases were available for review of error within the source data, with figures of 0.1–0.9%. Conclusions While the overall rate of error was low in manually entered data, individual pathology fields were variably prone to error. High-quality pathology data can be obtained for both prospective and retrospective parts of our data repository and the electronic checking of source

  16. Information-Gathering Patterns Associated with Higher Rates of Diagnostic Error

    ERIC Educational Resources Information Center

    Delzell, John E., Jr.; Chumley, Heidi; Webb, Russell; Chakrabarti, Swapan; Relan, Anju

    2009-01-01

    Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine if medical students' information-gathering patterns formed clusters of similar strategies, and if so (2) to calculate the…

  17. Asymptotic error-rate analysis of FSO links using transmit laser selection over gamma-gamma atmospheric turbulence channels with pointing errors.

    PubMed

    García-Zambrana, Antonio; Castillo-Vázquez, Beatriz; Castillo-Vázquez, Carmen

    2012-01-30

    Since free-space optical (FSO) systems are usually installed on high buildings and building sway may cause vibrations in the transmitted beam, an unsuitable alignment between transmitter and receiver together with fluctuations in the irradiance of the transmitted optical beam due to the atmospheric turbulence can severely degrade the performance of optical wireless communication systems. In this paper, asymptotic bit error-rate (BER) performance for FSO communication systems using transmit laser selection over atmospheric turbulence channels with pointing errors is analyzed. Novel closed-form asymptotic expressions are derived when the irradiance of the transmitted optical beam is susceptible to either a wide range of turbulence conditions (weak to strong), following a gamma-gamma distribution of parameters α and β, or pointing errors, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. Obtained results provide significant insight into the impact of various system and channel parameters, showing that the diversity order is independent of the pointing error when the equivalent beam radius at the receiver is at least 2(min{α,β})^(1/2) times the value of the pointing error displacement standard deviation at the receiver. Moreover, since proper FSO transmission requires transmitters with accurate control of their beamwidth, asymptotic expressions are used to find the optimum beamwidth that minimizes the BER at different turbulence conditions. Simulation results are further demonstrated to confirm the accuracy and usefulness of the derived results, showing that asymptotic expressions here obtained lead to simple bounds on the bit error probability that get tighter over a wider range of signal-to-noise ratio (SNR) as the turbulence strength increases.

  18. Measuring error rates in genomic perturbation screens: gold standards for human functional genomics.

    PubMed

    Hart, Traver; Brown, Kevin R; Sircoulomb, Fabrice; Rottapel, Robert; Moffat, Jason

    2014-01-01

    Technological advancement has opened the door to systematic genetics in mammalian cells. Genome-scale loss-of-function screens can assay fitness defects induced by partial gene knockdown, using RNA interference, or complete gene knockout, using new CRISPR techniques. These screens can reveal the basic blueprint required for cellular proliferation. Moreover, comparing healthy to cancerous tissue can uncover genes that are essential only in the tumor; these genes are targets for the development of specific anticancer therapies. Unfortunately, progress in this field has been hampered by off-target effects of perturbation reagents and poorly quantified error rates in large-scale screens. To improve the quality of information derived from these screens, and to provide a framework for understanding the capabilities and limitations of CRISPR technology, we derive gold-standard reference sets of essential and nonessential genes, and provide a Bayesian classifier of gene essentiality that outperforms current methods on both RNAi and CRISPR screens. Our results indicate that CRISPR technology is more sensitive than RNAi and that both techniques have nontrivial false discovery rates that can be mitigated by rigorous analytical methods.

  19. Detecting trends in raptor counts: power and type I error rates of various statistical tests

    USGS Publications Warehouse

    Hatfield, J.S.; Gould, W.R.; Hoover, B.A.; Fuller, M.R.; Lindquist, E.L.

    1996-01-01

    We conducted simulations that estimated power and type I error rates of statistical tests for detecting trends in raptor population count data collected from a single monitoring site. Results of the simulations were used to help analyze count data of bald eagles (Haliaeetus leucocephalus) from 7 national forests in Michigan, Minnesota, and Wisconsin during 1980-1989. Seven statistical tests were evaluated, including simple linear regression on the log scale and linear regression with a permutation test. Using 1,000 replications each, we simulated n = 10 and n = 50 years of count data and trends ranging from -5 to 5% change/year. We evaluated the tests at 3 critical levels (alpha = 0.01, 0.05, and 0.10) for both upper- and lower-tailed tests. Exponential count data were simulated by adding sampling error with a coefficient of variation of 40% from either a log-normal or autocorrelated log-normal distribution. Not surprisingly, tests performed with 50 years of data were much more powerful than tests with 10 years of data. Positive autocorrelation inflated alpha-levels upward from their nominal levels, making the tests less conservative and more likely to reject the null hypothesis of no trend. Of the tests studied, Cox and Stuart's test and Pollard's test clearly had lower power than the others. Surprisingly, the linear regression t-test, Collins' linear regression permutation test, and the nonparametric Lehmann's and Mann's tests all had similar power in our simulations. Analyses of the count data suggested that bald eagles had increasing trends on at least 2 of the 7 national forests during 1980-1989.
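
    The simulation design can be reproduced in miniature: generate exponential trends with multiplicative (log-normal) sampling error, regress log counts on year, and tabulate how often the no-trend null is rejected. The sketch below is a simplified, hypothetical version that covers only the uncorrelated log-normal case and a two-sided regression t-test, not the seven tests or the autocorrelated scenarios of the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def rejection_rate(trend=0.0, n_years=10, cv=0.40, alpha=0.05, n_reps=1000):
    """Fraction of simulated series in which log-linear regression rejects 'no trend'.

    trend: true annual rate of change (e.g. 0.05 = +5 %/year); trend = 0 gives the
    type I error rate, trend != 0 gives power. Counts carry log-normal sampling error
    with the given coefficient of variation.
    """
    years = np.arange(n_years)
    sigma = np.sqrt(np.log(1 + cv ** 2))       # log-normal sigma for the target CV
    rejections = 0
    for _ in range(n_reps):
        expected = 100.0 * (1 + trend) ** years
        counts = expected * rng.lognormal(-sigma ** 2 / 2, sigma, n_years)
        slope, _, _, p_value, _ = stats.linregress(years, np.log(counts))
        rejections += p_value < alpha          # two-sided test on the slope
    return rejections / n_reps

print("type I error (no trend, n=10):", rejection_rate(trend=0.0))
print("power (+5 %/yr, n=10):        ", rejection_rate(trend=0.05))
print("power (+5 %/yr, n=50):        ", rejection_rate(trend=0.05, n_years=50))
```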

  20. Power penalties for multi-level PAM modulation formats at arbitrary bit error rates

    NASA Astrophysics Data System (ADS)

    Kaliteevskiy, Nikolay A.; Wood, William A.; Downie, John D.; Hurley, Jason; Sterlingov, Petr

    2016-03-01

    There is considerable interest in combining multi-level pulsed amplitude modulation formats (PAM-L) and forward error correction (FEC) in next-generation, short-range optical communications links for increased capacity. In this paper we derive new formulas for the optical power penalties due to modulation format complexity relative to PAM-2 and due to inter-symbol interference (ISI). We show that these penalties depend on the required system bit-error rate (BER) and that the conventional formulas overestimate link penalties. Our corrections to the standard formulas are very small at conventional BER levels (typically 1×10^-12) but become significant at the higher BER levels enabled by FEC technology, especially for signal distortions due to ISI. The standard formula for format complexity, P = 10log(L-1), is shown to overestimate the actual penalty for PAM-4 and PAM-8 by approximately 0.1 and 0.25 dB respectively at 1×10^-3 BER. Then we extend the well-known PAM-2 ISI penalty estimation formula from the IEEE 802.3 standard 10G link modeling spreadsheet to the large-BER case and generalize it for arbitrary PAM-L formats. To demonstrate and verify the BER dependence of the ISI penalty, a set of PAM-2 experiments and Monte-Carlo modeling simulations are reported. The experimental results and simulations confirm that the conventional formulas can significantly overestimate ISI penalties at relatively high BER levels. In the experiments, overestimates up to 2 dB are observed at 1×10^-3 BER.
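
    The BER dependence of the format-complexity penalty can be checked from the textbook Gray-coded PAM-L approximation BER ≈ 2(L−1)/(L·log₂L)·Q(x). The sketch below is an independent back-of-the-envelope calculation under that assumption, not the paper's derivation; at 1×10^-3 BER it reproduces corrections of roughly 0.1 dB for PAM-4 and 0.25 dB for PAM-8, consistent with the figures quoted above.

```python
import numpy as np
from scipy.special import erfcinv

def qfunc_inv(p):
    """Inverse of Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return np.sqrt(2.0) * erfcinv(2.0 * p)

def pam_penalty_db(L, ber):
    """Optical power penalty of Gray-coded PAM-L vs PAM-2 at a target BER.

    Assumes the textbook approximation BER ≈ 2(L-1)/(L*log2(L)) * Q(x), where x is
    the per-threshold Q-factor, and that optical power scales with eye amplitude.
    """
    x2 = qfunc_inv(ber)                                 # PAM-2: BER = Q(x)
    k = 2.0 * (L - 1) / (L * np.log2(L))                # error multiplicity factor
    xL = qfunc_inv(ber / k)
    return 10.0 * np.log10((L - 1) * xL / x2)

for ber in (1e-12, 1e-3):
    for L in (4, 8):
        conventional = 10.0 * np.log10(L - 1)
        actual = pam_penalty_db(L, ber)
        print(f"PAM-{L} @ BER {ber:.0e}: conventional {conventional:.2f} dB, "
              f"BER-dependent {actual:.2f} dB, difference {conventional - actual:.2f} dB")
```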

  1. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    PubMed

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  2. POWER-ENHANCED MULTIPLE DECISION FUNCTIONS CONTROLLING FAMILY-WISE ERROR AND FALSE DISCOVERY RATES

    PubMed Central

    Peña, Edsel A.; Habiger, Joshua D.; Wu, Wensong

    2014-01-01

    Improved procedures, in terms of smaller missed discovery rates (MDR), for performing multiple hypotheses testing with weak and strong control of the family-wise error rate (FWER) or the false discovery rate (FDR) are developed and studied. The improvement over existing procedures such as the Šidák procedure for FWER control and the Benjamini–Hochberg (BH) procedure for FDR control is achieved by exploiting possible differences in the powers of the individual tests. Results signal the need to take into account the powers of the individual tests and to have multiple hypotheses decision functions which are not limited to simply using the individual p-values, as is the case, for example, with the Šidák, Bonferroni, or BH procedures. They also enhance understanding of the role of the powers of individual tests, or more precisely the receiver operating characteristic (ROC) functions of decision processes, in the search for better multiple hypotheses testing procedures. A decision-theoretic framework is utilized, and through auxiliary randomizers the procedures could be used with discrete or mixed-type data or with rank-based nonparametric tests. This is in contrast to existing p-value based procedures whose theoretical validity is contingent on each of these p-value statistics being stochastically equal to or greater than a standard uniform variable under the null hypothesis. Proposed procedures are relevant in the analysis of high-dimensional “large M, small n” data sets arising in the natural, physical, medical, economic and social sciences, whose generation and creation is accelerated by advances in high-throughput technology, notably, but not limited to, microarray technology. PMID:25018568
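
    For context, the two baseline procedures named above can each be applied to a vector of p-values in a few lines. The sketch below implements the standard Šidák (FWER) and Benjamini-Hochberg (FDR) baselines on hypothetical p-values; it does not implement the power-enhanced decision functions proposed in the paper.

```python
import numpy as np

def sidak(pvalues, alpha=0.05):
    """Šidák FWER control: reject H_i if p_i <= 1 - (1 - alpha)^(1/M)."""
    m = len(pvalues)
    threshold = 1.0 - (1.0 - alpha) ** (1.0 / m)
    return np.asarray(pvalues) <= threshold

def benjamini_hochberg(pvalues, alpha=0.05):
    """BH step-up FDR control: reject the k smallest p-values, where
    k = max{ i : p_(i) <= i * alpha / M }."""
    p = np.asarray(pvalues)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])        # largest index meeting the criterion
        reject[order[: k + 1]] = True
    return reject

# Example: 90 true nulls (uniform p-values) and 10 signals with small p-values
rng = np.random.default_rng(7)
pvals = np.concatenate([rng.uniform(size=90), rng.uniform(0, 1e-3, size=10)])
print("Šidák rejections:", sidak(pvals).sum())
print("BH rejections:   ", benjamini_hochberg(pvals).sum())
```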

  3. Error in estimation of rate and time inferred from the early amniote fossil record and avian molecular clocks.

    PubMed

    van Tuinen, Marcel; Hadly, Elizabeth A

    2004-08-01

    The best reconstructions of the history of life will use both molecular time estimates and fossil data. Errors in molecular rate estimation typically are unaccounted for and no attempts have been made to quantify this uncertainty comprehensively. Here, focus is primarily on fossil calibration error because this error is least well understood and nearly universally disregarded. Our quantification of errors in the synapsid-diapsid calibration illustrates that although some error can derive from geological dating of sedimentary rocks, the absence of good stem fossils makes phylogenetic error the most critical. We therefore propose the use of calibration ages that are based on the first undisputed synapsid and diapsid. This approach yields minimum age estimates and standard errors of 306.1 +/- 8.5 MYR for the divergence leading to birds and mammals. Because this upper bound overlaps with the recent use of 310 MYR, we do not support the notion that several metazoan divergence times are significantly overestimated because of serious miscalibration (sensu Lee 1999). However, the propagation of relevant errors reduces the statistical significance of the pre-K-T boundary diversification of many bird lineages despite retaining similar point time estimates. Our results demand renewed investigation into suitable loci and fossil calibrations for constructing evolutionary timescales.

  4. Bit error rate analysis of free-space optical system with spatial diversity over strong atmospheric turbulence channel with pointing errors

    NASA Astrophysics Data System (ADS)

    Krishnan, Prabu; Sriram Kumar, D.

    2014-12-01

    Free-space optical (FSO) communication is emerging as an attractive alternative for overcoming connectivity problems. It can be used to transmit signals over common land and properties that the sender or receiver may not own. The performance of an FSO system depends on random environmental conditions. The bit error rate (BER) performance of a differential phase shift keying FSO system is investigated. A distributed strong atmospheric turbulence channel with pointing errors is considered for the BER analysis. System models are developed for single-input, single-output (SISO-FSO) and single-input, multiple-output (SIMO-FSO) systems. Closed-form mathematical expressions are derived for the average BER with various combining schemes in terms of the Meijer G-function.

  5. Scintillation index and bit error rate of hollow Gaussian beams in atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Qiao, Na; Zhang, Bin; Pan, Pingping; Dan, Youquan

    2011-06-01

    Based on the Huygens-Fresnel principle and the Rytov method, the on-axis scintillation index is derived for hollow Gaussian beams (HGBs) in weak turbulence. The relationship between the bit error rate (BER) and the scintillation index is established by considering only the effect of atmospheric turbulence, based on the probability distribution of the intensity fluctuation, and an expression for the BER is obtained. Furthermore, the scintillation and BER properties of HGBs in turbulence are discussed in detail. The results show that the scintillation index and BER of HGBs depend on the propagation length, the structure constant of the refractive index fluctuations of turbulence, the wavelength, the beam order, and the waist width of the fundamental Gaussian beam. The scintillation index increases with propagation length in turbulence, and it increases more slowly for HGBs of higher beam order. The BER of the HGBs increases rapidly with propagation length in turbulence. For the same propagation distance, the BER of the fundamental Gaussian beam is the greatest, and that of higher-order HGBs is smaller.

  6. Bit error rate tester using fast parallel generation of linear recurring sequences

    DOEpatents

    Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.

    2003-05-06

    A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k)th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit composed of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with a minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high-speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
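
    The algebra behind the decimation matrix can be sketched briefly: advancing a linear recurring sequence by d steps is the same as multiplying its state vector by the d-th power of the companion matrix over GF(2). The small Python example below illustrates only this principle with an arbitrary 4-bit polynomial; it is not the patented circuit or its feedback-network optimization.

      # Decimation principle: d serial LFSR steps == one multiply by C^d (mod 2).
      # The companion matrix C is built for the primitive polynomial x^4 + x + 1;
      # the polynomial and jump distance are arbitrary illustrative choices.
      import numpy as np

      C = np.zeros((4, 4), dtype=int)
      C[1:, :-1] = np.eye(3, dtype=int)   # sub-diagonal: shift the state
      C[:, -1] = [1, 1, 0, 0]             # feedback column encodes x^4 = x + 1

      state = np.array([1, 0, 0, 1])
      d = 6

      serial = state.copy()
      for _ in range(d):                  # advance one step at a time
          serial = (C @ serial) % 2

      decimation = np.linalg.matrix_power(C, d) % 2
      parallel = (decimation @ state) % 2 # jump d steps in a single multiply
      assert np.array_equal(serial, parallel)
      print("state after", d, "steps:", parallel)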

  7. Rate of Medical Errors in Affiliated Hospitals of Mazandaran University of Medical Sciences

    PubMed Central

    Saravi, Benyamin Mohseni; Mardanshahi, Alireza; Ranjbar, Mansour; Siamian, Hasan; Azar, Masoud Shayeste; Asghari, Zolikah; Motamed, Nima

    2015-01-01

    Introduction: Health care organizations are highly specialized and complex, so adverse events can be expected to occur. Building a medical error reporting system to analyze reported preventable adverse events and learn from their results can help prevent the repetition of these events. The medical errors reported to the Clinical Governance office of Mazandaran University of Medical Sciences (MazUMS) in 2011-2012 were analyzed. Methods and Materials: This was a descriptive retrospective study in which 18 public hospitals participated. The instrument of data collection was a checklist designed by the Ministry of Health of Iran. Variables were type of hospital, hospital unit, season, severity of event, and type of error. The data were analyzed with SPSS software. Results: Of 317,966 admissions, 182 cases of medical error (about 0.06%) were reported, most of which (51.6%) came from non-teaching hospitals. Among the various hospital units, the highest frequency of medical error was in the surgical unit (42.3%). The frequency of medical error by type was also evaluated; the most common types were inappropriate care or failure to provide care (37% in total) and medication errors (28%). We also analyzed the data with respect to the effect of the error on the patient; the most frequent outcome was a minor effect (44.5%). Conclusion: The results showed a wide variety of errors. Encouraging reporting and revising the reporting process would provide more data for preventing such errors. PMID:25870528

  8. Exact error rate analysis of free-space optical communications with spatial diversity over Gamma-Gamma atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Ma, Jing; Li, Kangning; Tan, Liying; Yu, Siyuan; Cao, Yubin

    2016-02-01

    The error rate performance and outage probability of free-space optical (FSO) communications with spatial diversity are studied for Gamma-Gamma turbulent environments. Equal gain combining (EGC) and selection combining (SC) diversity are considered as practical schemes to mitigate turbulence. The exact bit-error rate (BER) expression and outage probability are derived for a direct-detection EGC multiple-aperture receiver system. BER performance and outage probability are analyzed and compared for different numbers of sub-apertures, each having aperture area A, with EGC and SC techniques. The BER performance and outage probability of a single monolithic aperture and of a multiple-aperture receiver system with the same total aperture area are compared under thermal-noise-limited and background-noise-limited conditions. It is shown that a multiple-aperture receiver system can greatly improve communication performance, and these analytical tools are useful for providing highly accurate error rate estimates for FSO communication systems.

  9. Effect of audio bandwidth and bit error rate on PCM, ADPCM and LPC speech coding algorithm intelligibility

    NASA Astrophysics Data System (ADS)

    McKinley, Richard L.; Moore, Thomas J.

    1987-02-01

    The effects of audio bandwidth and bit error rate on the speech intelligibility of voice coders in noise are described and quantified. Three different speech coding techniques were investigated: pulse code modulation (PCM), adaptive differential pulse code modulation (ADPCM), and linear predictive coding (LPC). Speech intelligibility was measured in realistic acoustic noise environments by a panel of 10 subjects performing the Modified Rhyme Test. Summary data are presented along with planned future research on optimizing the audio bandwidth versus bit error rate tradeoff for best speech intelligibility.

  10. Estimation of genotyping error rate from repeat genotyping, unintentional recaptures and known parent-offspring comparisons in 16 microsatellite loci for brown rockfish (Sebastes auriculatus).

    PubMed

    Hess, Maureen A; Rhydderch, James G; LeClair, Larry L; Buckley, Raymond M; Kawase, Mitsuhiro; Hauser, Lorenz

    2012-11-01

    Genotyping errors are present in almost all genetic data and can affect biological conclusions of a study, particularly for studies based on individual identification and parentage. Many statistical approaches can incorporate genotyping errors, but usually need accurate estimates of error rates. Here, we used a new microsatellite data set developed for brown rockfish (Sebastes auriculatus) to estimate genotyping error using three approaches: (i) repeat genotyping 5% of samples, (ii) comparing unintentionally recaptured individuals and (iii) Mendelian inheritance error checking for known parent-offspring pairs. In each data set, we quantified genotyping error rate per allele due to allele drop-out and false alleles. Genotyping error rate per locus revealed an average overall genotyping error rate by direct count of 0.3%, 1.5% and 1.7% (0.002, 0.007 and 0.008 per allele error rate) from replicate genotypes, known parent-offspring pairs and unintentionally recaptured individuals, respectively. By direct-count error estimates, the recapture and known parent-offspring data sets revealed an error rate four times greater than estimated using repeat genotypes. There was no evidence of correlation between error rates and locus variability for all three data sets, and errors appeared to occur randomly over loci in the repeat genotypes, but not in recaptures and parent-offspring comparisons. Furthermore, there was no correlation in locus-specific error rates between any two of the three data sets. Our data suggest that repeat genotyping may underestimate true error rates and may not estimate locus-specific error rates accurately. We therefore suggest using methods for error estimation that correspond to the overall aim of the study (e.g. known parent-offspring comparisons in parentage studies).

  11. Step angles to reduce the north-finding error caused by rate random walk with fiber optic gyroscope.

    PubMed

    Wang, Qin; Xie, Jun; Yang, Chuanchuan; He, Changhong; Wang, Xinyue; Wang, Ziyu

    2015-10-20

    We study the relationship between step angles and the accuracy of north finding with fiber optic gyroscopes. A north-finding method with optimized step angles is proposed to reduce the errors caused by rate random walk (RRW). Based on this method, the errors caused by both angle random walk and RRW are reduced by increasing the number of positions. When the number of positions is even, we propose a north-finding method with symmetric step angles that can reduce the error caused by RRW and is not affected by the azimuth angle. Experimental results show that, compared with the traditional north-finding method, the proposed methods with optimized step angles and symmetric step angles can reduce the north-finding errors by 67.5% and 62.5%, respectively. The method with symmetric step angles is not affected by the azimuth angle and can offer consistently high accuracy for any azimuth angle.

  12. Controlling Type I Error Rate in Evaluating Differential Item Functioning for Four DIF Methods: Use of Three Procedures for Adjustment of Multiple Item Testing

    ERIC Educational Resources Information Center

    Kim, Jihye

    2010-01-01

    In DIF studies, a Type I error refers to the mistake of identifying non-DIF items as DIF items, and the Type I error rate refers to the proportion of Type I errors in a simulation study. The possibility of making a Type I error in DIF studies is always present, and a high probability of making such an error can weaken the validity of the assessment.…

  13. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    PubMed

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  14. GaAlAs laser temperature effects on the BER performance of a gigabit PCM fiber system. [Bit Error Rate

    NASA Technical Reports Server (NTRS)

    Eng, S. T.; Bergman, L. A.

    1982-01-01

    The performance of a gigabit pulse-code modulation fiber system has been investigated as a function of laser temperature. The bit error rate shows an improvement for temperature in the range of -15 C to -35 C. A tradeoff seems possible between relaxation oscillation, rise time, and signal-to-noise ratio.

  15. Comparison of Self-Scoring Error Rate for SDS (Self Directed Search) (1970) and the Revised SDS (1977).

    ERIC Educational Resources Information Center

    Price, Gary E.; And Others

    A comparison of the self-scoring error rate for the Self Directed Search (SDS) and the revised SDS is presented. The subjects were college freshmen and sophomores who participated in career planning as part of their orientation program and a career workshop. Subjects (N=190 in the first study and N=84 in the second) were then randomly assigned to the SDS…

  17. General closed-form bit-error rate expressions for coded M-distributed atmospheric optical communications.

    PubMed

    Balsells, José M Garrido; López-González, Francisco J; Jurado-Navas, Antonio; Castillo-Vázquez, Miguel; Notario, Antonio Puerta

    2015-07-01

    In this Letter, general closed-form expressions for the average bit error rate in atmospheric optical links employing rate-adaptive channel coding are derived. To characterize the irradiance fluctuations caused by atmospheric turbulence, the Málaga or M distribution is employed. The proposed expressions allow us to evaluate the performance of atmospheric optical links employing channel coding schemes such as OOK-GSc, OOK-GScc, HHH(1,13), or vw-MPPM with different coding rates and under all regimes of turbulence strength. A hyper-exponential fitting technique applied to the conditional bit error rate is used in all cases. The proposed closed-form expressions are validated by Monte-Carlo simulations. PMID:26125336

  18. Errors in the estimation of arterial wall shear rates that result from curve fitting of velocity profiles.

    PubMed

    Lou, Z; Yang, W J; Stein, P D

    1993-01-01

    An analysis was performed to determine the error that results from the estimation of the wall shear rates based on linear and quadratic curve-fittings of the measured velocity profiles. For steady, fully developed flow in a straight vessel, the error for the linear method is linearly related to the distance between the probe and the wall, dr1, and the error for the quadratic method is zero. With pulsatile flow, especially a physiological pulsatile flow in a large artery, the thickness of the velocity boundary layer, delta is small, and the error in the estimation of wall shear based on curve fitting is much higher than that with steady flow. In addition, there is a phase lag between the actual shear rate and the measured one. In oscillatory flow, the error increases with the distance ratio dr1/delta and, for a quadratic method, also with the distance ratio dr2/dr1, where dr2 is the distance of the second probe from the wall. The quadratic method has a distinct advantage in accuracy over the linear method when dr1/delta < 1, i.e. when the first velocity point is well within the boundary layer. The use of this analysis in arterial flow involves many simplifications, including Newtonian fluid, rigid walls, and the linear summation of the harmonic components, and can provide more qualitative than quantitative guidance. PMID:8478343
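
    The steady-flow case described above can be reproduced with a short numerical check. The Python sketch below evaluates a Poiseuille profile, estimates the wall shear rate with a one-point linear fit and with a quadratic fit through the wall, and shows the linear error growing in proportion to dr1 while the quadratic fit is exact. Vessel radius, peak velocity, and probe spacings are arbitrary illustrative values, and the pulsatile case is not reproduced.

      # Wall shear rate from near-wall velocity samples: linear vs. quadratic fit,
      # for steady, fully developed (Poiseuille) flow in a straight vessel.
      import numpy as np

      R, u_max = 0.004, 0.3                       # vessel radius (m), peak velocity (m/s)

      def u(y):                                   # velocity vs. distance y from the wall
          return u_max * (1.0 - ((R - y) / R) ** 2)

      true_shear = 2.0 * u_max / R                # analytic du/dy at the wall

      for dr1 in (0.1e-3, 0.2e-3, 0.4e-3):
          dr2 = 2.0 * dr1
          linear = u(dr1) / dr1                   # one-point estimate through the wall
          coeffs = np.polyfit([0.0, dr1, dr2], [0.0, u(dr1), u(dr2)], deg=2)
          quadratic = coeffs[1]                   # first-derivative term at y = 0
          print(f"dr1={dr1 * 1e3:.1f} mm  "
                f"linear error={100 * (linear - true_shear) / true_shear:+.2f}%  "
                f"quadratic error={100 * (quadratic - true_shear) / true_shear:+.2f}%")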

  19. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.

  20. Greater Heart Rate Responses to Acute Stress Are Associated with Better Post-Error Adjustment in Special Police Cadets.

    PubMed

    Yao, Zhuxi; Yuan, Yi; Buchanan, Tony W; Zhang, Kan; Zhang, Liang; Wu, Jianhui

    2016-01-01

    High-stress jobs require both appropriate physiological regulation and behavioral adjustment to meet the demands of emergencies. Here, we investigated the relationship between the autonomic stress response and behavioral adjustment after errors in special police cadets. Sixty-eight healthy male special police cadets were randomly assigned to perform a first-time walk on an aerial rope bridge to induce stress responses or a walk on a cushion on the ground serving as a control condition. Subsequently, the participants completed a Go/No-go task to assess behavioral adjustment after false alarm responses. Heart rate measurements and subjective reports confirmed that stress responses were successfully elicited by the aerial rope bridge task in the stress group. In addition, greater heart rate increases during the rope bridge task were positively correlated with post-error slowing and had a trend of negative correlation with post-error miss rate increase in the subsequent Go/No-go task. These results suggested that stronger autonomic stress responses are related to better post-error adjustment under acute stress in this highly selected population and demonstrate that, under certain conditions, individuals with high-stress jobs might show cognitive benefits from a stronger physiological stress response. PMID:27428280

  1. Greater Heart Rate Responses to Acute Stress Are Associated with Better Post-Error Adjustment in Special Police Cadets

    PubMed Central

    Yao, Zhuxi; Yuan, Yi; Buchanan, Tony W.; Zhang, Kan; Zhang, Liang; Wu, Jianhui

    2016-01-01

    High-stress jobs require both appropriate physiological regulation and behavioral adjustment to meet the demands of emergencies. Here, we investigated the relationship between the autonomic stress response and behavioral adjustment after errors in special police cadets. Sixty-eight healthy male special police cadets were randomly assigned to perform a first-time walk on an aerial rope bridge to induce stress responses or a walk on a cushion on the ground serving as a control condition. Subsequently, the participants completed a Go/No-go task to assess behavioral adjustment after false alarm responses. Heart rate measurements and subjective reports confirmed that stress responses were successfully elicited by the aerial rope bridge task in the stress group. In addition, greater heart rate increases during the rope bridge task were positively correlated with post-error slowing and had a trend of negative correlation with post-error miss rate increase in the subsequent Go/No-go task. These results suggested that stronger autonomic stress responses are related to better post-error adjustment under acute stress in this highly selected population and demonstrate that, under certain conditions, individuals with high-stress jobs might show cognitive benefits from a stronger physiological stress response. PMID:27428280

  2. Rater Stringency Error in Performance Rating: A Contrast of Three Models.

    ERIC Educational Resources Information Center

    Cason, Gerald J.; Cason, Carolyn L.

    The use of three remedies for errors in the measurement of ability that arise from differences in rater stringency is discussed. Models contrasted are: (1) Conventional; (2) Handicap; and (3) deterministic Rater Response Theory (RRT). General model requirements, power, bias of measures, computing cost, and complexity are contrasted. Contrasts are…

  3. A Comparison of Type I Error Rates of Alpha-Max with Established Multiple Comparison Procedures.

    ERIC Educational Resources Information Center

    Barnette, J. Jackson; McLean, James E.

    J. Barnette and J. McLean (1996) proposed a method of controlling Type I error in pairwise multiple comparisons after a significant omnibus F test. This procedure, called Alpha-Max, is based on a sequential cumulative probability accounting procedure in line with Bonferroni inequality. A missing element in the discussion of Alpha-Max was the…

  4. Dual-mass vibratory rate gyroscope with suppressed translational acceleration response and quadrature-error correction capability

    NASA Technical Reports Server (NTRS)

    Clark, William A. (Inventor); Juneau, Thor N. (Inventor); Lemkin, Mark A. (Inventor); Roessig, Allen W. (Inventor)

    2001-01-01

    A microfabricated vibratory rate gyroscope to measure rotation includes two proof-masses mounted in a suspension system anchored to a substrate. The suspension has two principal modes of compliance, one of which is driven into oscillation. The driven oscillation combined with rotation of the substrate about an axis perpendicular to the substrate results in Coriolis acceleration along the other mode of compliance, the sense-mode. The sense-mode is designed to respond to Coriolis acceleration while suppressing the response to translational acceleration. This is accomplished using one or more rigid levers connecting the two proof-masses. The lever allows the proof-masses to move in opposite directions in response to Coriolis acceleration. The invention includes a means for canceling errors, termed quadrature error, due to imperfections in implementation of the sensor. Quadrature-error cancellation utilizes electrostatic forces to cancel out undesired sense-axis motion in phase with drive-mode position.

  5. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  6. Effect of Vertical Rate Error on Recovery from Loss of Well Clear Between UAS and Non-Cooperative Intruders

    NASA Technical Reports Server (NTRS)

    Cone, Andrew; Thipphavong, David; Lee, Seung Man; Santiago, Confesor

    2016-01-01

    When an Unmanned Aircraft System (UAS) encounters an intruder and is unable to maintain required temporal and spatial separation between the two vehicles, it is referred to as a loss of well-clear. In this state, the UAS must make its best attempt to regain separation while maximizing the minimum separation between itself and the intruder. When encountering a non-cooperative intruder (an aircraft operating under visual flight rules without ADS-B or an active transponder) the UAS must rely on the radar system to provide the intruder's location, velocity, and heading information. As many UAS have limited climb and descent performance, vertical position and/or vertical rate errors make it difficult to determine whether an intruder will pass above or below them. To account for that, there is a proposal by RTCA Special Committee 228 to prohibit guidance systems from providing vertical guidance to regain well-clear to UAS in an encounter with a non-cooperative intruder unless their radar system has vertical position error below 175 feet (95%) and vertical velocity errors below 200 fpm (95%). Two sets of fast-time parametric studies were conducted, each with 54,000 pairwise encounters between a UAS and a non-cooperative intruder, to determine the suitability of offering vertical guidance to regain well-clear to a UAS in the presence of radar sensor noise. The UAS was not allowed to maneuver until it received well-clear recovery guidance. The maximum severity of the loss of well-clear was logged and used as the primary indicator of the separation achieved by the UAS. One set of 54,000 encounters allowed the UAS to maneuver either vertically or horizontally, while the second permitted only horizontal maneuvers. Comparing the two data sets allowed researchers to see the effect of allowing vertical guidance to a UAS for a particular encounter and vertical rate error. Study results show there is a small reduction in the average severity of a loss of well-clear when vertical maneuvers

  7. Bit-error-rate testing of high-power 30-GHz traveling-wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.

    1987-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30-GHz 200-W coupled-cavity traveling-wave tubes (TWTs). The transmission effects of each TWT on a band-limited 220-Mbit/s SMSK signal were investigated. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20-GHz technology development program. This paper describes the approach taken to test the 30-GHz tubes and discusses the test data. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  8. Bit-error-rate testing of high-power 30-GHz traveling wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.; Fujikawa, Gene

    1986-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30 GHz, 200 W, coupled-cavity traveling wave tubes (TWTs). The transmission effects of each TWT were investigated on a band-limited, 220 Mb/sec SMSK signal. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20 GHz technology development program. The approach taken to test the 30 GHz tubes is described and the resultant test data are discussed. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  9. Indirect measurement of a laser communications bit-error-rate reduction with low-order adaptive optics.

    PubMed

    Tyson, Robert K; Canning, Douglas E

    2003-07-20

    In experimental measurements of the bit-error rate for a laser communication system, we show improved performance with the implementation of low-order (tip/tilt) adaptive optics in a free-space link. With simulated atmospheric tilt injected by a conventional piezoelectric tilt mirror, an adaptive optics system with a Xinetics tilt mirror was used in a closed loop. The laboratory experiment replicated a monostatic propagation with a cooperative wave front beacon at the receiver. Owing to constraints in the speed of the processing hardware, the data is scaled to represent an actual propagation of a few kilometers under moderate scintillation conditions. We compare the experimental data and indirect measurement of the bit-error rate before correction and after correction, with a theoretical prediction.

  10. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    NASA Technical Reports Server (NTRS)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

    A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
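
    A minimal sketch of the final step described above, assuming the decision variable is Gaussian so that the error probability follows from the mean-to-standard-deviation ratio via the Q-function (the record's full mapping from measured S-parameters is not reproduced):

      # BER from a Gaussian noise model: BER = Q(mu/sigma) = 0.5*erfc(mu/(sigma*sqrt(2))).
      import math

      def ber_from_gaussian(mean_eye_opening, noise_std):
          return 0.5 * math.erfc(mean_eye_opening / (noise_std * math.sqrt(2.0)))

      # An eye opening of about 7 standard deviations corresponds to roughly 1e-12.
      print(ber_from_gaussian(7.0, 1.0))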

  11. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Technical Reports Server (NTRS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  12. The Effect of Administrative Boundaries and Geocoding Error on Cancer Rates in California

    PubMed Central

    Goldberg, Daniel W.; Cockburn, Myles G.

    2012-01-01

    Geocoding is often used to produce maps of disease rates from the diagnosis addresses of incident cases to assist with disease surveillance, prevention, and control. In this process, diagnosis addresses are converted into latitude/longitude pairs which are then aggregated to produce rates at varying geographic scales such as Census tracts, neighborhoods, cities, counties, and states. The specific techniques used within geocoding systems have an impact on where the output geocode is located and can therefore have an effect on the derivation of disease rates at different geographic aggregations. This paper investigates how county-level cancer rates are affected by the choice of interpolation method when case data are geocoded to the ZIP code level. Four commonly used areal unit interpolation techniques are applied and the output of each is used to compute crude county-level five-year incidence rates of all cancers in California. We found that the rates observed for 44 out of the 58 counties in California vary based on which interpolation method is used, with rates in some counties increasing by nearly 400% between interpolation methods. PMID:22469490

  13. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    PubMed

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

    Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27264206
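
    The core difficulty the record addresses, binomial and heteroscedastic error in a predictor that is a proportion, is easy to demonstrate by simulation. The Python sketch below shows the attenuation of a naive regression slope when the true proportion is replaced by its binomial estimate; all values are invented for illustration, and the record's corrections (simulation extrapolation and beta-binomial regression calibration) are not implemented here.

      # Attenuation of a regression slope when the predictor proportion is
      # observed only through a binomial count with varying sampling effort.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 2000
      true_p = rng.uniform(0.1, 0.9, n)                 # true proportions (e.g., methylation rates)
      depth = rng.integers(5, 30, n)                    # varying number of binomial trials
      observed_p = rng.binomial(depth, true_p) / depth  # error-prone predictor
      y = 2.0 + 5.0 * true_p + rng.normal(0.0, 1.0, n)  # outcome; true slope is 5

      slope_true = np.polyfit(true_p, y, 1)[0]
      slope_naive = np.polyfit(observed_p, y, 1)[0]
      print(f"slope with true p: {slope_true:.2f}, naive slope with observed p: {slope_naive:.2f}")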

  14. Assessing XCTD Fall Rate Errors using Concurrent XCTD and CTD Profiles in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Millar, J.; Gille, S. T.; Sprintall, J.; Frants, M.

    2010-12-01

    Refinements in the fall rate equation for XCTDs are not as well understood as those for XBTs, due in part to the paucity of concurrent and collocated XCTD and CTD profiles. During February and March 2010, the Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES) conducted 31 collocated 1000-meter XCTD and CTD casts in the Drake Passage. These XCTD/CTD profile pairs are closely matched in space and time, with a mean distance between casts of 1.19 km and a mean lag time of 39 minutes. The profile pairs are well suited to address the XCTD fall rate problem specifically in higher latitude waters, where existing fall rate corrections have rarely been assessed. Many of these XCTD/CTD profile pairs reveal an observable depth offset in measurements of both temperature and conductivity. Here, the nature and extent of this depth offset is evaluated.

  15. An approach for reducing the error rate in automated lung segmentation.

    PubMed

    Gill, Gurman; Beichel, Reinhard R

    2016-09-01

    Robust lung segmentation is challenging, especially when tens of thousands of lung CT scans need to be processed, as required by large multi-center studies. The goal of this work was to develop and assess a method for the fusion of segmentation results from two different methods to generate lung segmentations that have a lower failure rate than individual input segmentations. As basis for the fusion approach, lung segmentations generated with a region growing and model-based approach were utilized. The fusion result was generated by comparing input segmentations and selectively combining them using a trained classification system. The method was evaluated on a diverse set of 204 CT scans of normal and diseased lungs. The fusion approach resulted in a Dice coefficient of 0.9855±0.0106 and showed a statistically significant improvement compared to both input segmentation methods. In addition, the failure rate at different segmentation accuracy levels was assessed. For example, when requiring that lung segmentations must have a Dice coefficient of better than 0.97, the fusion approach had a failure rate of 6.13%. In contrast, the failure rate for region growing and model-based methods was 18.14% and 15.69%, respectively. Therefore, the proposed method improves the quality of the lung segmentations, which is important for subsequent quantitative analysis of lungs. Also, to enable a comparison with other methods, results on the LOLA11 challenge test set are reported. PMID:27447897
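
    For reference, the Dice coefficient used as the accuracy measure in this record is straightforward to compute; the short sketch below uses toy masks purely for illustration.

      # Dice coefficient: 2*|A intersect B| / (|A| + |B|) for boolean masks.
      import numpy as np

      def dice(a, b):
          a, b = np.asarray(a, bool), np.asarray(b, bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      seg = np.zeros((10, 10), bool); seg[2:8, 2:8] = True
      ref = np.zeros((10, 10), bool); ref[3:8, 2:8] = True
      print(round(dice(seg, ref), 4))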

  16. Carbon and sediment accumulation in the Everglades (USA) during the past 4000 years: rates, drivers, and sources of error

    USGS Publications Warehouse

    Glaser, Paul H.; Volin, John C.; Givnish, Thomas J.; Hansen, Barbara C. S.; Stricker, Craig A.

    2012-01-01

    Tropical and sub-tropical wetlands are considered to be globally important sources for greenhouse gases but their capacity to store carbon is presumably limited by warm soil temperatures and high rates of decomposition. Unfortunately, these assumptions can be difficult to test across long timescales because the chronology, cumulative mass, and completeness of a sedimentary profile are often difficult to establish. We therefore made a detailed analysis of a core from the principal drainage outlet of the Everglades of South Florida, to assess these problems and determine the factors that could govern carbon accumulation in this large sub-tropical wetland. Accelerator mass spectrometry dating provided direct evidence for both hard-water and open-system sources of dating errors, whereas cumulative mass varied depending upon the type of method used. Radiocarbon dates of gastropod shells, nevertheless, seemed to provide a reliable chronology for this core once the hard-water error was quantified and subtracted. Long-term accumulation rates were then calculated to be 12.1 g m-2 yr-1 for carbon, which is less than half the average rate reported for northern and tropical peatlands. Moreover, accumulation rates remained slow and relatively steady for both organic and inorganic strata, and the slow rate of sediment accretion (about 0.2 mm yr-1) tracked the correspondingly slow rise in sea level (0.35 mm yr-1) reported for South Florida over the past 4000 years. These results suggest that sea level and the local geologic setting may impose long-term constraints on rates of sediment and carbon accumulation in the Everglades and other wetlands.

  17. Evaluation of write error rate for voltage-driven dynamic magnetization switching in magnetic tunnel junctions with perpendicular magnetization

    NASA Astrophysics Data System (ADS)

    Shiota, Yoichi; Nozaki, Takayuki; Tamaru, Shingo; Yakushiji, Kay; Kubota, Hitoshi; Fukushima, Akio; Yuasa, Shinji; Suzuki, Yoshishige

    2016-01-01

    We investigated the write error rate (WER) for voltage-driven dynamic switching in magnetic tunnel junctions with perpendicular magnetization. We observed a clear oscillatory behavior of the switching probability with respect to the duration of pulse voltage, which reveals the precessional motion of magnetization during voltage application. We experimentally demonstrated WER as low as 4 × 10-3 at the pulse duration corresponding to a half precession period (˜1 ns). The comparison between the results of the experiment and simulation based on a macrospin model shows a possibility of ultralow WER (<10-15) under optimum conditions. This study provides a guideline for developing practical voltage-driven spintronic devices.

  18. Packet error rate analysis of OOK, DPIM, and PPM modulation schemes for ground-to-satellite laser uplink communications.

    PubMed

    Jiang, Yijun; Tao, Kunyu; Song, Yiwei; Fu, Sen

    2014-03-01

    The performance of on-off keying (OOK), digital pulse interval modulation (DPIM), and pulse position modulation (PPM) schemes is investigated for ground-to-satellite laser uplink communications. Packet error rates of these modulation systems are compared, taking into account the combined effect of intensity fluctuation and beam wander. Based on the numerical results, the performance of the different modulation systems is discussed. The optimum divergence angle and transmitted beam radius of the different modulation systems are indicated, and their relation to the transmitted laser power is analyzed. This work can be helpful for modulation scheme selection and system design in ground-to-satellite laser uplink communications.
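
    As a baseline for interpreting packet error rates, the simplest relationship between packet and bit error rates assumes independent bit errors; the record's analysis layers intensity fluctuation and beam wander on top of this. A one-line illustration:

      # Packet error rate under independent bit errors: PER = 1 - (1 - BER)^N.
      def packet_error_rate(ber, bits_per_packet):
          return 1.0 - (1.0 - ber) ** bits_per_packet

      print(packet_error_rate(1e-6, 8192))   # about 0.8% for an 8192-bit packet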

  19. Influence of beam wander on bit-error rate in a ground-to-satellite laser uplink communication system.

    PubMed

    Ma, Jing; Jiang, Yijun; Tan, Liying; Yu, Siyuan; Du, Wenhe

    2008-11-15

    Based on weak fluctuation theory and the beam-wander model, the bit-error rate of a ground-to-satellite laser uplink communication system is analyzed, in comparison with the condition in which beam wander is not taken into account. Considering the combined effect of scintillation and beam wander, optimum divergence angle and transmitter beam radius for a communication system are researched. Numerical results show that both of them increase with the increment of total link margin and transmitted wavelength. This work can benefit the ground-to-satellite laser uplink communication system design.

  20. Time-resolved in vivo luminescence dosimetry for online error detection in pulsed dose-rate brachytherapy

    SciTech Connect

    Andersen, Claus E.; Nielsen, Soeren Kynde; Lindegaard, Jacob Christian; Tanderup, Kari

    2009-11-15

    Purpose: The purpose of this study is to present and evaluate a dose-verification protocol for pulsed dose-rate (PDR) brachytherapy based on in vivo time-resolved (1 s time resolution) fiber-coupled luminescence dosimetry. Methods: Five cervix cancer patients undergoing PDR brachytherapy (Varian GammaMed Plus with ¹⁹²Ir) were monitored. The treatments comprised from 10 to 50 pulses (1 pulse/h) delivered by intracavitary/interstitial applicators (tandem-ring systems and/or needles). For each patient, one or two dosimetry probes were placed directly in or close to the tumor region using stainless steel or titanium needles. Each dosimeter probe consisted of a small aluminum oxide crystal attached to an optical fiber cable (1 mm outer diameter) that could guide radioluminescence (RL) and optically stimulated luminescence (OSL) from the crystal to special readout instrumentation. Positioning uncertainty and hypothetical dose-delivery errors (interchanged guide tubes or applicator movements from ±5 to ±15 mm) were simulated in software in order to assess the ability of the system to detect errors. Results: For three of the patients, the authors found no significant differences (P>0.01) for comparisons between in vivo measurements and calculated reference values at the level of dose per dwell position, dose per applicator, or total dose per pulse. The standard deviations of the dose per pulse were less than 3%, indicating a stable dose delivery and a highly stable geometry of applicators and dosimeter probes during the treatments. For the two other patients, the authors noted significant deviations for three individual pulses and for one dosimeter probe. These deviations could have been due to applicator movement during the treatment and one incorrectly positioned dosimeter probe, respectively. Computer simulations showed that the likelihood of detecting a pair of interchanged guide tubes increased by a factor of 10 or more for the considered patients when

  1. Adjustment on the Type I Error Rate for a Clinical Trial Monitoring for both Intermediate and Primary Endpoints

    PubMed Central

    Halabi, Susan

    2013-01-01

    In many clinical trials, a single endpoint is used to answer the primary question and forms the basis for monitoring the experimental therapy. Many trials are lengthy in duration, and investigators are interested in using an intermediate endpoint for an accelerated approval but will rely on the primary endpoint (such as overall survival) for the full approval of the drug by the Food and Drug Administration. We have designed a clinical trial where both the intermediate endpoint (progression-free survival, PFS) and the primary endpoint (overall survival, OS) are used for monitoring the trial so that the overall type I error rate is preserved at the pre-specified alpha level of 0.05. A two-stage procedure is used. In the first stage, the Bonferroni correction was used, with the global type I error rate allocated between the endpoints. In the next stage, the O'Brien-Fleming approach was used to design the boundaries for the interim and final analysis of each endpoint. Data were generated assuming several parametric copulas with exponential marginals. Different degrees of dependence between OS and PFS, as measured by Kendall's τ, were assumed: 0 (independence), 0.3, 0.5, and 0.70. This approach is applied to an example in a prostate cancer trial. PMID:24466469
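
    The two-stage construction described above can be sketched numerically: split the overall alpha between the endpoints (Bonferroni), then compute O'Brien-Fleming-type boundaries for one interim and one final analysis of each endpoint. In the Python sketch below the alpha split, one-sided testing, and information fractions are illustrative assumptions, not the trial's actual design values.

      # O'Brien-Fleming-type boundaries (reject at look k if Z_k >= c/sqrt(t_k))
      # for a one-sided level alpha, using the canonical joint normal distribution
      # of group-sequential test statistics.
      import numpy as np
      from scipy.optimize import brentq
      from scipy.stats import multivariate_normal, norm

      def obf_boundaries(alpha, info_fractions=(0.5, 1.0)):
          t = np.asarray(info_fractions, float)
          corr = np.sqrt(np.minimum.outer(t, t) / np.maximum.outer(t, t))
          joint = multivariate_normal(mean=np.zeros(t.size), cov=corr)

          def excess(c):                      # P(cross any boundary | H0) - alpha
              return 1.0 - joint.cdf(c / np.sqrt(t)) - alpha

          c = brentq(excess, norm.isf(alpha), 10.0)
          return c / np.sqrt(t)

      alpha_pfs, alpha_os = 0.01, 0.04        # hypothetical Bonferroni split of 0.05
      for name, a in (("PFS", alpha_pfs), ("OS", alpha_os)):
          print(name, np.round(obf_boundaries(a), 3))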

  2. Assessment of error rates in acoustic monitoring with the R package monitoR

    USGS Publications Warehouse

    Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese

    2016-01-01

    Detecting population-scale reactions to climate change and land-use change may require monitoring many sites for many years, a process that is suited for an automated system. We developed and tested monitoR, an R package for long-term, multi-taxa acoustic monitoring programs. We tested monitoR with two northeastern songbird species: black-throated green warbler (Setophaga virens) and ovenbird (Seiurus aurocapilla). We compared detection results from monitoR in 52 10-minute surveys recorded at 10 sites in Vermont and New York, USA to a subset of songs identified by a human that were of a single song type and had visually identifiable spectrograms (e.g. a signal:noise ratio of at least 10 dB: 166 out of 439 total songs for black-throated green warbler, 502 out of 990 total songs for ovenbird). monitoR’s automated detection process uses a ‘score cutoff’, which is the minimum match needed for an unknown event to be considered a detection and results in a true positive, true negative, false positive or false negative detection. At the chosen score cut-offs, monitoR correctly identified presence for black-throated green warbler and ovenbird in 64% and 72% of the 52 surveys using binary point matching, respectively, and 73% and 72% of the 52 surveys using spectrogram cross-correlation, respectively. Of individual songs, 72% of black-throated green warbler songs and 62% of ovenbird songs were identified by binary point matching. Spectrogram cross-correlation identified 83% of black-throated green warbler songs and 66% of ovenbird songs. False positive rates were  for song event detection.

  3. Direct impact analysis of multi-leaf collimator leaf position errors on dose distributions in volumetric modulated arc therapy: a pass rate calculation between measured planar doses with and without the position errors

    NASA Astrophysics Data System (ADS)

    Tatsumi, D.; Hosono, M. N.; Nakada, R.; Ishii, K.; Tsutsumi, S.; Inoue, M.; Ichida, T.; Miki, Y.

    2011-10-01

    We propose a new method for analyzing the direct impact of multi-leaf collimator (MLC) leaf position errors on dose distributions in volumetric modulated arc therapy (VMAT). The technique makes use of the following processes. Systematic leaf position errors are generated by directly changing a leaf offset in a linac controller; dose distributions are measured by a two-dimensional diode array; pass rates of the dose difference between measured planar doses with and without the position errors are calculated as a function of the leaf position error. Three different treatment planning systems (TPSs) were employed to create VMAT plans for five prostate cancer cases and the pass rates were compared between the TPSs under various leaf position errors. The impact of the leaf position errors on dose distributions depended upon the final optimization result from each TPS, which was explained by the correlation between the dose error and the average leaf gap width. The presented method determines leaf position tolerances for VMAT delivery for each TPS, which may facilitate establishing a VMAT quality assurance program in a radiotherapy facility. This work was presented in part at the 52nd Annual Meeting of the American Society for Therapeutic Radiology and Oncology in San Diego on 1 November 2010.

  4. Bit-Error-Rate Evaluation of Super-Resolution Near-Field Structure Read-Only Memory Discs with Semiconductive Material InSb

    NASA Astrophysics Data System (ADS)

    Nakai, Kenya; Ohmaki, Masayuki; Takeshita, Nobuo; Hyot, Bérangère; André, Bernard; Poupinet, Ludovic

    2010-08-01

    Bit-error-rate (bER) evaluation using a hardware (H/W) evaluation system is described for super-resolution near-field structure (super-RENS) read-only-memory (ROM) discs fabricated with a semiconductor material, InSb, as the super-resolution active layer. A bER on the order of 10-5, below a criterion of 3.0×10-4, is obtained with super-RENS ROM discs carrying random-pattern data including a minimum pit length of 80 nm, using partial-response maximum-likelihood detection of the (1,2,2,1) type. The disc tilt, focus offset, and read power offset margins based on the bER of readout signals are measured for the super-RENS ROM discs and are almost acceptable for practical use. A significant improvement of read stability, up to 40,000 cycles, realized by introducing the ZrO2 interface layer, is confirmed using the H/W evaluation system.

  5. Social Acceptance; A Possible Mediator in the Association between Socio-Economic Deprivation and Under-18 Pregnancy Rates?

    ERIC Educational Resources Information Center

    Smith, Debbie Michelle; Roberts, Ron

    2009-01-01

    This study examines the social acceptance of young (under-18) pregnancy by assessing people's acceptance of young pregnancy and abortion in relation to deprivation. A cross-sectional survey design was conducted in two relatively affluent and two relatively deprived local authorities in London (n=570). Contrary to previous findings, participants…

  7. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors. PMID:26649954

  8. The Effect of Exposure to High Noise Levels on the Performance and Rate of Error in Manual Activities

    PubMed Central

    Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra

    2016-01-01

    Introduction: Sound is a significant environmental factor for people's health; it plays an important role in both physical and psychological injuries and also affects individuals' performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on performance and the rate of error in manual activities. Methods: This was an interventional study conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females) in which each person served as his or her own control to assess the effect of noise on performance at sound levels of 70, 90, and 110 dB, using two factors (physical features and different sound-source conditions) and the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated measures were used to compare the duration of performance as well as the errors measured in the test. Results: We found a direct and significant association between sound level and the duration of performance. Moreover, the participants' performance differed significantly between sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). Conclusion: This study found that a sound level of 110 dB had an important effect on individuals' performance, i.e., performance decreased. PMID:27123216

  9. Bit Error Rate Analysis for MC-CDMA Systems in Nakagami-m Fading Channels

    NASA Astrophysics Data System (ADS)

    Li, Zexian; Latva-aho, Matti

    2004-12-01

    Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Gaussian Q-function, characteristic function and Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite range integral and an integrand composed of tabulated functions which can be easily computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.
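
    The average BER that the paper obtains in closed form can also be approximated by brute force, which is a useful cross-check when tabulated special functions are unavailable. The sketch below is a minimal Monte Carlo version, not the authors' characteristic-function method: it assumes BPSK, independent unit-mean Nakagami-m branches with maximal ratio combining, and illustrative values of m, the branch count, and the average SNR.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)

      def avg_ber_nakagami_mrc(m=1.5, snr_db=10.0, branches=2, trials=200_000):
          """Average BER of BPSK with L-branch MRC over independent Nakagami-m fading;
          branch power gains are Gamma(m, 1/m) so each branch has unit mean gain."""
          snr = 10 ** (snr_db / 10)                         # average SNR per branch
          gains = rng.gamma(shape=m, scale=1.0 / m, size=(trials, branches))
          gamma_total = snr * gains.sum(axis=1)             # post-MRC instantaneous SNR
          # conditional BPSK error probability Q(sqrt(2*gamma)), averaged over the fading
          return norm.sf(np.sqrt(2.0 * gamma_total)).mean()

      for snr_db in (0, 5, 10):
          print(f"{snr_db} dB -> BER ~ {avg_ber_nakagami_mrc(snr_db=snr_db):.3e}")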

  10. The effect of narrow-band digital processing and bit error rate on the intelligibility of ICAO spelling alphabet words

    NASA Astrophysics Data System (ADS)

    Schmidt-Nielsen, Astrid

    1987-08-01

    The recognition of ICAO spelling alphabet words (ALFA, BRAVO, CHARLIE, etc.) is compared with diagnostic rhyme test (DRT) scores for the same conditions. The voice conditions include unprocessed speech; speech processed through the DOD standard linear-predictive-coding algorithm operating at 2400 bit/s with random error rates of 0, 2, 5, 8, and 12 percent; and speech processed through an 800-bit/s pattern-matching algorithm. The results suggest that, with distinctive vocabularies, word intelligibility can be expected to remain high even when DRT scores fall into the poor range. However, once the DRT scores fall below 75 percent, the intelligibility can be expected to fall off rapidly; at DRT scores below 50, the recognition of a distinctive vocabulary should also fall below 50 percent.

  11. The acceptance rate of young wasps by alien colonies depends on colony developmental stages in the swarm-founding wasp, Polybia paulista von ihering (Hymenoptera: Vespidae).

    PubMed

    Kudô, Kazuyuki; Hozumi, Satoshi; Mateus, Sidnei; Zucchi, Ronaldo

    2010-01-01

    In social insects, newly emerged individuals learn the colony-specific chemical label from their natal comb shortly after their emergence. These labels help to identify each individual's colony of origin and are used as a recognition template against which individuals can discriminate nestmates from non-nestmates. Our previous studies with Polybia paulista von Ihering support this general pattern, with the acceptance rate of young female and male wasps decreasing as a function of their age. They also showed that in P. paulista more than 90% of newly emerged female wasps may be accepted by conspecific unrelated colonies. However, it had not been investigated whether the acceptance rate of newly emerged female wasps depends on the developmental stage of the recipient colony. We introduced newly emerged female wasps of P. paulista into recipient colonies at different developmental stages, i.e., worker-producing and male-producing colonies. We found that the acceptance rate of newly emerged female wasps by alien colonies was considerably lower in male-producing colonies than in worker-producing colonies. This is the first study to show that the acceptance rate of young female wasps depends on the developmental stage of the recipient colony.

  12. Analysis of 454 sequencing error rate, error sources, and artifact recombination for detection of Low-frequency drug resistance mutations in HIV-1 DNA

    PubMed Central

    2013-01-01

    Background 454 sequencing technology is a promising approach for characterizing HIV-1 populations and for identifying low frequency mutations. The utility of 454 technology for determining allele frequencies and linkage associations in HIV infected individuals has not been extensively investigated. We evaluated the performance of 454 sequencing for characterizing HIV populations with defined allele frequencies. Results We constructed two HIV-1 RT clones. Clone A was a wild type sequence. Clone B was identical to clone A except it contained 13 introduced drug resistant mutations. The clones were mixed at ratios ranging from 1% to 50% and were amplified by standard PCR conditions and by PCR conditions aimed at reducing PCR-based recombination. The products were sequenced using 454 pyrosequencing. Sequence analysis from standard PCR amplification revealed that 14% of all sequencing reads from a sample with a 50:50 mixture of wild type and mutant DNA were recombinants. The majority of the recombinants were the result of a single crossover event which can happen during PCR when the DNA polymerase terminates synthesis prematurely. The incompletely extended template then competes for primer sites in subsequent rounds of PCR. Although less often, a spectrum of other distinct crossover patterns was also detected. In addition, we observed point mutation errors ranging from 0.01% to 1.0% per base as well as indel (insertion and deletion) errors ranging from 0.02% to nearly 50%. The point errors (single nucleotide substitution errors) were mainly introduced during PCR while indels were the result of pyrosequencing. We then used new PCR conditions designed to reduce PCR-based recombination. Using these new conditions, the frequency of recombination was reduced 27-fold. The new conditions had no effect on point mutation errors. We found that 454 pyrosequencing was capable of identifying minority HIV-1 mutations at frequencies down to 0.1% at some nucleotide positions. Conclusion

  13. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    PubMed

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  14. Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty

    PubMed Central

    Kellman, Philip J.; Mnookin, Jennifer L.; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E.

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  15. Error in radiology.

    PubMed

    Goddard, P; Leslie, A; Jones, A; Wakeley, C; Kabala, J

    2001-10-01

    The level of error in radiology has been tabulated from articles on error and on "double reporting" or "double reading". The level of error varies depending on the radiological investigation, but the range is 2-20% for clinically significant or major error. The greatest reduction in error rates will come from changes in systems.

  16. The Differences in Error Rate and Type between IELTS Writing Bands and Their Impact on Academic Workload

    ERIC Educational Resources Information Center

    Müller, Amanda

    2015-01-01

    This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided from the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers can make a minimum of 206 errors, 96 errors and 35 errors per 1000 words. The following section…

  17. Is there a general factor in ratings of job performance? A meta-analytic framework for disentangling substantive and error influences.

    PubMed

    Viswesvaran, Chockalingam; Schmidt, Frank L; Ones, Deniz S

    2005-01-01

    A database integrating 90 years of empirical studies reporting intercorrelations among rated job performance dimensions was used to test the hypothesis of a general factor in job performance. After controlling for halo error and 3 other sources of measurement error, there remained a general factor in job performance ratings at the construct level accounting for 60% of total variance. Construct-level correlations among rated dimensions of job performance were substantially inflated by halo for both supervisory (33%) and peer (63%) intrarater correlations. These findings have important implications for the measurement of job performance and for theories of job performance.

  18. Are acceptance rates of a national preventive home visit programme for older people socially imbalanced?: a cross sectional study in Denmark

    PubMed Central

    2012-01-01

    Background Preventive home visits are offered to community-dwelling older people in Denmark, aimed at maintaining their functional ability for as long as possible, but only two thirds of older people accept the offer from the municipalities. The purpose of this study was to investigate 1) whether socioeconomic status was associated with acceptance of preventive home visits among older people and 2) whether municipality invitational procedures for the preventive home visits modified the association. Methods The study population included 1,023 community-dwelling 80-year-old individuals from the Danish intervention study on preventive home visits. Information on preventive home visit acceptance rates was obtained from questionnaires. Socioeconomic status was measured by financial assets obtained from national registry data, and invitational procedures were identified through the municipalities. Logistic regression analyses were used, adjusted for gender. Results Older persons with high financial assets accepted preventive home visits more frequently than persons with low assets (adjusted OR = 1.5, 95% CI: 1.1-2.0). However, the association was attenuated when adjusted for the invitational procedures. The odds ratio for accepting preventive home visits was larger among persons with low financial assets invited by a letter with a proposed date than among persons with high financial assets invited by other procedures, though these estimates had wide confidence intervals. Conclusion High socioeconomic status was associated with a higher acceptance rate of preventive home visits, but the association was attenuated by invitational procedures. The results indicate that the social inequality in acceptance of publicly offered preventive services might decrease if municipalities adopt more proactive invitational procedures. PMID:22656647
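
    A gender-adjusted odds ratio like the one reported above is typically obtained from a logistic regression by exponentiating the coefficient of the exposure variable. The sketch below illustrates the mechanics on simulated data; the variable names, effect sizes, and sample size are invented and not taken from the study.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)

      # Hypothetical data: accepted = 1 if the preventive home visit was accepted
      n = 1000
      high_assets = rng.integers(0, 2, n)        # 1 = high financial assets
      female = rng.integers(0, 2, n)             # gender, used as the adjustment variable
      logit_p = -0.2 + 0.4 * high_assets + 0.1 * female
      accepted = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

      X = sm.add_constant(np.column_stack([high_assets, female]))
      fit = sm.Logit(accepted, X).fit(disp=False)

      # exp(coefficient) is the gender-adjusted odds ratio for high vs. low assets
      or_assets = np.exp(fit.params[1])
      ci_low, ci_high = np.exp(fit.conf_int()[1])
      print(f"adjusted OR = {or_assets:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")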

  19. Packet error rate analysis of digital pulse interval modulation in intersatellite optical communication systems with diversified wavefront deformation.

    PubMed

    Zhu, Jin; Wang, Dayan; Xie, Wanqing

    2015-02-20

    Diversified wavefront deformation is an inevitable phenomenon in intersatellite optical communication systems and degrades system performance. In this paper, we investigate the description of wavefront deformation and its influence on the packet error rate (PER) of digital pulse interval modulation (DPIM). With the wavelet method, diversified wavefront deformation can be described by three wavelet parameters: the coefficient factor, which represents the depth; the dilation factor, which represents the area; and the shift factor, which gives the location. On this basis, the relationship between PER and the wavelet parameters is analyzed theoretically. Numerical results confirm the analysis: PER increases with depth and area and decreases as the location moves farther from the center of the optical antenna. In addition to describing diversified deformation, the wavelet method's advantage over Zernike polynomials in computational complexity is shown via a numerical example. This work provides a feasible method for describing diversified wavefront deformation and analyzing its influence from a practical viewpoint, and it will be helpful for designing optical systems.

  20. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media.

    PubMed

    Suess, D; Fuger, M; Abert, C; Bruckner, F; Vogler, C

    2016-06-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media.

  1. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media

    PubMed Central

    Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.

    2016-01-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287

  2. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    prohibitively expensive, as it would require manufacturing numerous amplifiers, in addition to acquiring the required digital hardware. As an alternative, the time-domain TWT interaction model developed here provides the capability to establish a computational test bench where ISI or bit error rate can be simulated as a function of TWT operating parameters and component geometries. Intermodulation products, harmonic generation, and backward waves can also be monitored with the model for similar correlations. The advancements in computational capabilities and corresponding potential improvements in TWT performance may prove to be the enabling technologies for realizing unprecedented data rates for near real time transmission of the increasingly larger volumes of data demanded by planned commercial and Government satellite communications applications. This work is in support of the Cross Enterprise Technology Development Program in Headquarters' Advanced Technology & Mission Studies Division and the Air Force Office of Scientific Research Small Business Technology Transfer programs.

  3. A Case Study using Token Reward on Oral Reading Rate, Error Reduction, and Comprehension of a Reading Deficient Child.

    ERIC Educational Resources Information Center

    Ervin, Tommye A.; Fox, Paul A.

    This case study reports the use of token reinforcement in remedial reading instruction with an eleven-year-old boy from rural Appalachia. During phase one, tokens were given for reading 50-word passages without error; token value was contingent upon the number of attempts necessary to read without error. During phase two, words missed in phase one…

  4. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-01

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  5. Sensory evaluation ratings and melting characteristics show that okra gum is an acceptable milk-fat ingredient substitute in chocolate frozen dairy dessert.

    PubMed

    Romanchik-Cerpovicz, Joelle E; Costantino, Amanda C; Gunn, Laura H

    2006-04-01

    Reducing dietary fat intake may lower the risk of developing coronary heart disease. This study examined the feasibility of substituting okra gum for 25%, 50%, 75%, or 100% milk fat in frozen chocolate dairy dessert. Fifty-six consumers evaluated the frozen dairy desserts using a hedonic scale. Consumers rated color, smell, texture, flavor, aftertaste, and overall acceptability characteristics of all products as acceptable. All ratings were similar among the products except for the aftertaste rating, which was significantly lower for chocolate frozen dairy dessert containing 100% milk-fat replacement with okra gum compared with the control (0% milk-fat replacement) (P<0.05). Whereas melting points of all products were similar, melting rates slowed significantly as milk-fat replacement with okra gum increased, suggesting that okra gum may increase the stability of frozen dairy desserts (P<0.05). Overall, this study shows that okra gum is an acceptable milk-fat ingredient substitute in chocolate frozen dairy dessert.

  6. Water-balance uncertainty in Honduras: a limits-of-acceptability approach to model evaluation using a time-variant rating curve

    NASA Astrophysics Data System (ADS)

    Westerberg, I.; Guerrero, J.-L.; Beven, K.; Seibert, J.; Halldin, S.; Lundin, L.-C.; Xu, C.-Y.

    2009-04-01

    The climate of Central America is highly variable both spatially and temporally; extreme events like floods and droughts are recurrent phenomena posing great challenges to regional water-resources management. Scarce and low-quality hydro-meteorological data complicate hydrological modelling and few previous studies have addressed the water-balance in Honduras. In the alluvial Choluteca River, the river bed changes over time as fill and scour occur in the channel, leading to a fast-changing relation between stage and discharge and difficulties in deriving consistent rating curves. In this application of a four-parameter water-balance model, a limits-of-acceptability approach to model evaluation was used within the General Likelihood Uncertainty Estimation (GLUE) framework. The limits of acceptability were determined for discharge alone for each time step, and ideally a simulated result should always be contained within the limits. A moving-window weighted fuzzy regression of the ratings, based on estimated uncertainties in the rating-curve data, was used to derive the limits. This provided an objective way to determine the limits of acceptability and handle the non-stationarity of the rating curves. The model was then applied within GLUE and evaluated using the derived limits. Preliminary results show that the best simulations are within the limits 75-80% of the time, indicating that precipitation data and other uncertainties like model structure also have a significant effect on predictability.

  7. Comparison of four different mobile devices for measuring heart rate and ECG with respect to aspects of usability and acceptance by older people.

    PubMed

    Ehmen, Hilko; Haesner, Marten; Steinke, Ines; Dorn, Mario; Gövercin, Mehmet; Steinhagen-Thiessen, Elisabeth

    2012-05-01

    In the area of product design and usability, most products are developed for the mass-market by technically oriented designers and developers for use by persons who themselves are also technically adept by today's standards. The demands of older people are commonly not given sufficient consideration within the early developmental process. In the present study, the usability and acceptability of four different devices meant to be worn for the measurement of heart rate or ECG were analyzed on the basis of qualitative subjective user ratings and structured interviews of twelve older participants. The data suggest that there was a relatively high acceptance concerning these belts by older adults but none of the four harnesses was completely usable. Especially problematic to the point of limiting satisfaction among older subjects were problems encountered while adjusting the length of the belt and/or closing the locking mechanism. The two devices intended for dedicated heart rate recording yielded the highest user ratings for design, and were clearly preferred for extended wearing time. Yet for all the devices participants identified several important deficiencies in their design, as well as suggestions for improvement. We conclude that the creation of an acceptable monitoring device for older persons requires designers and developers to consider the special demands and abilities of the target group.

  8. Detecting Unit of Analysis Problems in Nested Designs: Statistical Power and Type I Error Rates of the "F" Test for Groups-within-Treatments Effects.

    ERIC Educational Resources Information Center

    Kromrey, Jeffrey D.; Dickinson, Wendy B.

    1996-01-01

    Empirical estimates of the power and Type I error rate of the test of the classrooms-within-treatments effect in the nested analysis of variance approach are provided for a variety of nominal alpha levels and a range of classroom effect sizes and research designs. (SLD)
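
    The abstract summarizes simulation estimates; the basic idea of checking the empirical Type I error rate of the classrooms-within-treatments F test can be sketched with a small Monte Carlo experiment. The design sizes, alpha level, and normal-error null model below are illustrative assumptions, not the study's conditions.

      import numpy as np
      from scipy.stats import f

      rng = np.random.default_rng(2)

      def classrooms_within_treatments_p(y):
          """p-value of the classrooms-within-treatments F test for data of shape
          (treatments, classrooms per treatment, students per classroom)."""
          t, c, n = y.shape
          class_means = y.mean(axis=2)                       # (t, c)
          treat_means = y.mean(axis=(1, 2))                  # (t,)
          ss_class = n * ((class_means - treat_means[:, None]) ** 2).sum()
          ss_within = ((y - class_means[:, :, None]) ** 2).sum()
          df_class, df_within = t * (c - 1), t * c * (n - 1)
          F = (ss_class / df_class) / (ss_within / df_within)
          return f.sf(F, df_class, df_within)

      alpha, reps, rejections = 0.05, 2000, 0
      for _ in range(reps):
          y = rng.normal(size=(3, 4, 20))    # null model: no classroom effect at all
          rejections += classrooms_within_treatments_p(y) < alpha
      print("empirical Type I error rate:", rejections / reps)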

  9. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case.

    PubMed

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-07-25

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol in which one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied in other quantum information processing.

  10. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case

    NASA Astrophysics Data System (ADS)

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-07-01

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol in which one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied in other quantum information processing.

  11. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  12. Resident Physicians' Clinical Training and Error Rate: The Roles of Autonomy, Consultation, and Familiarity with the Literature

    ERIC Educational Resources Information Center

    Naveh, Eitan; Katz-Navon, Tal; Stern, Zvi

    2015-01-01

    Resident physicians' clinical training poses unique challenges for the delivery of safe patient care. Residents face special risks of involvement in medical errors since they have tremendous responsibility for patient care, yet they are novice practitioners in the process of learning and mastering their profession. The present study explores…

  13. Estimating the designated use attainment decision error rates of US Environmental Protection Agency's proposed numeric total phosphorus criteria for Florida, USA, colored lakes.

    PubMed

    McLaughlin, Douglas B

    2012-01-01

    The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, which is assumed to be causal in nature, forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a third error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors.
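
    The decision error rates described above can be illustrated with a toy simulation: draw lakes from an assumed log-linear chlorophyll-total P relationship with lognormal scatter, classify each lake by the chlorophyll threshold (the "true" status) and by a candidate total P criterion, and tabulate the disagreements. All parameter values below are invented for illustration and are not USEPA's.

      import numpy as np

      rng = np.random.default_rng(3)

      # Illustrative log-linear relation: log10(chl) = a + b*log10(TP) + scatter
      a, b, sigma = -0.7, 1.0, 0.25
      chl_threshold = 20.0     # µg/L chlorophyll a defining impairment
      tp_criterion = 30.0      # µg/L hypothetical total P criterion

      tp = 10 ** rng.uniform(0.5, 2.2, 50_000)          # lake geometric-mean TP, µg/L
      chl = 10 ** (a + b * np.log10(tp) + rng.normal(0.0, sigma, tp.size))

      impaired = chl > chl_threshold       # "true" designated-use status from chl a
      flagged = tp > tp_criterion          # decision based on the TP criterion alone

      type_i_like = np.mean(flagged & ~impaired)    # criterion exceeded, use attained
      type_ii_like = np.mean(~flagged & impaired)   # criterion met, use not attained
      print(f"Type I-like rate: {type_i_like:.3f}, Type II-like rate: {type_ii_like:.3f}")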

  14. American Recovery and Reinvestment Act of 2009. Interim Report on Customer Acceptance, Retention, and Response to Time-Based Rates from the Consumer Behavior Studies

    SciTech Connect

    Cappers, Peter; Hans, Liesel; Scheer, Richard

    2015-06-01

    Time-based rate programs, enabled by utility investments in advanced metering infrastructure (AMI), are increasingly being considered by utilities as tools to reduce peak demand and enable customers to better manage consumption and costs. There are several customer systems that are relatively new to the marketplace and have the potential for improving the effectiveness of these programs, including in-home displays (IHDs), programmable communicating thermostats (PCTs), and web portals. Policy and decision makers are interested in more information about customer acceptance, retention, and response before moving forward with expanded deployments of AMI-enabled new rates and technologies. Under the Smart Grid Investment Grant Program (SGIG), the U.S. Department of Energy (DOE) partnered with several utilities to conduct consumer behavior studies (CBS). The goals involved applying randomized and controlled experimental designs for estimating customer responses more precisely and credibly to advance understanding of time-based rates and customer systems, and provide new information for improving program designs, implementation strategies, and evaluations. The intent was to produce more robust and credible analysis of impacts, costs, benefits, and lessons learned and assist utility and regulatory decision makers in evaluating investment opportunities involving time-based rates. To help achieve these goals, DOE developed technical guidelines to help the CBS utilities estimate customer acceptance, retention, and response more precisely.

  15. Rates of assay success and genotyping error when single nucleotide polymorphism genotyping in non-model organisms: a case study in the Antarctic fur seal.

    PubMed

    Hoffman, J I; Tucker, R; Bridgett, S J; Clark, M S; Forcada, J; Slate, J

    2012-09-01

    Although single nucleotide polymorphisms (SNPs) are increasingly being recognized as powerful molecular markers, their application to non-model organisms can bring significant challenges. Among these are imperfect conversion rates of assays designed from in silico resources and the enhanced potential for genotyping error relative to pre-validated, highly optimized human SNPs. To explore these issues, we used Illumina's GoldenGate assay to genotype 480 Antarctic fur seal (Arctocephalus gazella) individuals at 144 putative SNPs derived from a 454 transcriptome assembly. One hundred and thirty-five polymorphic SNPs (93.8%) were automatically validated by the program GenomeStudio, and the initial genotyping error rate, estimated from nine replicate samples, was 0.004 per reaction. However, an almost tenfold further reduction in the error rate was achieved by excluding 31 loci (21.5%) that exhibited unclear clustering patterns, manually editing clusters to allow rescoring of ambiguous or incorrect genotypes, and excluding 18 samples (3.8%) with unreliable genotypes. After stringent quality filtering, we also found a counter-intuitive negative relationship between in silico minor allele frequency and the conversion rate, suggesting that some of our assays may have been designed from paralogous loci. Nevertheless, we obtained over 45 000 individual SNP genotypes with a final error rate of 0.0005, indicating that the GoldenGate assay is eminently capable of generating large, high-quality data sets for non-model organisms. This has positive implications for future studies of the evolutionary, behavioural and conservation genetics of natural populations.
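
    The initial error estimate of 0.004 per reaction above comes from comparing replicate samples. A minimal version of that calculation, counting discordant calls between paired replicate reactions and dividing by the number of reactions compared, is sketched below; the array shapes and genotype coding are assumptions for illustration.

      import numpy as np

      def replicate_error_rate(genotypes):
          """Per-reaction genotyping error rate estimated from paired replicate runs.

          genotypes: integer array of shape (pairs, loci, 2) holding the calls of each
          replicate pair; a mismatch at a locus implies at least one erroneous reaction
          among the two compared."""
          calls_a, calls_b = genotypes[:, :, 0], genotypes[:, :, 1]
          mismatches = np.sum(calls_a != calls_b)
          reactions_compared = 2 * calls_a.size
          return mismatches / reactions_compared

      # Hypothetical example: 9 replicate pairs typed at 135 loci, coded 0/1/2
      rng = np.random.default_rng(4)
      truth = rng.integers(0, 3, size=(9, 135))
      pairs = np.stack([truth, truth.copy()], axis=2)
      pairs[0, 5, 1] = (pairs[0, 5, 1] + 1) % 3        # inject one discordant call
      print(f"estimated error rate per reaction: {replicate_error_rate(pairs):.4f}")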

  16. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case

    PubMed Central

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-01-01

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol in which one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied in other quantum information processing. PMID:27452275

  17. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case.

    PubMed

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-01-01

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol in which one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied in other quantum information processing. PMID:27452275

  18. A software solution to estimate the SEU-induced soft error rate for systems implemented on SRAM-based FPGAs

    NASA Astrophysics Data System (ADS)

    Zhongming, Wang; Zhibin, Yao; Hongxia, Guo; Min, Lu

    2011-05-01

    SRAM-based FPGAs are very susceptible to radiation-induced Single-Event Upsets (SEUs) in space applications. The failure mechanisms in an FPGA's configuration memory differ from those in traditional memory devices. As a result, there is a growing demand for methodologies that can quantitatively evaluate the impact of this effect, and fault injection appears to meet this requirement. In this paper, we propose a new methodology to analyze soft errors in SRAM-based FPGAs. The method is based on an in-depth understanding of the device architecture and of the failure mechanisms induced by configuration upsets. The developed programs read in the placed and routed netlist, search for critical logic nodes and paths that may destroy the circuit's topological structure, and then query a database storing the decoded relationship between the configurable resources and the corresponding control bits to obtain the sensitive bits. Accelerator irradiation tests and fault injection experiments were carried out to validate this approach.

  19. Analysis of calibration data for the uranium active neutron coincidence counting collar with attention to errors in the measured neutron coincidence rate

    NASA Astrophysics Data System (ADS)

    Croft, Stephen; Burr, Tom; Favalli, Andrea; Nicholson, Andrew

    2016-03-01

    The declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar - Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative approaches (linear) to the same experimental and corresponding simulated representative datasets. We find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
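
    For readers unfamiliar with the two fitting routes being compared, the toy example below fits a simple Padé-type calibration curve both directly (nonlinear least squares) and after an algebraic transformation to a linear model. The functional form, parameter values, and noise level are illustrative assumptions, not the UNCL calibration itself, and the example ignores the errors-in-predictor issue that the paper analyzes.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(5)

      def pade(rho, a, b):
          """Padé-type calibration curve relating the response to linear density rho."""
          return a * rho / (1.0 + b * rho)

      rho = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])            # hypothetical densities
      rate = pade(rho, 12.0, 0.02) + rng.normal(0.0, 2.0, rho.size)  # noisy "measurements"

      # Route 1: nonlinear fit of the Padé form directly
      (a_nl, b_nl), _ = curve_fit(pade, rho, rate, p0=(10.0, 0.01))

      # Route 2: transform to a linear model, rho/rate = 1/a + (b/a)*rho, and fit it
      slope, intercept = np.polyfit(rho, rho / rate, 1)
      a_lin, b_lin = 1.0 / intercept, slope / intercept
      print("nonlinear fit :", a_nl, b_nl)
      print("linearized fit:", a_lin, b_lin)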

  20. Estimates of rates and errors for measurements of direct-γ and direct-γ + jet production by polarized protons at RHIC

    SciTech Connect

    Beddo, M.E.; Spinka, H.; Underwood, D.G.

    1992-08-14

    Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.

  1. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  2. Average bit error rate performance analysis of subcarrier intensity modulated MRC and EGC FSO systems with dual branches over M distribution turbulence channels

    NASA Astrophysics Data System (ADS)

    Wang, Ran-ran; Wang, Ping; Cao, Tian; Guo, Li-xin; Yang, Yintang

    2015-07-01

    Based on the space diversity reception, the binary phase-shift keying (BPSK) modulated free space optical (FSO) system over Málaga (M) fading channels is investigated in detail. Under independently and identically distributed and independently and non-identically distributed dual branches, the analytical average bit error rate (ABER) expressions in terms of H-Fox function for maximal ratio combining (MRC) and equal gain combining (EGC) diversity techniques are derived, respectively, by transforming the modified Bessel function of the second kind into the integral form of Meijer G-function. Monte Carlo (MC) simulation is also provided to verify the accuracy of the presented models.
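
    The Málaga-fading analysis above is closed-form; as a sanity-level illustration of the MRC versus EGC comparison, the sketch below Monte Carlo-averages the conditional BPSK error probability over two independent fading branches. For simplicity the M-distributed irradiance is replaced by unit-mean log-normal samples, so the numbers are placeholders rather than a reproduction of the paper's results.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(6)

      def aber_dual_branch(snr_db=15.0, sigma_ln=0.3, trials=300_000):
          """Average BER of subcarrier BPSK with two-branch MRC and EGC reception,
          using unit-mean log-normal irradiance as a stand-in fading model."""
          snr = 10 ** (snr_db / 10)
          I = rng.lognormal(mean=-sigma_ln**2 / 2, sigma=sigma_ln, size=(trials, 2))
          gamma_branch = snr * I**2                    # per-branch electrical SNR
          gamma_mrc = gamma_branch.sum(axis=1)         # maximal ratio combining
          gamma_egc = snr * I.sum(axis=1) ** 2 / 2     # equal gain combining
          q = norm.sf                                  # Gaussian Q-function
          return q(np.sqrt(2 * gamma_mrc)).mean(), q(np.sqrt(2 * gamma_egc)).mean()

      ber_mrc, ber_egc = aber_dual_branch()
      print(f"ABER with MRC: {ber_mrc:.2e}, ABER with EGC: {ber_egc:.2e}")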

  3. High-Speed Tracking Method Using Zero Phase Error Tracking-Feed-Forward (ZPET-FF) Control for High-Data-Transfer-Rate Optical Disk Drives

    NASA Astrophysics Data System (ADS)

    Koide, Daiichi; Yanagisawa, Hitoshi; Tokumaru, Haruki; Nakamura, Shoichi; Ohishi, Kiyoshi; Inomata, Koichi; Miyazaki, Toshimasa

    2004-07-01

    We describe the effectiveness of feed-forward control using the zero phase error tracking method (ZPET-FF control) for the tracking servo of high-data-transfer-rate optical disk drives, as we are developing an optical disk system to replace the conventional professional videotape recorder for recording high-definition television signals for news gathering and broadcast content production. The optical disk system requires a high data transfer rate of more than 200 Mbps and a large recording capacity, so fast and precise track-following control is indispensable. Here, we compare the characteristics of ZPET-FF control with those of conventional feedback control and repetitive control. Experimental results show that ZPET-FF control is more precise than feedback control, with the residual tracking error kept within a tolerance of 10 nm at a linear velocity of 26 m/s in an experimental setup using a blue-violet laser optical head and high-density media. The feasibility of achieving precise ZPET-FF control at 15,000 rpm is also presented.

  4. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  5. The cost-effectiveness and consumer acceptability of taxation strategies to reduce rates of overweight and obesity among children in Australia: study protocol

    PubMed Central

    2013-01-01

    Background Childhood obesity is a recognised public health problem and around 25% of Australian children are overweight or obese. A major contributor is the obesogenic environment which encourages over consumption of energy dense nutrient poor food. Taxation is commonly proposed as a mechanism to reduce consumption of poor food choices and hence reduce rates of obesity and overweight in the community. Methods/Design An economic model will be developed to assess the lifetime benefits and costs to a cohort of Australian children by reducing energy dense nutrient poor food consumption through taxation mechanisms. The model inputs will be derived from a series of smaller studies. Food options for taxation will be derived from literature and expert opinion, the acceptability and impact of price changes will be explored through a Citizen’s Jury and a discrete choice experiment and price elasticities will be derived from the discrete choice experiment and consumption data. Discussion The health care costs of managing rising levels of obesity are a challenge for all governments. This study will provide a unique contribution to the international knowledge base by engaging a variety of robust research techniques, with a multidisciplinary focus and be responsive to consumers from diverse socio-economic backgrounds. PMID:24330325

  6. Improving the Response Rate to a Street Survey: An Evaluation of the "But You Are Free to Accept or to Refuse" Technique.

    ERIC Educational Resources Information Center

    Gueguen, Nicolas; Pascual, Alexandre

    2005-01-01

    The "but you are free to accept or to refuse" technique is a compliance procedure in which someone is approached with a request by simply telling him/her that he/she is free to accept or to refuse the request. This semantic evocation leads to increased compliance with the request. Furthermore, in most of the studies in which this technique was…

  7. Marking Errors: A Simple Strategy

    ERIC Educational Resources Information Center

    Timmons, Theresa Cullen

    1987-01-01

    Indicates that using highlighters to mark errors produced a 76% class improvement in removing comma errors and a 95.5% improvement in removing apostrophe errors. Outlines two teaching procedures, to be followed before introducing this tool to the class, that enable students to remove errors at this effective rate. (JD)

  8. Attenuation and bit error rate for four co-propagating spatially multiplexed optical communication channels of exactly same wavelength in step index multimode fibers

    NASA Astrophysics Data System (ADS)

    Murshid, Syed H.; Chakravarty, Abhijit

    2011-06-01

    Spatial domain multiplexing (SDM) utilizes co-propagation of exactly the same wavelength in optical fibers to increase the bandwidth by integer multiples. Input signals from multiple independent single mode pigtail laser sources are launched at different input angles into a single multimode carrier fiber. The SDM channels follow helical paths and traverse the carrier fiber without interfering with each other. The optical energy from the different sources is spatially distributed and takes the form of concentric circular donut shaped rings, where each ring corresponds to an independent laser source. At the output end of the fiber these donut shaped independent channels can be separated either with the help of bulk optics or integrated concentric optical detectors. This paper presents the experimental setup and results for a four-channel SDM system, together with the attenuation and bit error rate for the individual channels of such a system.

  9. Bit-Error-Rate-Based Evaluation of Energy-Gap-Induced Super-Resolution Read-Only-Memory Disc in Blu-ray Disc Optics

    NASA Astrophysics Data System (ADS)

    Tajima, Hideharu; Yamada, Hirohisa; Hayashi, Tetsuya; Yamamoto, Masaki; Harada, Yasuhiro; Mori, Go; Akiyama, Jun; Maeda, Shigemi; Murakami, Yoshiteru; Takahashi, Akira

    2008-07-01

    The bit error rate (bER) of an energy-gap-induced super-resolution (EG-SR) read-only-memory (ROM) disc with a zinc oxide (ZnO) film was measured in Blu-ray Disc (BD) optics by the partial response maximum likelihood (PRML) detection method. The experimental capacity was 40 GB in a single-layered 120 mm disc, about 1.6 times that of the commercially available BD with 25 GB capacity. A bER near 1 × 10^-5 was obtained in an EG-SR ROM disc with a tantalum (Ta) reflective film. Practically available characteristics, including readout power margin, readout cyclability, environmental resistance, tilt margins, and focus offset margin, were also confirmed in the EG-SR ROM disc with 40 GB capacity.

  10. Bit-Error-Rate Evaluation of Energy-Gap-Induced Super-Resolution Read-Only-Memory Disc with Dual-Layer Structure

    NASA Astrophysics Data System (ADS)

    Yamada, Hirohisa; Hayashi, Tetsuya; Yamamoto, Masaki; Harada, Yasuhiro; Tajima, Hideharu; Maeda, Shigemi; Murakami, Yoshiteru; Takahashi, Akira

    2009-03-01

    Practically available readout characteristics were obtained in a dual-layer energy-gap-induced super-resolution (EG-SR) read-only-memory (ROM) disc with an 80 gigabyte (GB) capacity. One of the dual layers consisted of zinc oxide and titanium films and the other of zinc oxide and tantalum films. Bit error rates better than 3.0 × 10^-4 were obtained with a minimum readout power of approximately 1.6 mW in both layers, using a Blu-ray Disc tester with partial response maximum likelihood (PRML) detection. The dual-layer disc showed good tolerance to disc tilt and focus offset and good readout cyclability in both layers.

  11. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    PubMed

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

    In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression is derived for the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak to moderate oceanic turbulence; consequently, using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness has lower scintillation. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.

  12. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    PubMed

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

    In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression is derived for the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak to moderate oceanic turbulence; consequently, using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness has lower scintillation. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations. PMID:26560913
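
    The link between the scintillation index and the BER under the log-normal assumption mentioned above can be illustrated numerically: draw unit-mean log-normal irradiance samples with a given scintillation index and average the conditional error probability over them. The Q(I*sqrt(SNR)) conditional BER used below is a common simplified on-off-keying form, not the paper's PCFT-beam model, and the SNR value is arbitrary.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(7)

      def avg_ber_lognormal(scint_index=0.2, snr_db=14.0, samples=500_000):
          """Average BER over unit-mean log-normal irradiance fluctuations with
          scintillation index sigma_I^2 = scint_index."""
          snr = 10 ** (snr_db / 10)
          sigma2_ln = np.log(1.0 + scint_index)        # log-irradiance variance
          I = rng.lognormal(-sigma2_ln / 2, np.sqrt(sigma2_ln), samples)
          return norm.sf(I * np.sqrt(snr)).mean()      # mean of Q(I*sqrt(SNR))

      for si in (0.05, 0.2, 0.5):
          print(f"sigma_I^2 = {si:.2f} -> BER ~ {avg_ber_lognormal(si):.2e}")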

  13. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary precision arithmetics or symbolic algebra programs. But this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
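
    A minimal illustration of the rounding behaviour described above: the decimal 0.1 is not exactly representable as a binary machine number, and a long single-precision running sum accumulates the resulting error, while double precision keeps it far smaller.

      import numpy as np

      # Representation error: 0.1 and 0.2 are rounded to nearby machine numbers
      print(0.1 + 0.2 == 0.3)            # False
      print(f"{0.1 + 0.2:.17f}")         # 0.30000000000000004

      # Accumulation: a sequential float32 sum of 10^7 copies of 0.1 drifts visibly
      x = np.full(10_000_000, 0.1, dtype=np.float32)
      sequential_f32 = np.cumsum(x)[-1]              # forces one-by-one accumulation
      print("float32 running sum:", float(sequential_f32))
      print("float64 sum        :", float(x.astype(np.float64).sum()))
      print("intended value     :", 1_000_000.0)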

  14. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
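
    The contrast between total and local error reduction can be made concrete with a toy update rule. The sketch below is a generic illustration (a Rescorla-Wagner-style TER rule versus a per-cue LER rule with invented learning-rate and trial values), not the specific models compared in the paper: after repeated reinforcement of a two-cue compound, TER lets the cues share the asymptote, while LER drives each cue to the full asymptote.

      import numpy as np

      def train(n_cues, trials, rule, lr=0.2, lam=1.0):
          """Update associative strengths V for cues selected by boolean masks.

          rule='TER': learning driven by (lambda - sum of V over present cues)
          rule='LER': each present cue learns from (lambda - its own V)"""
          V = np.zeros(n_cues)
          for present, outcome in trials:
              if rule == "TER":
                  V[present] += lr * (lam * outcome - V[present].sum())
              else:
                  V[present] += lr * (lam * outcome - V[present])
          return V

      # 40 reinforced trials with the AB compound (both cues present, outcome = 1)
      ab = np.array([True, True])
      compound_trials = [(ab, 1)] * 40
      print("TER:", train(2, compound_trials, "TER"))   # cues end up sharing lambda
      print("LER:", train(2, compound_trials, "LER"))   # each cue approaches lambda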

  15. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates arising from changes in rain statistics due to 1) evolution of the official algorithms used to process the data and 2) differences from other remote sensing systems, such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.

  16. Preventing errors in laterality.

    PubMed

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2015-04-01

    An error in laterality is the reporting of a finding that is present on the right side as being on the left, or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in separate colors. This allows the radiologist to correlate all detected laterality terms in the report with the images open in PACS and correct them before the report is finalized. The system was monitored each time an error in laterality was detected. It detected 32 errors in laterality over a 7-month period (a rate of 0.0007 %), with CT having the highest error detection rate of all modalities. Significantly more errors were detected in male patients than in female patients. In conclusion, our study demonstrated that with our system, laterality errors can be detected and corrected prior to finalizing reports.
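
    The core of such a system, finding and highlighting every laterality term so the radiologist can check it against the images, can be sketched in a few lines. This is a generic illustration, not the authors' software; the marker style and term list are assumptions.

      import re

      LATERALITY = re.compile(r"\b(left|right|bilateral)\b", re.IGNORECASE)

      def highlight_laterality(report: str) -> str:
          """Wrap every laterality term in markers so it stands out for review
          (a plain-text stand-in for color highlighting in a viewer)."""
          return LATERALITY.sub(lambda m: f"[[{m.group(0).upper()}]]", report)

      def laterality_terms(report: str):
          """Distinct laterality terms used, for a quick consistency check against
          the side recorded in the order or PACS metadata."""
          return sorted({m.group(0).lower() for m in LATERALITY.finditer(report)})

      report = "There is a 1.2 cm nodule in the right upper lobe. The left lung is clear."
      print(highlight_laterality(report))
      print(laterality_terms(report))     # ['left', 'right']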

  17. Smaller hospitals accept advertising.

    PubMed

    Mackesy, R

    1988-07-01

    Administrators at small- and medium-sized hospitals gradually have accepted the role of marketing in their organizations, albeit at a much slower rate than larger institutions. This update of a 1983 survey tracks the increasing competitiveness, complexity and specialization of providing health care and of advertising a small hospital's services. PMID:10288550

  18. Correlation of anomalous write error rates and ferromagnetic resonance spectrum in spin-transfer-torque-magnetic-random-access-memory devices containing in-plane free layers

    SciTech Connect

    Evarts, Eric R.; Rippard, William H.; Pufall, Matthew R.; Heindl, Ranko

    2014-05-26

    In a small fraction of magnetic-tunnel-junction-based magnetic random-access memory devices with in-plane free layers, the write-error rates (WERs) are higher than expected on the basis of the macrospin or quasi-uniform magnetization reversal models. In devices with increased WERs, the product of effective resistance and area, tunneling magnetoresistance, and coercivity do not deviate from typical device properties. However, the field-swept, spin-torque, ferromagnetic resonance (FS-ST-FMR) spectra with an applied DC bias current deviate significantly for such devices. With a DC bias of 300 mV (producing 9.9 × 10^6 A/cm^2) or greater, these anomalous devices show an increase in the fraction of the power present in FS-ST-FMR modes corresponding to higher-order excitations of the free-layer magnetization. As much as 70% of the power is contained in higher-order modes compared to ≈20% in typical devices. Additionally, a shift in the uniform-mode resonant field that is correlated with the magnitude of the WER anomaly is detected at DC biases greater than 300 mV. These differences in the anomalous devices indicate a change in the micromagnetic resonant mode structure at high applied bias.

  19. Evaluation by Monte Carlo simulations of the power limits and bit-error rate degradation in wavelength-division multiplexing networks caused by four-wave mixing.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2004-09-10

    Fiber nonlinearities can degrade the performance of a wavelength-division multiplexing optical network. For high input power, a low chromatic dispersion coefficient, or low channel spacing, the most severe penalties are due to four-wave mixing (FWM). To compute the bit-error rate that is due to FWM noise, one must evaluate accurately the probability-density functions (pdf) of both the space and the mark states. An accurate evaluation of the pdf of the FWM noise in the space state is given, for the first time to the authors' knowledge, by use of Monte Carlo simulations. Additionally, it is shown that the pdf in the mark state is not symmetric as had been assumed in previous studies. Diagrams are presented that permit estimation of the pdf, given the number of channels in the system. The accuracy of the previous models is also investigated, and finally the results of this study are used to estimate the power limits of a wavelength-division multiplexing system. PMID:15468703
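
    The practical point is that the BER must be computed from the actual, possibly asymmetric, pdfs of the mark and space decision variables rather than from a Gaussian assumption. The toy Monte Carlo below illustrates this with invented noise statistics (not the paper's FWM model): it sweeps the decision threshold and reads off the empirically lowest BER.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 1_000_000

        # Invented decision-variable statistics: an asymmetric (gamma-tailed) mark level
        # and a folded-Gaussian space level; in the paper these pdfs come from Monte Carlo
        # simulation of the FWM products themselves.
        mark = 1.0 - rng.gamma(shape=2.0, scale=0.08, size=n)
        space = np.abs(rng.normal(0.0, 0.12, size=n))

        thresholds = np.linspace(0.2, 0.8, 61)
        ber = [(np.count_nonzero(mark < t) + np.count_nonzero(space > t)) / (2 * n)
               for t in thresholds]
        best = int(np.argmin(ber))
        print(f"lowest BER ≈ {ber[best]:.2e} at decision threshold {thresholds[best]:.2f}")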

  1. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  2. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  3. Design and Demonstration of a 4×4 SFQ Network Switch Prototype System and 10-Gbps Bit-Error-Rate Measurement

    NASA Astrophysics Data System (ADS)

    Kameda, Yoshio; Hashimoto, Yoshihito; Yorozu, Shinichi

    We developed a 4×4 SFQ network switch prototype system and demonstrated its operation at 10 Gbps. The system's core is composed of two SFQ chips: a 4×4 switch and a 6-channel voltage driver. The 4×4 switch chip contained both a switch fabric (i.e., a data path) and a switch scheduler (i.e., a controller). Both chips were attached to a multichip-module (MCM) carrier, which was then installed in a cryocooled system with 32 10-Gbps ports. Each chip contained about 2100 Josephson junctions on a 5-mm×5-mm die. An NEC standard 2.5-kA/cm^2 fabrication process was used for the switch chip. We increased the critical current density to 10 kA/cm^2 for the driver chip to improve speed while maintaining wide bias margins. MCM implementation enabled us to use a hybrid critical current density technology. Voltage pulses were transferred between the two chips through passive transmission lines on the MCM carrier. The cryocooled system was cooled down to about 4 K using a two-stage 1-W cryocooler. We correctly operated the whole system at 10 Gbps. The switch scheduler, which is driven by an on-chip clock generator, operated at 40 GHz. The speed gap between SFQ and room-temperature devices was filled by on-chip SFQ FIFO buffers or shift registers. We measured the bit error rate at 10 Gbps and found that it was on the order of 10^-13 for the 4×4 SFQ switch fabric. In addition, using semiconductor interface circuitry, we built a four-port SFQ Ethernet switch. All the components except for a compressor were installed in a standard 19-inch rack, filling a space 21 U (933.5 mm or 36.75 inches) in height. After four personal computers (PCs) were connected to the switch, we successfully transferred video data between them.
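
    A BER quoted as being of order 10^-13 at 10 Gbps implies long measurement times; assuming the common rule of thumb that observing k errors gives a relative uncertainty of roughly 1/sqrt(k), a generic back-of-envelope sketch (not tied to this particular setup) is:

        def seconds_to_observe(ber, bit_rate_bps, target_errors=10):
            """Measurement time needed to accumulate `target_errors` errors at a given BER."""
            return (target_errors / ber) / bit_rate_bps

        for k in (1, 10, 100):
            t = seconds_to_observe(1e-13, 10e9, k)
            print(f"{k:>3} errors at BER 1e-13 and 10 Gbps: {t:,.0f} s (~{t / 3600:.1f} h)")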

  4. Enhanced notification of infusion pump programming errors.

    PubMed

    Evans, R Scott; Carlson, Rick; Johnson, Kyle V; Palmer, Brent K; Lloyd, James F

    2010-01-01

    Hospitalized patients receive countless doses of medications through manually programmed infusion pumps. Many medication errors are the result of programming incorrect pump settings. When used appropriately, smart pumps have the potential to detect some programming errors. However, based on the current use of smart pumps, there are conflicting reports on their ability to prevent patient harm without additional capabilities and interfaces to electronic medical records (EMR). We developed a smart system, connected to the EMR (including medication charting), that can detect and alert on potential pump programming errors. The system monitors acceptable programming limits for initial doses and for dose-rate increases for 23 high-risk medications. During 22.5 months in a 24-bed ICU, 970 alerts (4% of 25,040 doses, 1.4 alerts per day) were generated for pump settings programmed outside acceptable limits, of which 137 (14%) were found to have prevented potential harm. Monitoring pump programming at the system level rather than at the pump provides access to additional patient data in the EMR, including previous dosage levels, other concurrent medications, caloric intake, age, gender, vitals and laboratory results.
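
    A schematic of the kind of system-level check described, with hypothetical drug names, limits and units (not the authors' rule base): compare a newly programmed rate against a per-drug starting-dose limit and against the previously programmed rate.

        # Hypothetical per-drug limits; drug names, values and units are illustrative only.
        LIMITS = {
            "heparin": {"max_start_rate": 2000, "max_increase_fraction": 0.25},
            "insulin": {"max_start_rate": 10, "max_increase_fraction": 0.50},
        }

        def check_program(drug, new_rate, previous_rate=None):
            """Return a list of alert messages for a newly programmed pump rate."""
            limits = LIMITS[drug]
            alerts = []
            if previous_rate is None and new_rate > limits["max_start_rate"]:
                alerts.append(f"{drug}: starting rate {new_rate} exceeds limit "
                              f"{limits['max_start_rate']}")
            if previous_rate is not None and \
                    new_rate > previous_rate * (1 + limits["max_increase_fraction"]):
                alerts.append(f"{drug}: increase from {previous_rate} to {new_rate} "
                              f"exceeds the {limits['max_increase_fraction']:.0%} step limit")
            return alerts

        print(check_program("heparin", new_rate=1800, previous_rate=1200))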

  5. Error compensation for thermally induced errors on a machine tool

    SciTech Connect

    Krulewich, D.A.

    1996-11-08

    Heat flow from internal and external sources and the environment creates machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The biggest problem is determining how many temperature sensors are required and where to locate them. This research develops a method to determine the number and location of temperature measurements.
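
    A sketch of such a simple linear model fitted to synthetic data; the sensor-ranking step shown is only a crude stand-in for the sensor number/location method the report develops.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic data: 8 temperature sensors, 200 observations, deflection driven by two of them.
        T = rng.normal(20.0, 3.0, size=(200, 8))
        deflection = 0.8 * (T[:, 1] - 20.0) + 0.3 * (T[:, 5] - 20.0) + rng.normal(0.0, 0.05, 200)

        # Fit deflection ≈ c0 + sum_i c_i * T_i by least squares.
        X = np.column_stack([np.ones(len(T)), T])
        coeffs, *_ = np.linalg.lstsq(X, deflection, rcond=None)
        rms_residual = np.sqrt(np.mean((X @ coeffs - deflection) ** 2))
        print("fitted coefficients:", np.round(coeffs, 3))
        print("rms residual:", round(rms_residual, 4))

        # Crude sensor ranking by single-sensor correlation with the deflection.
        corr = [abs(np.corrcoef(T[:, i], deflection)[0, 1]) for i in range(8)]
        print("sensors ranked by correlation:", np.argsort(corr)[::-1])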

  6. Can reading rate acceleration improve error monitoring and cognitive abilities underlying reading in adolescents with reading difficulties and in typical readers?

    PubMed

    Horowitz-Kraus, Tzipi; Breznitz, Zvia

    2014-01-28

    Dyslexia is characterized by slow, inaccurate reading and by deficits in executive functions. The deficit in reading is exemplified by impaired error monitoring, which can be specifically shown through neuroimaging, in changes in Error-/Correct-related negativities (ERN/CRN). The current study aimed to investigate whether a reading intervention program (Reading Acceleration Program, or RAP) could improve overall reading, as well as error monitoring and other cognitive abilities underlying reading, in adolescents with reading difficulties. Participants with reading difficulties and typical readers were trained with the RAP for 8 weeks. Their reading and error monitoring were characterized both behaviorally and electrophysiologically through a lexical decision task. Behaviorally, the reading training improved "contextual reading speed" and decreased reading errors in both groups. Improvements were also seen in speed of processing, memory and visual screening. Electrophysiologically, ERN increased in both groups following training, but the increase was significantly greater in the participants with reading difficulties. Furthermore, an association between the improvement in reading speed and the change in difference between ERN and CRN amplitudes following training was seen in participants with reading difficulties. These results indicate that improving deficits in error monitoring and speed of processing are possible underlying mechanisms of the RAP intervention. We suggest that ERN is a good candidate for use as a measurement in evaluating the effect of reading training in typical and disabled readers.

  7. An Observational Study of the Impact of a Computerized Physician Order Entry System on the Rate of Medication Errors in an Orthopaedic Surgery Unit

    PubMed Central

    Hernandez, Fabien; Majoul, Elyes; Montes-Palacios, Carlota; Antignac, Marie; Cherrier, Bertrand; Doursounian, Levon; Feron, Jean-Marc; Robert, Cyrille; Hejblum, Gilles; Fernandez, Christine; Hindlet, Patrick

    2015-01-01

    Aim To assess the impact of the implementation of a Computerized Physician Order Entry (CPOE) system associated with pharmaceutical checking of medication orders on medication errors in the 3 stages of drug management (i.e. prescription, dispensing and administration) in an orthopaedic surgery unit. Methods A before-after observational study was conducted in the 66-bed orthopaedic surgery unit of a teaching hospital (700 beds) in Paris, France. Direct disguised observation was used to detect errors in prescription, dispensing and administration of drugs, before and after the introduction of computerized prescriptions. Compliance of dispensing and administration with the medical prescription was studied. The frequencies and types of errors in prescribing, dispensing and administration were investigated. Results During the pre- and post-CPOE periods (two days for each period), 111 and 86 patients were observed, respectively, with 1,593 and 1,388 prescribed drugs. The use of electronic prescribing led to a significant 92% decrease in prescribing errors (479/1,593 prescribed drugs (30.1%) vs 33/1,388 (2.4%), p < 0.0001) and to a significant 17.5% decrease in administration errors (209/1,222 opportunities (17.1%) vs 200/1,413 (14.2%), p < 0.05). No significant difference was found with regard to dispensing errors (430/1,219 opportunities (35.3%) vs 449/1,407 (31.9%), p = 0.07). Conclusion The use of CPOE and a pharmacist checking medication orders in an orthopaedic surgery unit reduced the incidence of medication errors in the prescribing and administration stages. The study results suggest that CPOE is a convenient system for improving the quality and safety of drug management. PMID:26207363

  8. TU-C-BRE-08: IMRT QA: Selecting Meaningful Gamma Criteria Based On Error Detection Sensitivity

    SciTech Connect

    Steers, J; Fraass, B

    2014-06-15

    Purpose: To develop a strategy for defining meaningful tolerance limits and studying the sensitivity of IMRT QA gamma criteria by inducing known errors in QA plans. Methods: IMRT QA measurements (ArcCHECK, Sun Nuclear) were compared to QA plan calculations with induced errors. Many (>24) gamma comparisons between data and calculations were performed for each of several kinds of cases and classes of induced error types with varying magnitudes (e.g. MU errors ranging from -10% to +10%), resulting in over 3,000 comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using various gamma criteria. Results: This study demonstrates that random, case-specific, and systematic errors can be detected by the error curve analysis. Depending on location of the peak of the error curve (e.g., not centered about zero), 3%/3mm threshold=10% criteria may miss MU errors of up to 10% and random MLC errors of up to 5 mm. Additionally, using larger dose thresholds for specific devices may increase error sensitivity (for the same X%/Ymm criteria) by up to a factor of two. This analysis will allow clinics to select more meaningful gamma criteria based on QA device, treatment techniques, and acceptable error tolerances. Conclusion: We propose a strategy for selecting gamma parameters based on the sensitivity of gamma criteria and individual QA devices to induced calculation errors in QA plans. Our data suggest large errors may be missed using conventional gamma criteria and that using stricter criteria with an increased dose threshold may reduce the range of missed errors. This approach allows quantification of gamma criteria sensitivity and is straightforward to apply to other combinations of devices and treatment techniques.
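
    A heavily simplified sketch of the error-curve idea: induce a known global MU/dose scaling error in the calculation, compare it with an unperturbed synthetic "measurement", and tabulate the passing rate versus error magnitude. A clinical analysis would use a full gamma (dose difference plus distance to agreement) evaluation on the QA device geometry; the plain dose-difference criterion below is my stand-in, not the authors' tool.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic 1-D dose profile standing in for a measured QA distribution.
        x = np.linspace(-10, 10, 401)
        measured = np.exp(-(x / 4.0) ** 2) + 0.01 * rng.normal(size=x.size)

        def passing_rate(calculated, measured, dose_tol=0.03, low_dose_cut=0.10):
            """Percent of points above a low-dose threshold agreeing within dose_tol of max."""
            mask = measured > low_dose_cut * measured.max()
            diff = np.abs(calculated - measured)[mask]
            return 100.0 * np.mean(diff <= dose_tol * measured.max())

        for mu_error in (-0.10, -0.05, -0.02, 0.0, 0.02, 0.05, 0.10):
            calc = (1.0 + mu_error) * np.exp(-(x / 4.0) ** 2)   # induced global scaling error
            print(f"MU error {mu_error:+5.0%}: passing rate {passing_rate(calc, measured):5.1f}%")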

  9. Type I Error Rates and Statistical Power for the James Second-Order Test and the Univariate F Test in Two-Way Fixed-Effects ANOVA Models under Heteroscedasticity and/or Nonnormality.

    ERIC Educational Resources Information Center

    Hsiung, Tung-Hsing; Olejnik, Stephen

    This study investigated the robustness of the James second-order test (James 1951; Wilcox, 1989) and the univariate F test under a two-factor fixed-effect analysis of variance (ANOVA) model in which cell variances were heterogeneous and/or distributions were nonnormal. With computer-simulated data, Type I error rates and statistical power for the…
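
    The flavor of such a simulation is easy to reproduce: draw samples under a true null hypothesis but with unequal group variances, apply the ordinary F test, and count rejections at the nominal alpha. The sketch below uses a one-way layout for brevity, whereas the study examined a two-way fixed-effects model.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        n_sims, alpha = 10_000, 0.05
        group_sizes = (10, 20, 40)       # unequal sample sizes ...
        group_sds = (4.0, 2.0, 1.0)      # ... paired with unequal variances (all true means 0)

        rejections = 0
        for _ in range(n_sims):
            groups = [rng.normal(0.0, sd, size=n) for n, sd in zip(group_sizes, group_sds)]
            if stats.f_oneway(*groups).pvalue < alpha:
                rejections += 1

        print(f"empirical Type I error rate: {rejections / n_sims:.3f} (nominal {alpha})")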

  10. Effect of the Transcendental Meditation Program on Graduation, College Acceptance and Dropout Rates for Students Attending an Urban Public High School

    ERIC Educational Resources Information Center

    Colbert, Robert D.

    2013-01-01

    High school graduation rates nationally have declined in recent years, despite public and private efforts. The purpose of the current study was to determine whether practice of the Quiet Time/Transcendental Meditation® program at a medium-size urban school results in higher school graduation rates compared to students who do not receive training…

  11. Functional Error Models to Accelerate Nested Sampling

    NASA Astrophysics Data System (ADS)

    Josset, L.; Elsheikh, A. H.; Demyanov, V.; Lunati, I.

    2014-12-01

    Within the nested sampling algorithm, the proposed geostatistical realization is first evaluated through the approximate model to decide whether it is worth performing a full-physics simulation. This improves the acceptance rate of full-physics simulations and opens the door to iteratively testing the performance and improving the quality of the error model.
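
    A generic sketch of the two-stage screening idea (the toy "full model" and "proxy" below are placeholders, not the authors' flow simulator or functional error model): a proposal is rejected cheaply whenever the proxy says it is clearly not useful, and the expensive full-physics evaluation is spent only on promising candidates.

        import numpy as np

        rng = np.random.default_rng(5)

        def full_model(x):
            """Stand-in for an expensive full-physics simulation (here just a cheap misfit)."""
            return np.sum((x - 1.0) ** 2)

        def proxy_model(x):
            """Stand-in for the approximate model corrected by a functional error model."""
            return full_model(x) * (1.0 + 0.1 * rng.normal())   # full model plus pretend error

        misfit_threshold = 2.0           # e.g. the current acceptance bound
        full_runs = accepted = 0
        for _ in range(1000):
            proposal = rng.normal(1.0, 1.0, size=3)              # proposed realization
            if proxy_model(proposal) > 1.5 * misfit_threshold:
                continue                 # cheap rejection: proxy says clearly not useful
            full_runs += 1               # only now pay for the full simulation
            if full_model(proposal) < misfit_threshold:
                accepted += 1

        print(f"full-physics runs: {full_runs} of 1000 proposals; accepted: {accepted}")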

  12. Bit error rate analysis of Gaussian, annular Gaussian, cos Gaussian, and cosh Gaussian beams with the help of random phase screens.

    PubMed

    Eyyuboğlu, Halil T

    2014-06-10

    Using the random phase screen approach, we carry out a simulation analysis of the probability of error performance of Gaussian, annular Gaussian, cos Gaussian, and cosh Gaussian beams. In our scenario, these beams are intensity-modulated by the randomly generated binary symbols of an electrical message signal and then launched from the transmitter plane in equal powers. They propagate through a turbulent atmosphere modeled by a series of random phase screens. Upon arriving at the receiver plane, detection is performed in a circuitry consisting of a pin photodiode and a matched filter. The symbols detected are compared with the transmitted ones, errors are counted, and from there the probability of error is evaluated numerically. Within the range of source and propagation parameters tested, the lowest probability of error is obtained for the annular Gaussian beam. Our investigation reveals that there is hardly any difference between the aperture-averaged scintillations of the beams used, and the distinctive advantage of the annular Gaussian beam lies in the fact that the receiver aperture captures the maximum amount of power when this particular beam is launched from the transmitter plane.

  13. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    ERIC Educational Resources Information Center

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  14. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  15. Motion estimation performance models with application to hardware error tolerance

    NASA Astrophysics Data System (ADS)

    Cheong, Hye-Yeon; Ortega, Antonio

    2007-01-01

    The progress of VLSI technology towards deep sub-micron feature sizes, e.g., sub-100 nanometer technology, has created a growing impact of hardware defects and fabrication process variability, which lead to reductions in yield rate. To address these problems, a new approach, system-level error tolerance (ET), has been recently introduced. Considering that a significant percentage of the entire chip production is discarded due to minor imperfections, this approach is based on accepting imperfect chips that introduce imperceptible/acceptable system-level degradation; this leads to increases in overall effective yield. In this paper, we investigate the impact of hardware faults on the video compression performance, with a focus on the motion estimation (ME) process. More specifically, we provide an analytical formulation of the impact of single and multiple stuck-at-faults within ME computation. We further present a model for estimating the system-level performance degradation due to such faults, which can be used for the error tolerance based decision strategy of accepting a given faulty chip. We also show how different faults and ME search algorithms compare in terms of error tolerance and define the characteristics of search algorithms that lead to increased error tolerance. Finally, we show that different hardware architectures performing the same metric computation have different error tolerance characteristics and we present the optimal ME hardware architecture in terms of error tolerance. While we focus on ME hardware, our work could also be applied to systems (e.g., classifiers, matching pursuits, vector quantization) where a selection is made among several alternatives (e.g., class label, basis function, quantization codeword) based on which choice minimizes an additive metric of interest.

  16. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  17. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology.

    PubMed

    Cafri, Guy; Kromrey, Jeffrey D; Brannick, Michael T

    2010-03-31

    This article uses meta-analyses published in Psychological Bulletin from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual moderators in multivariate analyses, and tests of residual variability within individual levels of categorical moderators had the lowest and most concerning levels of power. Using methods of calculating power prospectively for significance tests in meta-analysis, we illustrate how power varies as a function of the number of effect sizes, the average sample size per effect size, effect size magnitude, and level of heterogeneity of effect sizes. In most meta-analyses many significance tests were conducted, resulting in a sizable estimated probability of a Type I error, particularly for tests of means within levels of a moderator, univariate categorical moderators, and residual variability within individual levels of a moderator. Across all surveyed studies, the median effect size and the median difference between two levels of study level moderators were smaller than Cohen's (1988) conventions for a medium effect size for a correlation or difference between two correlations. The median Birge's (1932) ratio was larger than the convention of medium heterogeneity proposed by Hedges and Pigott (2001) and indicates that the typical meta-analysis shows variability in underlying effects well beyond that expected by sampling error alone. Fixed-effects models were used with greater frequency than random-effects models; however, random-effects models were used with increased frequency over time. Results related to model selection of this study are carefully compared with those from Schmidt, Oh, and Hayes (2009), who independently designed and produced a study similar to the one reported here. Recommendations for conducting future meta

  18. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  19. Error-Related Psychophysiology and Negative Affect

    ERIC Educational Resources Information Center

    Hajcak, G.; McDonald, N.; Simons, R.F.

    2004-01-01

    The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…

  20. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
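
    Error-burst and good-data-gap statistics of the kind described can be summarized from a stream of per-byte error flags with a simple run-length pass; the flag stream below is synthetic, not photoCD measurement data.

        import numpy as np
        from itertools import groupby
        from collections import Counter

        rng = np.random.default_rng(6)

        # Synthetic per-byte error flags with occasional bursts (illustrative only).
        flags = (rng.random(100_000) < 0.001).astype(int)
        for start in rng.integers(0, 100_000 - 20, size=30):     # inject 30 short bursts
            flags[start:start + rng.integers(2, 20)] = 1

        def run_lengths(flags, value):
            """Lengths of consecutive runs of `value` (1 = error burst, 0 = good-data gap)."""
            return [len(list(g)) for v, g in groupby(flags) if v == value]

        bursts, gaps = run_lengths(flags, 1), run_lengths(flags, 0)
        print("burst-length histogram:", Counter(bursts))
        print(f"mean good-data gap: {np.mean(gaps):.0f} bytes, longest burst: {max(bursts)} bytes")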

  1. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  2. Error and its meaning in forensic science.

    PubMed

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.

  3. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The signal-to-noise ratio required at the receiver for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
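
    One element of the concatenated scheme, interleaving to spread burst errors across Reed-Solomon codewords, is easy to illustrate. The sketch below is a generic row/column block interleaver with depth 5 (matching the interleave depth mentioned), not the CCSDS-conformant implementation.

        def interleave(symbols, depth=5):
            """Write symbols row-wise into `depth` rows, read them out column-wise."""
            assert len(symbols) % depth == 0
            width = len(symbols) // depth
            rows = [symbols[i * width:(i + 1) * width] for i in range(depth)]
            return [rows[i][j] for j in range(width) for i in range(depth)]

        def deinterleave(symbols, depth=5):
            """Inverse operation: write column-wise, read row-wise."""
            assert len(symbols) % depth == 0
            width = len(symbols) // depth
            cols = [symbols[j * depth:(j + 1) * depth] for j in range(width)]
            return [cols[j][i] for i in range(depth) for j in range(width)]

        data = list(range(20))
        tx = interleave(data)
        tx[6:10] = ["X"] * 4            # a burst of 4 corrupted symbols on the channel
        print(deinterleave(tx))         # the burst is now spread out as isolated errors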

  5. Estimates of rates and errors for measurements of direct-γ and direct-γ + jet production by polarized protons at RHIC

    SciTech Connect

    Beddo, M.E.; Spinka, H.; Underwood, D.G.

    1992-08-14

    Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.

  6. Sepsis: Medical errors in Poland.

    PubMed

    Rorat, Marta; Jurek, Tomasz

    2016-01-01

    Health, safety and medical errors are currently the subject of worldwide discussion. The authors analysed medico-legal opinions trying to determine types of medical errors and their impact on the course of sepsis. The authors carried out a retrospective analysis of 66 medico-legal opinions issued by the Wroclaw Department of Forensic Medicine between 2004 and 2013 (at the request of the prosecutor or court) in cases examined for medical errors. Medical errors were confirmed in 55 of the 66 medico-legal opinions. The age of victims varied from 2 weeks to 68 years; 49 patients died. The analysis revealed medical errors committed by 113 health-care workers: 98 physicians, 8 nurses and 8 emergency medical dispatchers. In 33 cases, an error was made before hospitalisation. Hospital errors occurred in 35 victims. Diagnostic errors were discovered in 50 patients, including 46 cases of sepsis being incorrectly recognised and insufficient diagnoses in 37 cases. Therapeutic errors occurred in 37 victims, organisational errors in 9 and technical errors in 2. In addition to sepsis, 8 patients also had a severe concomitant disease and 8 had a chronic disease. In 45 cases, the authors observed glaring errors, which could incur criminal liability. There is an urgent need to introduce a system for reporting and analysing medical errors in Poland. The development and popularisation of standards for identifying and treating sepsis across basic medical professions is essential to improve patient safety and survival rates. Procedures should be introduced to prevent health-care workers from administering incorrect treatment in cases.

  7. Passport officers' errors in face matching.

    PubMed

    White, David; Kemp, Richard I; Jenkins, Rob; Matheson, Michael; Burton, A Mike

    2014-01-01

    Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of 'fraudulent' photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately--though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection.

  8. An adaptive error-resilient video encoder

    NASA Astrophysics Data System (ADS)

    Cheng, Liang; El Zarki, Magda

    2003-06-01

    When designing an encoder for a real-time video application over a wireless channel, we must take into consideration the unpredictable fluctuation of the quality of the channel and its impact on the transmitted video data. This uncertainty motivates the development of an adaptive video encoding mechanism that can compensate for the infidelity caused by data loss and/or by the post-processing (error concealment) at the decoder. In this paper, we first explore the major factors that cause quality degradation. We then propose an adaptive progressive replenishment algorithm for a packet-loss-rate (PLR) feedback enabled system. Assuming the availability of a feedback channel, we discuss a video quality assessment method, which allows the encoder to be aware of the decoder-side perceptual quality. Finally, we present a novel dual-feedback mechanism that guarantees an acceptable level of quality at the receiver side with a modest increase in the complexity of the encoder.

  10. Errors in clinical laboratories or errors in laboratory medicine?

    PubMed

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  11. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
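
    For orientation, the sketch below shows plain least-significant-bit replacement, the baseline that the patented modular method improves on by permuting the processing order and roughly halving the introduced error; it is a generic illustration, not the patented algorithm.

        import numpy as np

        def embed_lsb(host, bits):
            """Replace the least-significant bit of the first len(bits) host values."""
            out = host.copy()
            out[:bits.size] = (out[:bits.size] & 0xFE) | bits
            return out

        def extract_lsb(stego, n_bits):
            return stego[:n_bits] & 1

        host = np.array([200, 201, 13, 92, 57, 114], dtype=np.uint8)   # e.g. pixel values
        payload = np.array([1, 0, 1, 1], dtype=np.uint8)
        stego = embed_lsb(host, payload)
        assert np.array_equal(extract_lsb(stego, payload.size), payload)
        print(host, "->", stego)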

  12. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  13. Offer/Acceptance Ratio.

    ERIC Educational Resources Information Center

    Collins, Mimi

    1997-01-01

    Explores how human resource professionals, with above average offer/acceptance ratios, streamline their recruitment efforts. Profiles company strategies with internships, internal promotion, cooperative education programs, and how to get candidates to accept offers. Also discusses how to use the offer/acceptance ratio as a measure of program…

  14. An investigation of error correcting techniques for OMV data

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Fryer, John

    1992-01-01

    Papers on the following topics are presented: considerations of testing the Orbital Maneuvering Vehicle (OMV) system with CLASS; OMV CLASS test results (first go around); equivalent system gain available from R-S encoding versus a desire to lower the power amplifier from 25 watts to 20 watts for OMV; command word acceptance/rejection rates for OMV; a memo concerning energy-to-noise ratio for the Viterbi-BSC Channel and the impact of Manchester coding loss; and an investigation of error correcting techniques for OMV and Advanced X-ray Astrophysics Facility (AXAF).

  15. Error growth in operational ECMWF forecasts

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Dalcher, A.

    1985-01-01

    A parameterization scheme used at the European Centre for Medium-Range Weather Forecasts (ECMWF) to model the average growth of the difference between forecasts on consecutive days was extended by including the effect of forecast model deficiencies on error growth. Error was defined as the difference between the forecast and analysis fields at the verification time. Systematic and random errors were considered separately in calculating the error variance for a 10-day operational forecast. A good fit was obtained with measured forecast errors, and a satisfactory trend was achieved in the difference between forecasts. Fitting six parameters to forecast errors and differences, performed separately for each wavenumber, revealed that the error growth rate grew with wavenumber. The saturation error decreased with the total wavenumber, and the limit of predictability, i.e., when error variance reaches 95 percent of saturation, decreased monotonically with the total wavenumber.
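
    For a concrete picture of this kind of parameterization, the sketch below integrates a logistic-type error-growth law with a constant source term for model deficiencies, dV/dt = (a*V + S)*(1 - V/V_inf); this specific functional form and the coefficients are assumptions made for illustration (in the spirit of later Dalcher-Kalnay-style fits), not the parameters fitted in the paper.

        import numpy as np

        a, S, V_inf = 0.35, 0.02, 1.0   # growth rate (1/day), source term, saturation (arbitrary)
        dt, days = 0.05, 10.0

        t = np.arange(0.0, days + dt, dt)
        V = np.zeros_like(t)            # forecast error variance, normalized by saturation
        for k in range(1, t.size):
            growth = (a * V[k - 1] + S) * (1.0 - V[k - 1] / V_inf)
            V[k] = V[k - 1] + dt * growth          # forward-Euler integration

        for day in range(0, 11, 2):
            print(f"day {day:2d}: error variance {V[round(day / dt)]:.2f} of saturation")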

  16. Errors associated with outpatient computerized prescribing systems

    PubMed Central

    Rothschild, Jeffrey M; Salzberg, Claudia; Keohane, Carol A; Zigmont, Katherine; Devita, Jim; Gandhi, Tejal K; Dalal, Anuj K; Bates, David W; Poon, Eric G

    2011-01-01

    Objective To report the frequency, types, and causes of errors associated with outpatient computer-generated prescriptions, and to develop a framework to classify these errors to determine which strategies have greatest potential for preventing them. Materials and methods This is a retrospective cohort study of 3850 computer-generated prescriptions received by a commercial outpatient pharmacy chain across three states over 4 weeks in 2008. A clinician panel reviewed the prescriptions using a previously described method to identify and classify medication errors. Primary outcomes were the incidence of medication errors; potential adverse drug events, defined as errors with potential for harm; and rate of prescribing errors by error type and by prescribing system. Results Of 3850 prescriptions, 452 (11.7%) contained 466 total errors, of which 163 (35.0%) were considered potential adverse drug events. Error rates varied by computerized prescribing system, from 5.1% to 37.5%. The most common error was omitted information (60.7% of all errors). Discussion About one in 10 computer-generated prescriptions included at least one error, of which a third had potential for harm. This is consistent with the literature on manual handwritten prescription error rates. The number, type, and severity of errors varied by computerized prescribing system, suggesting that some systems may be better at preventing errors than others. Conclusions Implementing a computerized prescribing system without comprehensive functionality and processes in place to ensure meaningful system use does not decrease medication errors. The authors offer targeted recommendations on improving computerized prescribing systems to prevent errors. PMID:21715428

  17. Diagnostic 'errors' in anatomical pathology: relevance to Australian laboratories.

    PubMed

    Leong, Anthony S Y; Braye, Stephen; Bhagwandeen, Brahm

    2006-12-01

    Failure to recognise that anatomical pathology diagnosis is a process of cognitive interpretation of the morphological features present in a small tissue sample has led to the public misperception that the process is infallible. The absence of a universally accepted definition of diagnostic error makes comparison of error rates impossible, and one large study of laboratories in the United States shows a significant error rate of about 5%, most of these errors having no major impact on patient management. A recent review of the work of one pathologist in New South Wales confirms a lack of appreciation in medical administration that variable diagnostic thresholds result in an inherent fallibility of anatomical pathology diagnoses. The outcome of the review emphasises the need to educate both the public and non-pathology colleagues about the nature of our work and brings into consideration the requirement to establish baseline error rates for Australian laboratories and the role of the Royal College of Pathologists of Australasia (RCPA) in developing fair and unbiased protocols for review of diagnostic errors. The responsibility of ensuring that diagnostic error rates are kept to the minimum is a shared one. Area health services must play their part by seeking to ensure that pathologists in any laboratory are not overworked and have adequate support and back-up from pathologists with expertise in specialised areas. It has been clearly enunciated by the Royal College of Pathologists in the United Kingdom that it is not safe for any histopathology service to be operated single-handedly by one histopathologist. Service managers and clinicians have to understand that country pathologists cannot provide the full range and depth of pathology expertise in the many clinical subspecialty areas that are often practised in non-metropolitan areas. Attending clinicians share the responsibility of accepting proffered pathology diagnoses only if they conform to the clinical context. Pathology

  18. Dependence of the bit error rate on the signal power and length of a single-channel coherent single-span communication line (100 Gbit s^-1) with polarisation division multiplexing

    SciTech Connect

    Gurkin, N V; Konyshev, V A; Novikov, A G; Treshchikov, V N; Ubaydullaev, R R

    2015-01-31

    We have studied experimentally and using numerical simulations and a phenomenological analytical model the dependences of the bit error rate (BER) on the signal power and length of a coherent single-span communication line with transponders employing polarisation division multiplexing and four-level phase modulation (100 Gbit s^-1 DP-QPSK format). In comparing the data of the experiment, numerical simulations and theoretical analysis, we have found two optimal powers: the power at which the BER is minimal and the power at which the fade margin in the line is maximal. We have derived and analysed the dependences of the BER on the optical signal power at the fibre line input and the dependence of the admissible input signal power range for implementation of the communication lines with a length from 30 – 50 km up to a maximum length of 250 km. (optical transmission of information)

  19. Dependence of the bit error rate on the signal power and length of a single-channel coherent single-span communication line (100 Gbit s-1) with polarisation division multiplexing

    NASA Astrophysics Data System (ADS)

    Gurkin, N. V.; Konyshev, V. A.; Nanii, O. E.; Novikov, A. G.; Treshchikov, V. N.; Ubaydullaev, R. R.

    2015-01-01

    We have studied experimentally and using numerical simulations and a phenomenological analytical model the dependences of the bit error rate (BER) on the signal power and length of a coherent single-span communication line with transponders employing polarisation division multiplexing and four-level phase modulation (100 Gbit s-1 DP-QPSK format). In comparing the data of the experiment, numerical simulations and theoretical analysis, we have found two optimal powers: the power at which the BER is minimal and the power at which the fade margin in the line is maximal. We have derived and analysed the dependences of the BER on the optical signal power at the fibre line input and the dependence of the admissible input signal power range for implementation of the communication lines with a length from 30 - 50 km up to a maximum length of 250 km.
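
    The existence of an optimum launch power can be illustrated with a toy Q-factor model in which one noise contribution is independent of signal power (so the signal-to-noise ratio improves with power) and a nonlinear contribution grows rapidly with power; the functional forms and constants below are illustrative assumptions, not the paper's phenomenological model.

        from math import erfc, sqrt

        for p_dbm in range(-6, 13, 3):          # launch power sweep, dBm
            p = 10 ** (p_dbm / 10)              # power in mW
            ase_noise = 0.05                    # power-independent noise term (arbitrary units)
            nonlinear_noise = 5e-4 * p ** 3     # grows rapidly with launch power (arbitrary)
            q = p / sqrt(ase_noise + nonlinear_noise)
            ber = 0.5 * erfc(q / sqrt(2))
            print(f"{p_dbm:+3d} dBm: Q = {q:5.1f}, BER ~ {ber:.1e}")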

  20. Specific Impulse and Mass Flow Rate Error

    NASA Technical Reports Server (NTRS)

    Gregory, Don A.

    2005-01-01

    Specific impulse is defined in words in many ways. Very early in any text on rocket propulsion, a phrase similar to "specific impulse is the thrust force per unit propellant weight flow per second" will be found (2). It is only after seeing the mathematics written down that the definition means something physically to the scientists and engineers responsible for either measuring it or using someone's value for it.
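
    The definition quoted above corresponds to Isp = F / (mdot * g0), with F the thrust, mdot the propellant mass flow rate and g0 standard gravity. The sketch below evaluates it and shows how a fractional error in the measured mass flow rate maps (to first order, with opposite sign) onto the specific-impulse error; the numbers are illustrative.

        G0 = 9.80665                    # standard gravity, m/s^2

        def specific_impulse(thrust_n, mass_flow_kg_s):
            """Isp in seconds: thrust per unit propellant weight flow."""
            return thrust_n / (mass_flow_kg_s * G0)

        thrust = 2.0e6                  # N (illustrative)
        mdot_true = 800.0               # kg/s (illustrative)
        isp_true = specific_impulse(thrust, mdot_true)

        for mdot_error in (-0.02, -0.01, 0.01, 0.02):    # fractional mass-flow-rate error
            isp = specific_impulse(thrust, mdot_true * (1 + mdot_error))
            print(f"mdot error {mdot_error:+.0%}: Isp {isp:.1f} s "
                  f"({isp / isp_true - 1:+.2%} vs true {isp_true:.1f} s)")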

  1. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  2. Dual processing and diagnostic errors.

    PubMed

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.

  3. Acceptability of BCG vaccination.

    PubMed

    Mande, R

    1977-01-01

    The acceptability of BCG vaccination varies a great deal according to the country and to the period when the vaccine is given. The incidence of complications has not always a direct influence on this acceptability, which depends, for a very large part, on the risk of tuberculosis in a given country at a given time.

  4. ATLAS ACCEPTANCE TEST

    SciTech Connect

    Cochrane, J. C. , Jr.; Parker, J. V.; Hinckley, W. B.; Hosack, K. W.; Mills, D.; Parsons, W. M.; Scudder, D. W.; Stokes, J. L.; Tabaka, L. J.; Thompson, M. C.; Wysocki, Frederick Joseph; Campbell, T. N.; Lancaster, D. L.; Tom, C. Y.

    2001-01-01

    The acceptance test program for Atlas, a 23 MJ pulsed power facility for use in the Los Alamos High Energy Density Hydrodynamics program, has been completed. Completion of this program officially releases Atlas from the construction phase and readies it for experiments. Details of the acceptance test program results and of machine capabilities for experiments will be presented.

  5. Some legal implications of pilot error.

    PubMed

    Hill, I R; Pile, R L

    1982-07-01

    Pilots are not expected to be superhuman beings, and it must therefore be accepted that they will make mistakes, some of which may have disastrous consequences. If it can be proven that the error equates with negligence in the pursuance of their duties, then they may be subjected to the full force of the Law. However, because pilot error is a multifactorial phenomenon, which is imperfectly understood, the initiation of legal proceedings may be difficult. If a penalty is to be imposed, the law demands a degree of proof which may be greater than that demanded by some investigating authorities, before implementing the appellation 'pilot error'.

  6. Impact of Measurement Error on Synchrophasor Applications

    SciTech Connect

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  7. Who accepts first aid training?

    PubMed

    Pearn, J; Dawson, B; Leditschke, F; Petrie, G; Nixon, J

    1980-09-01

    The percentage of individuals trained in first aid skills in the general community is inadequate. We report here a study to investigate factors which influence motivation to accept voluntary training in first aid. A group of 700 randomly selected owners of inground swimming pools (a parental high-risk group) was offered a course of formal first aid instruction. Nine per cent attended the offered training course. The time commitment involved in traditional courses (eight training nights spread over four weeks) is not a deterrent, the same percentage accepting such courses as that who accept a course of one night's instruction. Cost is an important deterrent factor, consumer resistance rising over 15 cost units (one cost unit = the price of a loaf of bread). The level of competent first aid training within the community can be raised by (a) keeping to traditional course content, but (b) by ensuring a higher acceptance rate of first aid courses by a new approach to publicity campaigns, to convince prospective students of the real worth of first aid training. Questions concerning who should be taught first aid, and factors influencing motivation, are discussed.

  8. Acceptance threshold hypothesis is supported by chemical similarity of cuticular hydrocarbons in a stingless bee, Melipona asilvai.

    PubMed

    Nascimento, D L; Nascimento, F S

    2012-11-01

    The ability to discriminate nestmates from non-nestmates in insect societies is essential to protect colonies from conspecific invaders. The acceptance threshold hypothesis predicts that organisms whose recognition systems classify recipients without errors should optimize the balance between acceptance and rejection. In this process, cuticular hydrocarbons play an important role as cues of recognition in social insects. The aims of this study were to determine whether guards exhibit a restrictive level of rejection towards chemically distinct individuals, becoming more permissive during the encounters with either nestmate or non-nestmate individuals bearing chemically similar profiles. The study demonstrates that Melipona asilvai (Hymenoptera: Apidae: Meliponini) guards exhibit a flexible system of nestmate recognition according to the degree of chemical similarity between the incoming forager and its own cuticular hydrocarbons profile. Guards became less restrictive in their acceptance rates when they encounter non-nestmates with highly similar chemical profiles, which they probably mistake for nestmates, hence broadening their acceptance level.

  9. Acceptance threshold hypothesis is supported by chemical similarity of cuticular hydrocarbons in a stingless bee, Melipona asilvai.

    PubMed

    Nascimento, D L; Nascimento, F S

    2012-11-01

    The ability to discriminate nestmates from non-nestmates in insect societies is essential to protect colonies from conspecific invaders. The acceptance threshold hypothesis predicts that organisms whose recognition systems classify recipients without errors should optimize the balance between acceptance and rejection. In this process, cuticular hydrocarbons play an important role as cues of recognition in social insects. The aims of this study were to determine whether guards exhibit a restrictive level of rejection towards chemically distinct individuals, becoming more permissive during the encounters with either nestmate or non-nestmate individuals bearing chemically similar profiles. The study demonstrates that Melipona asilvai (Hymenoptera: Apidae: Meliponini) guards exhibit a flexible system of nestmate recognition according to the degree of chemical similarity between the incoming forager and its own cuticular hydrocarbons profile. Guards became less restrictive in their acceptance rates when they encounter non-nestmates with highly similar chemical profiles, which they probably mistake for nestmates, hence broadening their acceptance level. PMID:23053920

  10. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    NASA Astrophysics Data System (ADS)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  11. Field error lottery

    NASA Astrophysics Data System (ADS)

    James Elliott, C.; McVey, Brian D.; Quimby, David C.

    1991-07-01

    The level of field errors in a free electron laser (FEL) is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is use of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond convenient mechanical tolerances of ± 25 μm, and amelioration of these may occur by a procedure using direct measurement of the magnetic fields at assembly time.

  12. Field error lottery

    NASA Astrophysics Data System (ADS)

    Elliott, C. James; McVey, Brian D.; Quimby, David C.

    1990-11-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement, and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time.

  13. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  14. Acceptance procedures: Microfilm printer

    NASA Technical Reports Server (NTRS)

    Lockwood, H. E.

    1973-01-01

    Acceptance tests were made for a special order automatic additive color microfilm printer. Tests include film capacity, film transport, resolution, illumination uniformity, exposure range checks, and color cuing considerations.

  15. Explaining errors in children's questions.

    PubMed

    Rowland, Caroline F

    2007-07-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust. B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Liguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.

  16. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  17. Detection and avoidance of errors in computer software

    NASA Technical Reports Server (NTRS)

    Kinsler, Les

    1989-01-01

    The acceptance test errors of a computer software project were examined to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project is approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during the acceptance testing. These acceptance test errors were first categorized into methods of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The number of programming errors at the beginning of acceptance testing can be significantly reduced. The results of the existing development methodology are examined for ways of improvement. A basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness in avoiding and detecting errors.

  18. Immediate error correction process following sleep deprivation.

    PubMed

    Hsieh, Shulan; Cheng, I-Chen; Tsai, Ling-Ling

    2007-06-01

    Previous studies have suggested that one night of sleep deprivation decreases frontal lobe metabolic activity, particularly in the anterior cingulated cortex (ACC), resulting in decreased performance in various executive function tasks. This study thus attempted to address whether sleep deprivation impaired the executive function of error detection and error correction. Sixteen young healthy college students (seven women, nine men, with ages ranging from 18 to 23 years) participated in this study. Participants performed a modified letter flanker task and were instructed to make immediate error corrections on detecting performance errors. Event-related potentials (ERPs) during the flanker task were obtained using a within-subject, repeated-measure design. The error negativity or error-related negativity (Ne/ERN) and the error positivity (Pe) seen immediately after errors were analyzed. The results show that the amplitude of the Ne/ERN was reduced significantly following sleep deprivation. Reduction also occurred for error trials with subsequent correction, indicating that sleep deprivation influenced error correction ability. This study further demonstrated that the impairment in immediate error correction following sleep deprivation was confined to specific stimulus types, with both Ne/ERN and behavioral correction rates being reduced only for trials in which flanker stimuli were incongruent with the target stimulus, while the response to the target was compatible with that of the flanker stimuli following sleep deprivation. The results thus warrant future systematic investigation of the interaction between stimulus type and error correction following sleep deprivation. PMID:17542943

  19. Drug Errors in Anaesthesiology

    PubMed Central

    Jain, Rajnish Kumar; Katiyar, Sarika

    2009-01-01

    Summary Medication errors are a leading cause of morbidity and mortality in hospitalized patients. The incidence of these drug errors during anaesthesia is not certain. They impose a considerable financial burden to health care systems apart from the patient losses. Common causes of these errors and their prevention is discussed. PMID:20640103

  20. Sweeteners: consumer acceptance in tea.

    PubMed

    Sprowl, D J; Ehrcke, L A

    1984-09-01

    Sucrose, fructose, aspartame, and saccharin were compared for consumer preference, aftertaste, and cost to determine acceptability of the sweeteners. A 23-member taste panel evaluated tea samples for preference and aftertaste. Mean retail cost of the sweeteners were calculated and adjusted to take sweetening power into consideration. Sucrose was the least expensive and most preferred sweetener. No significant difference in preference for fructose and aspartame was found, but both sweeteners were rated significantly lower than sucrose. Saccharin was the most disliked sweetener. Fructose was the most expensive sweetener and aspartame the next most expensive. Scores for aftertaste followed the same pattern as those for preference. Thus, a strong, unpleasant aftertaste seems to be associated with a dislike for a sweetener. From the results of this study, it seems that there is no completely acceptable low-calorie substitute for sucrose available to consumers.

  1. Measurement of diffusion coefficients from solution rates of bubbles

    NASA Technical Reports Server (NTRS)

    Krieger, I. M.

    1979-01-01

    The rate of solution of a stationary bubble is limited by the diffusion of dissolved gas molecules away from the bubble surface. Diffusion coefficients computed from measured rates of solution give mean values higher than accepted literature values, with standard errors as high as 10% for a single observation. Better accuracy is achieved with sparingly soluble gases, small bubbles, and highly viscous liquids. Accuracy correlates with the Grashof number, indicating that free convection is the major source of error. Accuracy should, therefore, be greatly increased in a gravity-free environment. The fact that the bubble will need no support is an additional important advantage of Spacelab for this measurement.
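
    As a rough sketch of the kind of inversion described (assuming a quasi-static, diffusion-limited dissolution law of the Epstein–Plesset type; the relation and all numbers below are illustrative assumptions, not values from the record), a diffusion coefficient can be backed out of a measured shrinkage rate.

```python
# Assumed quasi-static model: dR/dt = -D * (c_sat - c_far) / (rho_gas * R),
# inverted for D. All numerical values are illustrative placeholders.

def diffusion_coefficient(radius_m, drdt_m_s, rho_gas, c_sat, c_far):
    """Estimate D [m^2/s] from bubble radius [m], measured dR/dt [m/s] (negative for
    a shrinking bubble), gas density in the bubble [kg/m^3], and dissolved-gas
    concentrations at the surface (saturation) and far from the bubble [kg/m^3]."""
    return -rho_gas * radius_m * drdt_m_s / (c_sat - c_far)

if __name__ == "__main__":
    D = diffusion_coefficient(radius_m=0.5e-3, drdt_m_s=-2.0e-8,
                              rho_gas=1.2, c_sat=0.023, c_far=0.0)
    print(f"D ~ {D:.2e} m^2/s")
```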

  2. Error reduction when prescribing neonatal parenteral nutrition.

    PubMed

    Brown, Cynthia L; Garrison, Nancy A; Hutchison, Alastair A

    2007-08-01

    A neonatal intensive care unit audit of 204 parenteral nutrition (PN) orders revealed a 27.9% PN prescribing error rate, with errors by pediatric residents exceeding those by neonatal nurse practitioners (NNPs) (39% versus 16%; P < 0.001). Our objective was to reduce the PN prescribing error rate by implementing an ordering improvement process. An interactive computerized PN worksheet, used voluntarily, was introduced and its impact analyzed in a retrospective cross-sectional study. A time management study was performed. Analysis of 480 PN orders revealed that the PN prescribing error rate was 11.7%, with no difference in error rates between pediatric residents and NNPs (12.3% versus 10.5%). Use of the interactive computerized PN worksheet was associated with a reduction in the prescribing error rate from 14.5 to 6.8% for all PN orders ( P = 0.016) and from 29.3 to 9.6% for peripheral PN orders ( P = 0.002). All 12 errors that occurred in the 177 PN prescriptions completed using the computerized PN worksheet were due to avoidable data entry or transcription mistakes. The time management study led to system improvements in PN ordering. We recommend that an interactive computerized PN worksheet be used to prescribe peripheral PN and thus reduce errors.
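
    As a side note on the kind of comparison quoted above (39% versus 16%, P < 0.001), a two-group error-rate comparison can be run as a chi-square test on the raw counts; the counts below are invented examples, not the audit's data.

```python
# Illustrative chi-square comparison of two prescribing-error proportions.
# Counts are made-up examples, not the audit's actual data.
from scipy.stats import chi2_contingency

errors_group_a, total_a = 39, 100   # e.g. resident orders (example numbers)
errors_group_b, total_b = 16, 100   # e.g. nurse-practitioner orders (example numbers)

table = [[errors_group_a, total_a - errors_group_a],
         [errors_group_b, total_b - errors_group_b]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```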

  3. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  4. [Medical errors in obstetrics].

    PubMed

    Marek, Z

    1984-08-01

    Errors in medicine may fall into 3 main categories: 1) medical errors made only by physicians, 2) technical errors made by physicians and other health care specialists, and 3) organizational errors associated with mismanagement of medical facilities. This classification of medical errors, as well as the definition and treatment of them, fully applies to obstetrics. However, the difference between obstetrics and other fields of medicine stems from the fact that an obstetrician usually deals with healthy women. Conversely, professional risk in obstetrics is very high, as errors and malpractice can lead to very serious complications. Observations show that the most frequent obstetrical errors occur in induced abortions, diagnosis of pregnancy, selection of optimal delivery techniques, treatment of hemorrhages, and other complications. Therefore, the obstetrician should be prepared to use intensive care procedures similar to those used for resuscitation.

  5. [Errors in laboratory daily practice].

    PubMed

    Larrose, C; Le Carrer, D

    2007-01-01

    Legislation set by GBEA (Guide de bonne exécution des analyses) requires that, before performing analysis, the laboratory directors have to check both the nature of the samples and the patients' identity. The data processing of requisition forms, which identifies key errors, was established in 2000 and in 2002 by the specialized biochemistry laboratory, also with the contribution of the reception centre for biological samples. The laboratories follow strict criteria defining acceptability as a starting point for the reception centre to then check requisition forms and biological samples. All errors are logged into the laboratory database and analysis reports are sent to the care unit specifying the problems and the consequences they have on the analysis. The data is then assessed by the laboratory directors to produce monthly or annual statistical reports. This indicates the number of errors, which are then indexed to patient files to reveal the specific problem areas, therefore allowing the laboratory directors to teach the nurses and enable corrective action.

  6. Spacecraft and propulsion technician error

    NASA Astrophysics Data System (ADS)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  7. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.
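
    A schematic sketch of the decision logic described (an interpretation for illustration, not the patented implementation) might look like the following.

```python
# Schematic sketch (assumption, not the patented method): switch neural-network
# plant-model adaptation on only when a component is both excursing and failing
# to return toward its reference value along the expected controller asymptote.

def should_adapt_model(value, reference, expected_value, excursion_tol, tracking_tol):
    """True when the component is off-reference AND its recovery is not tracking
    the expected asymptote within tolerance."""
    excursing = abs(value - reference) > excursion_tol
    off_asymptote = abs(value - expected_value) > tracking_tol
    return excursing and off_asymptote

# Example: the component is 2.0 units off reference and 0.6 units off the expected
# recovery curve, with tolerances of 0.5 each -> adaptation is triggered.
print(should_adapt_model(value=2.0, reference=0.0, expected_value=1.4,
                         excursion_tol=0.5, tracking_tol=0.5))
```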

  8. Sequencing error correction without a reference genome

    PubMed Central

    2013-01-01

    Background Next (second) generation sequencing is an increasingly important tool for many areas of molecular biology, however, care must be taken when interpreting its output. Even a low error rate can cause a large number of errors due to the high number of nucleotides being sequenced. Identifying sequencing errors from true biological variants is a challenging task. For organisms without a reference genome this difficulty is even more challenging. Results We have developed a method for the correction of sequencing errors in data from the Illumina Solexa sequencing platforms. It does not require a reference genome and is of relevance for microRNA studies, unsequenced genomes, variant detection in ultra-deep sequencing and even for RNA-Seq studies of organisms with sequenced genomes where RNA editing is being considered. Conclusions The derived error model is novel in that it allows different error probabilities for each position along the read, in conjunction with different error rates depending on the particular nucleotides involved in the substitution, and does not force these effects to behave in a multiplicative manner. The model provides error rates which capture the complex effects and interactions of the three main known causes of sequencing error associated with the Illumina platforms. PMID:24350580
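
    The record emphasises position-dependent and substitution-dependent error rates that are not forced to combine multiplicatively; one simple way to realise that property (an illustrative assumption, not the authors' implementation) is a logistic model with an interaction term.

```python
# Illustrative error model (not the paper's implementation): per-base error
# probability that depends jointly on read position and substitution type,
# parameterized with a logistic link so the two effects need not multiply.
import math

def error_probability(position_effect: float, substitution_effect: float,
                      interaction: float, intercept: float = -6.0) -> float:
    """Logistic model: logit(p) = intercept + position + substitution + interaction."""
    logit = intercept + position_effect + substitution_effect + interaction
    return 1.0 / (1.0 + math.exp(-logit))

# Example: a late-cycle position combined with a particular substitution type.
print(f"{error_probability(position_effect=1.5, substitution_effect=0.8, interaction=0.4):.4f}")
```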

  9. Slowing after Observed Error Transfers across Tasks

    PubMed Central

    Wang, Lijun; Pan, Weigang; Tan, Jinfeng; Liu, Congcong; Chen, Antao

    2016-01-01

    After committing an error, participants tend to perform more slowly. This phenomenon is called post-error slowing (PES). Although previous studies have explored the PES effect in the context of observed errors, the issue as to whether the slowing effect generalizes across tasksets remains unclear. Further, the generation mechanisms of PES following observed errors must be examined. To address the above issues, we employed an observation-execution task in three experiments. During each trial, participants were required to mentally observe the outcomes of their partners in the observation task and then to perform their own key-press according to the mapping rules in the execution task. In Experiment 1, the same tasksets were utilized in the observation task and the execution task, and three error rate conditions (20%, 50% and 80%) were established in the observation task. The results revealed that the PES effect after observed errors was obtained in all three error rate conditions, replicating and extending previous studies. In Experiment 2, distinct stimuli and response rules were utilized in the observation task and the execution task. The result pattern was the same as that in Experiment 1, suggesting that the PES effect after observed errors was a generic adjustment process. In Experiment 3, the response deadline was shortened in the execution task to rule out the ceiling effect, and two error rate conditions (50% and 80%) were established in the observation task. The PES effect after observed errors was still obtained in the 50% and 80% error rate conditions. However, the accuracy in the post-observed error trials was comparable to that in the post-observed correct trials, suggesting that the slowing effect and improved accuracy did not rely on the same underlying mechanism. Current findings indicate that the occurrence of PES after observed errors is not dependent on the probability of observed errors, consistent with the assumption of cognitive control account

  10. SEU induced errors observed in microprocessor systems

    SciTech Connect

    Asenek, V.; Underwood, C.; Oldfield, M.; Velazco, R.; Rezgui, S.; Cheynet, P.; Ecoffet, R.

    1998-12-01

    In this paper, the authors present software tools for predicting the rate and nature of observable SEU induced errors in microprocessor systems. These tools are built around a commercial microprocessor simulator and are used to analyze real satellite application systems. Results obtained from simulating the nature of SEU induced errors are shown to correlate with ground-based radiation test data.

  11. Continuous error correction for Ising anyons

    NASA Astrophysics Data System (ADS)

    Hutter, Adrian; Wootton, James R.

    2016-04-01

    Quantum gates in topological quantum computation are performed by braiding non-Abelian anyons. These braiding processes can presumably be performed with very low error rates. However, to make a topological quantum computation architecture truly scalable, even rare errors need to be corrected. Error correction for non-Abelian anyons is complicated by the fact that it needs to be performed on a continuous basis, and further errors may occur while we are correcting existing ones. Here, we prove the feasibility of this task, establishing non-Abelian anyons as a viable platform for scalable quantum computation. We thereby focus on Ising anyons as the most prominent example of non-Abelian anyons and show that for these a finite error rate can indeed be corrected continuously. There is a threshold error rate pc > 0 such that for all error rates p < pc the error per time step can be made exponentially small in the distance of a logical qubit.

  12. Likelihood-based genetic mark-recapture estimates when genotype samples are incomplete and contain typing errors.

    PubMed

    Macbeth, Gilbert M; Broderick, Damien; Ovenden, Jennifer R; Buckworth, Rik C

    2011-11-01

    Genotypes produced from samples collected non-invasively in harsh field conditions often lack the full complement of data from the selected microsatellite loci. The application to genetic mark-recapture methodology in wildlife species can therefore be prone to misidentifications leading to both 'true non-recaptures' being falsely accepted as recaptures (Type I errors) and 'true recaptures' being undetected (Type II errors). Here we present a new likelihood method that allows every pairwise genotype comparison to be evaluated independently. We apply this method to determine the total number of recaptures by estimating and optimising the balance between Type I errors and Type II errors. We show through simulation that the standard error of recapture estimates can be minimised through our algorithms. Interestingly, the precision of our recapture estimates actually improved when we included individuals with missing genotypes, as this increased the number of pairwise comparisons potentially uncovering more recaptures. Simulations suggest that the method is tolerant to per locus error rates of up to 5% per locus and can theoretically work in datasets with as little as 60% of loci genotyped. Our methods can be implemented in datasets where standard mismatch analyses fail to distinguish recaptures. Finally, we show that by assigning a low Type I error rate to our matching algorithms we can generate a dataset of individuals of known capture histories that is suitable for the downstream analysis with traditional mark-recapture methods.
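
    A much-simplified sketch of a pairwise likelihood comparison of this kind (an illustration only; it ignores allele frequencies and other details of the published method) scores matches and mismatches with a per-locus typing-error rate and skips missing loci.

```python
# Simplified sketch (not the authors' algorithm): score a pairwise genotype comparison
# with a per-locus typing-error rate, skipping loci that are missing in either sample,
# and return a log-likelihood ratio of "same individual" vs "different individuals".
# The per-locus random-match probability below is a crude placeholder.
import math

def pairwise_llr(geno_a, geno_b, error_rate=0.05, random_match_prob=0.2):
    llr = 0.0
    for a, b in zip(geno_a, geno_b):
        if a is None or b is None:      # missing genotype at this locus
            continue
        if a == b:
            llr += math.log((1 - error_rate) / random_match_prob)
        else:
            llr += math.log(error_rate / (1 - random_match_prob))
    return llr

# Example: two 6-locus genotypes with one missing locus and one mismatch.
print(f"{pairwise_llr([1, 2, 3, 4, None, 6], [1, 2, 3, 4, 5, 7]):.2f}")
```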

  13. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    NASA Astrophysics Data System (ADS)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only the two metrics measure the characteristics of the probability distributions of modeling errors differently, but also the effects of these characteristics on the overall expected error are different. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
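
    A small Monte Carlo check of the squared-error decomposition discussed above (a toy example with arbitrary parameters, not the study's data) contrasts it with the absolute error of the same toy estimator.

```python
# Monte Carlo illustration (not the study's analysis): decompose expected squared
# error into bias^2 + variance + noise for a toy estimator, and compare the result
# with the expected absolute error of the same estimator.
import numpy as np

rng = np.random.default_rng(1)
truth = 10.0
noise_sd = 1.0    # observation noise
est_bias = 0.5    # systematic error of the toy model
est_sd = 0.8      # model sensitivity (variance term)

n = 200_000
observations = truth + rng.normal(0.0, noise_sd, n)
predictions = truth + est_bias + rng.normal(0.0, est_sd, n)

sq_err = np.mean((observations - predictions) ** 2)
decomposed = est_bias**2 + est_sd**2 + noise_sd**2
abs_err = np.mean(np.abs(observations - predictions))

print(f"mean squared error : {sq_err:.3f} (bias^2 + var + noise = {decomposed:.3f})")
print(f"mean absolute error: {abs_err:.3f}")
```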

  14. Students Accepted on Probation.

    ERIC Educational Resources Information Center

    Lorberbaum, Caroline S.

    This report is a justification of the Dalton Junior College admissions policy designed to help students who had had academic and/or social difficulties at other schools. These students were accepted on probation, their problems carefully analyzed, and much effort devoted to those with low academic potential. They received extensive academic and…

  15. Approaches to acceptable risk

    SciTech Connect

    Whipple, C.

    1997-04-30

    Several alternative approaches to address the question "How safe is safe enough?" are reviewed and an attempt is made to apply the reasoning behind these approaches to the issue of acceptability of radiation exposures received in space. The approaches to the issue of the acceptability of technological risk described here are primarily analytical, and are drawn from examples in the management of environmental health risks. These include risk-based approaches, in which specific quantitative risk targets determine the acceptability of an activity, and cost-benefit and decision analysis, which generally focus on the estimation and evaluation of risks, benefits and costs, in a framework that balances these factors against each other. These analytical methods tend by their quantitative nature to emphasize the magnitude of risks, costs and alternatives, and to downplay other factors, especially those that are not easily expressed in quantitative terms, that affect acceptance or rejection of risk. Such other factors include the issues of risk perceptions and how and by whom risk decisions are made.

  16. Why was Relativity Accepted?

    NASA Astrophysics Data System (ADS)

    Brush, S. G.

    Historians of science have published many studies of the reception of Einstein's special and general theories of relativity. Based on a review of these studies, and my own research on the role of the light-bending prediction in the reception of general relativity, I discuss the role of three kinds of reasons for accepting relativity (1) empirical predictions and explanations; (2) social-psychological factors; and (3) aesthetic-mathematical factors. According to the historical studies, acceptance was a three-stage process. First, a few leading scientists adopted the special theory for aesthetic-mathematical reasons. In the second stage, their enthusiastic advocacy persuaded other scientists to work on the theory and apply it to problems currently of interest in atomic physics. The special theory was accepted by many German physicists by 1910 and had begun to attract some interest in other countries. In the third stage, the confirmation of Einstein's light-bending prediction attracted much public attention and forced all physicists to take the general theory of relativity seriously. In addition to light-bending, the explanation of the advance of Mercury's perihelion was considered strong evidence by theoretical physicists. The American astronomers who conducted successful tests of general relativity became defenders of the theory. There is little evidence that relativity was `socially constructed' but its initial acceptance was facilitated by the prestige and resources of its advocates.

  17. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in the success of finding the primary cause of error in 98% of over 500 system dumps.

  18. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
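
    A schematic sketch of the idea (an interpretation for illustration, not the patented implementation): run a deterministic, compute-heavy workload that heats the processor and compare its output against a reference result; any mismatch flags a hardware error that occurred somewhere during the run.

```python
# Sketch of output-comparison error detection under a heating workload (assumption,
# not the patented implementation). The workload is deterministic, so any difference
# from the reference result indicates a hardware error during the run.
import hashlib

def stress_workload(iterations: int = 1_000_000) -> str:
    """Deterministic hash-chaining loop, heavy enough to exercise the CPU."""
    digest = b"seed"
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

def detect_hardware_error(reference_output: str) -> bool:
    return stress_workload() != reference_output

if __name__ == "__main__":
    reference = stress_workload()   # ideally computed on known-good hardware
    print("hardware error detected:", detect_hardware_error(reference))
```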

  19. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principle design agent as is increasingly more common for poorly testable high performance space systems.

  20. Accept/decline decision module for the liver simulated allocation model.

    PubMed

    Kim, Sang-Phil; Gupta, Diwakar; Israni, Ajay K; Kasiske, Bertram L

    2015-03-01

    Simulated allocation models (SAMs) are used to evaluate organ allocation policies. An important component of SAMs is a module that decides whether each potential recipient will accept an offered organ. The objective of this study was to develop and test accept-or-decline classifiers based on several machine-learning methods in an effort to improve the SAM for liver allocation. Feature selection and imbalance correction methods were tested and best approaches identified for application to organ transplant data. Then, we used 2011 liver match-run data to compare classifiers based on logistic regression, support vector machines, boosting, classification and regression trees, and Random Forests. Finally, because the accept-or-decline module will be embedded in a simulation model, we also developed an evaluation tool for comparing performance of predictors, which we call sample-path accuracy. The Random Forest method resulted in the smallest overall error rate, and boosting techniques had greater accuracy when both sensitivity and specificity were simultaneously considered important. Our comparisons show that no method dominates all others on all performance measures of interest. A logistic regression-based classifier is easy to implement and allows for pinpointing the contribution of each feature toward the probability of acceptance. Other methods we tested did not have a similar interpretation. The Scientific Registry of Transplant Recipients decided to use the logistic regression-based accept-decline decision module in the next generation of liver SAM.
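
    A minimal sketch of a logistic-regression accept/decline classifier with a simple class-imbalance correction, in the spirit of the module described above (feature names and data are invented placeholders, not SRTR variables), is given below.

```python
# Minimal sketch: logistic-regression accept/decline classifier with balanced class
# weights. All features and labels are synthetic placeholders, not SRTR data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000
X = np.column_stack([
    rng.normal(size=n),       # e.g. a donor risk score (placeholder)
    rng.normal(size=n),       # e.g. a candidate severity score (placeholder)
    rng.integers(0, 2, n),    # e.g. a regional-share indicator (placeholder)
])
# Synthetic, imbalanced accept/decline labels (roughly 10% accepts).
logit = -2.5 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.7 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)
print("coefficients:", clf.coef_.round(2), "intercept:", clf.intercept_.round(2))
print("predicted accept probability for one offer:", clf.predict_proba(X[:1])[0, 1].round(3))
```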

  1. The incidence of diagnostic error in medicine.

    PubMed

    Graber, Mark L

    2013-10-01

    A wide variety of research studies suggest that breakdowns in the diagnostic process result in a staggering toll of harm and patient deaths. These include autopsy studies, case reviews, surveys of patients and physicians, voluntary reporting systems, using standardised patients, second reviews, diagnostic testing audits and closed claims reviews. Although these different approaches provide important information and unique insights regarding diagnostic errors, each has limitations and none is well suited to establishing the incidence of diagnostic error in actual practice, or the aggregate rate of error and harm. We argue that being able to measure the incidence of diagnostic error is essential to enable research studies on diagnostic error, and to initiate quality improvement projects aimed at reducing the risk of error and harm. Three approaches appear most promising in this regard: (1) using 'trigger tools' to identify from electronic health records cases at high risk for diagnostic error; (2) using standardised patients (secret shoppers) to study the rate of error in practice; (3) encouraging both patients and physicians to voluntarily report errors they encounter, and facilitating this process. PMID:23771902
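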

  2. TU-C-BRE-07: Quantifying the Clinical Impact of VMAT Delivery Errors Relative to Prior Patients’ Plans and Adjusted for Anatomical Differences

    SciTech Connect

    Stanhope, C; Wu, Q; Yuan, L; Liu, J; Hood, R; Yin, F; Adamson, J

    2014-06-15

    Purpose: There is increased interest in the Radiation Oncology Physics community regarding sensitivity of pre-treatment IMRT/VMAT QA to delivery errors. Consequently, tools mapping pre-treatment QA to the patient DVH have been developed. However, the quantity of plan degradation that is acceptable remains uncertain. Using DVHs adapted from prior patients’ plans, we developed a technique to determine the magnitude of various delivery errors required to degrade a treatment plan to outside the clinically accepted range. Methods: DVHs for relevant organs at risk were adapted from a population of prior patients’ plans using a machine learning algorithm to establish the clinically acceptable DVH range specific to the patient’s anatomy. We applied this technique to six low-risk prostate cancer patients treated with single-arc VMAT and compared error-induced DVH changes to the adapted DVHs to determine the magnitude of error required to push the plan outside of the acceptable range. The procedure follows: (1) Errors (systematic and random shifts of MLCs, gantry-MLC desynchronization, dose rate fluctuations, etc.) were simulated and degraded DVHs calculated using the Varian Eclipse TPS. (2) Adapted DVHs and acceptable ranges for DVHs were established. (3) Relevant dosimetric indices and corresponding acceptable ranges were calculated from the DVHs. Key indices included NTCP (Lyman-Kutcher-Burman Model) and QUANTEC’s dose-volume objectives of V75Gy≤0.15 for the rectum and V75Gy≤0.25 for the bladder. Results: Degradations to the clinical plan became “unacceptable” for 19±29mm and 1.9±2.0mm systematic outward shifts of a single leaf and leaf bank, respectively. All other simulated errors fell within the acceptable range. Conclusion: Utilizing machine learning and prior patients’ plans one can predict a clinically acceptable range of DVH degradation for a specific patient. Comparing error-induced DVH degradations to this range, it is shown that single
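
    For reference, the Lyman-Kutcher-Burman NTCP index mentioned above can be computed from a differential DVH as sketched below (a generic illustration; the parameter values are placeholders, not those used in the study).

```python
# Generic Lyman-Kutcher-Burman NTCP sketch: NTCP = Phi((gEUD - TD50) / (m * TD50)),
# with gEUD = (sum_i v_i * D_i**(1/n))**n over the differential DVH bins.
# Parameter values (n, m, TD50) are illustrative placeholders.
import numpy as np
from scipy.stats import norm

def lkb_ntcp(dose_gy, frac_volume, n=0.09, m=0.13, td50_gy=76.9):
    geud = np.sum(np.asarray(frac_volume) * np.asarray(dose_gy) ** (1.0 / n)) ** n
    t = (geud - td50_gy) / (m * td50_gy)
    return norm.cdf(t)

# Toy two-bin DVH: 30% of the organ at 70 Gy, 70% at 30 Gy.
print(f"NTCP = {lkb_ntcp([70.0, 30.0], [0.3, 0.7]):.3f}")
```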

  3. Numerical modelling errors in electrical impedance tomography.

    PubMed

    Dehghani, Hamid; Soleimani, Manuchehr

    2007-07-01

    Electrical impedance tomography (EIT) is a non-invasive technique that aims to reconstruct images of internal impedance values of a volume of interest, based on measurements taken on the external boundary. Since most reconstruction algorithms rely on model-based approximations, it is important to ensure numerical accuracy for the model being used. This work demonstrates and highlights the importance of accurate modelling in terms of model discretization (meshing) and shows that although the predicted boundary data from a forward model may be within an accepted error, the calculated internal field, which is often used for image reconstruction, may contain errors, based on the mesh quality that will result in image artefacts.

  4. Guidelines for the assessment and acceptance of potential brain-dead organ donors

    PubMed Central

    Westphal, Glauco Adrieno; Garcia, Valter Duro; de Souza, Rafael Lisboa; Franke, Cristiano Augusto; Vieira, Kalinca Daberkow; Birckholz, Viviane Renata Zaclikevis; Machado, Miriam Cristine; de Almeida, Eliana Régia Barbosa; Machado, Fernando Osni; Sardinha, Luiz Antônio da Costa; Wanzuita, Raquel; Silvado, Carlos Eduardo Soares; Costa, Gerson; Braatz, Vera; Caldeira Filho, Milton; Furtado, Rodrigo; Tannous, Luana Alves; de Albuquerque, André Gustavo Neves; Abdala, Edson; Gonçalves, Anderson Ricardo Roman; Pacheco-Moreira, Lúcio Filgueiras; Dias, Fernando Suparregui; Fernandes, Rogério; Giovanni, Frederico Di; de Carvalho, Frederico Bruzzi; Fiorelli, Alfredo; Teixeira, Cassiano; Feijó, Cristiano; Camargo, Spencer Marcantonio; de Oliveira, Neymar Elias; David, André Ibrahim; Prinz, Rafael Augusto Dantas; Herranz, Laura Brasil; de Andrade, Joel

    2016-01-01

    Organ transplantation is the only alternative for many patients with terminal diseases. The increasing disproportion between the high demand for organ transplants and the low rate of transplants actually performed is worrisome. Some of the causes of this disproportion are errors in the identification of potential organ donors and in the determination of contraindications by the attending staff. Therefore, the aim of the present document is to provide guidelines for intensive care multi-professional staffs for the recognition, assessment and acceptance of potential organ donors. PMID:27737418

  5. Error latency measurements in symbolic architectures

    NASA Technical Reports Server (NTRS)

    Young, L. T.; Iyer, R. K.

    1991-01-01

    Error latency, the time that elapses between the occurrence of an error and its detection, has a significant effect on reliability. In computer systems, failure rates can be elevated during a burst of system activity due to increased detection of latent errors. A hybrid monitoring environment is developed to measure the error latency distribution of errors occurring in main memory. The objective of this study is to develop a methodology for gauging the dependability of individual data categories within a real-time application. The hybrid monitoring technique is novel in that it selects and categorizes a specific subset of the available blocks of memory to monitor. The precise times of reads and writes are collected, so no actual faults need be injected. Unlike previous monitoring studies that rely on a periodic sampling approach or on statistical approximation, this new approach permits continuous monitoring of referencing activity and precise measurement of error latency.

  6. Acceptability of human risk.

    PubMed Central

    Kasperson, R E

    1983-01-01

    This paper has three objectives: to explore the nature of the problem implicit in the term "risk acceptability," to examine the possible contributions of scientific information to risk standard-setting, and to argue that societal response is best guided by considerations of process rather than formal methods of analysis. Most technological risks are not accepted but are imposed. There is also little reason to expect consensus among individuals on their tolerance of risk. Moreover, debates about risk levels are often at base debates over the adequacy of the institutions which manage the risks. Scientific information can contribute three broad types of analyses to risk-setting deliberations: contextual analysis, equity assessment, and public preference analysis. More effective risk-setting decisions will involve attention to the process used, particularly in regard to the requirements of procedural justice and democratic responsibility. PMID:6418541

  7. Refractive error blindness.

    PubMed Central

    Dandona, R.; Dandona, L.

    2001-01-01

    Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669

  8. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  9. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

    This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human in the Loop (HITL) studies of SATS HVO and Baseline operations.
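
    A minimal sketch of one way such deviation statistics can be summarized (synthetic data, not the experiment's measurements): RMS and 95th-percentile absolute deviations are common summary statistics when comparing lateral, vertical, and airspeed FTE between conditions.

```python
import numpy as np

# Synthetic deviation time histories standing in for recorded-minus-reference
# track data; in a real comparison these would come from the flight logs.
rng = np.random.default_rng(0)
deviations = {
    "lateral (m)":   rng.normal(0.0, 80.0, size=600),
    "vertical (ft)": rng.normal(0.0, 40.0, size=600),
    "airspeed (kt)": rng.normal(0.0, 4.0, size=600),
}

for name, dev in deviations.items():
    rms = np.sqrt(np.mean(dev ** 2))               # root-mean-square deviation
    p95 = np.percentile(np.abs(dev), 95)           # 95th percentile of |deviation|
    print(f"{name:14s}  RMS {rms:7.2f}   95th pct |dev| {p95:7.2f}")
```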

  10. Age and Acceptance of Euthanasia.

    ERIC Educational Resources Information Center

    Ward, Russell A.

    1980-01-01

    Study explores relationship between age (and sex and race) and acceptance of euthanasia. Women and non-Whites were less accepting because of religiosity. Among older people less acceptance was attributable to their lesser education and greater religiosity. Results suggest that quality of life in old age affects acceptability of euthanasia. (Author)

  11. Proofreading for word errors.

    PubMed

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  12. Errors in neuroradiology.

    PubMed

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4% of radiologic interpretations in daily practice contain errors, and discrepancy rates of 2-20% of reports have been reported. Fortunately, most of these are minor errors or, if serious, are found and corrected with sufficient promptness; diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. Misdiagnosis/misinterpretation rates rise in the emergency setting and early in the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcification and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately, in a timely fashion, directly to the treatment team.

  13. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  14. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.

  15. Baby-Crying Acceptance

    NASA Astrophysics Data System (ADS)

    Martins, Tiago; de Magalhães, Sérgio Tenreiro

    Crying is a baby's most important means of communication. The crying-monitoring devices developed to date do not ensure the complete safety of the child. It is necessary to complement these technological resources with means of communicating the results to the caregiver, which involves digital processing of the information available in the crying. The survey carried out made it possible to assess the level of adoption, in the continental territory of Portugal, of a technology able to perform such digital processing. The Technology Acceptance Model (TAM) was used as the theoretical framework. The statistical analysis showed that there is a good probability of acceptance of such a system.

  16. High acceptance recoil polarimeter

    SciTech Connect

    The HARP Collaboration

    1992-12-05

    In order to detect neutrons and protons in the 50 to 600 MeV energy range and measure their polarization, an efficient, low-noise, self-calibrating device is being designed. This detector, known as the High Acceptance Recoil Polarimeter (HARP), is based on the recoil principle of proton detection from np → n′p′ or pp → p′p′ scattering (detected particles are underlined), which intrinsically yields polarization information on the incoming particle. HARP will be commissioned to carry out experiments in 1994.

  17. On-Machine Acceptance

    SciTech Connect

    Arnold, K.F.

    2000-02-14

    Probing processes are used intermittently and not effectively as an on-line measurement device. This project was needed to evolve machine probing from merely a setup aid to an on-the-machine inspection system. Use of probing for on-machine inspection would significantly decrease cycle time by elimination of the need for first-piece inspection (at a remote location). Federal Manufacturing and Technologies (FM and T) had the manufacturing facility and the ability to integrate the system into production. The Contractor had a system that could optimize the machine tool to compensate for thermal growth and related error.

  18. Reduction in Hospital-Wide Clinical Laboratory Specimen Identification Errors following Process Interventions: A 10-Year Retrospective Observational Study

    PubMed Central

    Ning, Hsiao-Chen; Lin, Chia-Ni; Chiu, Daniel Tsun-Yee; Chang, Yung-Ta; Wen, Chiao-Ni; Peng, Shu-Yu; Chu, Tsung-Lan; Yu, Hsin-Ming; Wu, Tsu-Lan

    2016-01-01

    Background Accurate patient identification and specimen labeling at the time of collection are crucial steps in the prevention of medical errors, thereby improving patient safety. Methods All patient specimen identification errors that occurred in the outpatient department (OPD), emergency department (ED), and inpatient department (IPD) of a 3,800-bed academic medical center in Taiwan were documented and analyzed retrospectively from 2005 to 2014. To reduce such errors, the following series of strategies were implemented: a restrictive specimen acceptance policy for the ED and IPD in 2006; a computer-assisted barcode positive patient identification system for the ED and IPD in 2007 and 2010; and automated sample labeling combined with electronic identification systems introduced to the OPD in 2009. Results Of the 2,000,345 specimens collected in 2005, 1,023 (0.0511%) were identified as having patient identification errors, compared with 58 errors (0.0015%) among 3,761,238 specimens collected in 2014, after serial interventions; this represents a 97% relative reduction. The total numbers (rates) of institutional identification errors contributed by the ED, IPD, and OPD over the 10-year period were 423 (0.1058%), 556 (0.0587%), and 44 (0.0067%) errors before the interventions, and 3 (0.0007%), 52 (0.0045%), and 3 (0.0001%) after the interventions, representing relative reductions of 99%, 92%, and 98%, respectively. Conclusions Accurate patient identification is a challenge of patient safety in different health settings. The data collected in our study indicate that a restrictive specimen acceptance policy, computer-generated positive identification systems, and interdisciplinary cooperation can significantly reduce patient identification errors. PMID:27494020

  19. A class of error estimators based on interpolating the finite element solutions for reaction-diffusion equations

    SciTech Connect

    Lin, T.; Wang, H.

    1995-12-31

    The swift improvement of computational capabilities enables us to apply finite element methods to simulate more and more problems arising from various applications. A fundamental question associated with finite element simulations is their accuracy. In other words, before we can make any decisions based on the numerical solutions, we must be sure that they are acceptable in the sense that their errors are within the given tolerances. Various estimators have been developed to assess the accuracy of finite element solutions, and they can be classified basically into two types: a priori error estimates and a posteriori error estimates. While a priori error estimates can give us asymptotic convergence rates of numerical solutions in terms of the grid size before the computations, they depend on certain Sobolev norms of the true solutions which are not known, in general. Therefore, it is difficult, if not impossible, to use a priori estimates directly to decide whether a numerical solution is acceptable or a finer partition (and so a new numerical solution) is needed. In contrast, a posteriori error estimates depend only on the numerical solutions, and they usually give computable quantities about the accuracy of the numerical solutions.
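
    As a concrete illustration of a computable a posteriori quantity (a generic residual-based indicator, not the interpolation-based estimators proposed in the paper), the sketch below solves a 1-D reaction-diffusion problem −u″ + c·u = f with linear finite elements and sums element-wise residual indicators into an error estimate.

```python
import numpy as np

# -u'' + c*u = f on (0,1), u(0) = u(1) = 0, linear finite elements, uniform mesh.
c, f = 1.0, lambda x: 1.0
n = 16                      # number of elements
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Assemble the tridiagonal stiffness + mass matrix and load vector (interior nodes).
A = np.zeros((n - 1, n - 1))
b = np.zeros(n - 1)
for i in range(n - 1):
    A[i, i] = 2.0 / h + c * 2.0 * h / 3.0
    if i > 0:
        A[i, i - 1] = -1.0 / h + c * h / 6.0
        A[i - 1, i] = A[i, i - 1]
    b[i] = f(x[i + 1]) * h                  # nodal load, exact here since f = 1

u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(A, b)

# Residual-based a posteriori indicator per element K:
#   eta_K^2 = h^2 * ||f - c*u_h||^2_{L2(K)} + h/2 * (jumps of u_h' at K's interior ends)^2
grad = np.diff(u) / h                       # u_h' is constant on each element
eta2 = np.zeros(n)
for k in range(n):
    xm = 0.5 * (x[k] + x[k + 1])            # element midpoint
    um = 0.5 * (u[k] + u[k + 1])
    eta2[k] = h ** 2 * (f(xm) - c * um) ** 2 * h      # interior residual term
    for node in (k, k + 1):                 # flux jumps at interior nodes only
        if 0 < node < n:
            jump = grad[node] - grad[node - 1]
            eta2[k] += 0.5 * h * jump ** 2

print("estimated energy-norm error ~", np.sqrt(eta2.sum()))
```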

  20. Acceptance threshold theory can explain occurrence of homosexual behaviour.

    PubMed

    Engel, Katharina C; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra

    2015-01-01

    Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors.
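
    The following sketch illustrates the flavor of such an acceptance-threshold argument with a toy signal-detection model: recognition cues of females and males are assumed to overlap (Gaussians), and the threshold is chosen to minimize the expected cost of the two error types. All distributions, priors, and costs are illustrative assumptions, not values from the study.

```python
from math import erf, sqrt

# Cue distributions for females and males overlap; a male accepts a partner
# whose cue value exceeds the threshold.  The optimal threshold trades off
# the cost of rejecting a female against the cost of accepting a male.
mu_f, mu_m, sd = 1.0, 0.0, 0.7      # assumed cue means (females higher) and spread
p_female = 0.5                       # assumed encounter probability

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def expected_cost(threshold, cost_reject_female, cost_accept_male):
    p_reject_female = norm_cdf((threshold - mu_f) / sd)        # rejection error
    p_accept_male = 1.0 - norm_cdf((threshold - mu_m) / sd)    # acceptance error
    return (p_female * cost_reject_female * p_reject_female
            + (1 - p_female) * cost_accept_male * p_accept_male)

def best_threshold(cost_reject_female, cost_accept_male):
    grid = [i / 100.0 for i in range(-200, 300)]
    return min(grid, key=lambda t: expected_cost(t, cost_reject_female,
                                                 cost_accept_male))

# When rejection errors are costly (e.g. little time left to find a mate), the
# optimal threshold drops and more males are accepted; the reverse holds when
# acceptance errors are costly.
print(best_threshold(cost_reject_female=10.0, cost_accept_male=1.0))   # permissive
print(best_threshold(cost_reject_female=1.0, cost_accept_male=10.0))   # restrictive
```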

  1. Motivation and semantic context affect brain error-monitoring activity: an event-related brain potentials study.

    PubMed

    Ganushchak, Lesya Y; Schiller, Niels O

    2008-01-01

    During speech production, we continuously monitor what we say. In situations in which speech errors potentially have more severe consequences, e.g. during a public presentation, our verbal self-monitoring system may pay closer attention to preventing errors than in situations in which speech errors are more acceptable, such as a casual conversation. In an event-related potential study, we investigated whether or not motivation affected participants' performance using a picture naming task in a semantic blocking paradigm. The semantic context of the to-be-named pictures was manipulated; blocks were semantically related (e.g., cat, dog, horse, etc.) or semantically unrelated (e.g., cat, table, flute, etc.). Motivation was manipulated independently by monetary reward. The motivation manipulation did not affect error rate during picture naming. However, the high-motivation condition yielded increased amplitude and latency values of the error-related negativity (ERN) compared to the low-motivation condition, presumably indicating higher monitoring activity. Furthermore, participants showed semantic interference effects in reaction times and error rates. The ERN amplitude was also larger during semantically related than unrelated blocks, presumably indicating that semantic relatedness induces more conflict between possible verbal responses. PMID:17920932

  2. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  3. Empirical Tests of Acceptance Sampling Plans

    NASA Technical Reports Server (NTRS)

    White, K. Preston, Jr.; Johnson, Kenneth L.

    2012-01-01

    Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that these provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
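
    For readers unfamiliar with attributes plans, the sketch below designs a single-sampling binomial plan for assumed illustrative risk points (it is not one of the published plans tested in the paper): it searches for the smallest sample size n and acceptance number c that satisfy the stated producer's and consumer's risks.

```python
from math import comb

# Single-sampling attributes plan: accept the lot if at most c of the n
# inspected items are defective.  We require
#   P(accept | p = AQL)  >= 1 - alpha   (producer's risk alpha)
#   P(accept | p = LTPD) <= beta        (consumer's risk beta)
def binom_cdf(c, n, p):
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(c + 1))

def design_plan(aql=0.01, ltpd=0.05, alpha=0.05, beta=0.10, n_max=2000):
    for n in range(1, n_max + 1):
        # Largest acceptance number still meeting the consumer's risk at LTPD.
        c = 0
        while c + 1 <= n and binom_cdf(c + 1, n, ltpd) <= beta:
            c += 1
        if binom_cdf(c, n, ltpd) <= beta and binom_cdf(c, n, aql) >= 1 - alpha:
            return n, c
    return None

n, c = design_plan()
print(f"inspect n = {n} items; accept the lot if at most c = {c} are defective")
```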

  4. Alcohol and error processing.

    PubMed

    Holroyd, Clay B; Yeung, Nick

    2003-08-01

    A recent study indicates that alcohol consumption reduces the amplitude of the error-related negativity (ERN), a negative deflection in the electroencephalogram associated with error commission. Here, we explore possible mechanisms underlying this result in the context of two recent theories about the neural system that produces the ERN - one based on principles of reinforcement learning and the other based on response conflict monitoring.

  5. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  6. Maternal acceptance of human papillomavirus vaccine in Malaysia.

    PubMed

    Sam, I-Ching; Wong, Li-Ping; Rampal, Sanjay; Leong, Yin-Hui; Pang, Chan-Fu; Tai, Yong-Ting; Tee, Hwee-Ching; Kahar-Bador, Maria

    2009-06-01

    Acceptability rates of human papillomavirus (HPV) vaccination by 362 Malaysian mothers were 65.7% and 55.8% for daughters and sons, respectively. Younger mothers, and those who knew someone with cancer, were more willing to vaccinate their daughters. If the vaccine was routine and cost free, acceptability rate was 97.8%. PMID:19465327

  7. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Nagel, David C.

    1988-01-01

    The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.

  8. Error monitoring in musicians.

    PubMed

    Maidhof, Clemens

    2013-01-01

    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255

  9. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  10. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
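
    The building block of such analyses is the (effective) free distance. As a hedged illustration, the sketch below computes the ordinary free distance of the familiar rate-1/2, memory-2 convolutional code with octal generators (7, 5) by a shortest-path search over its trellis; per-input-position effective free distances for multi-input encoders are computed in the same spirit.

```python
import heapq

# Rate-1/2, memory-2 convolutional code with generators (7, 5) octal.
G = [0b111, 0b101]

def step(state, bit):
    """One trellis transition: return (next_state, Hamming weight of the 2 output bits)."""
    reg = (bit << 2) | state                 # register contents: new bit + 2-bit state
    out_w = sum(bin(reg & g).count("1") & 1 for g in G)
    return reg >> 1, out_w

def free_distance():
    # Dijkstra over the trellis: leave the zero state with an input '1' and find
    # the minimum-weight path that first returns to the zero state.
    best = {}
    s, w = step(0, 1)
    pq = [(w, s)]
    while pq:
        w, s = heapq.heappop(pq)
        if s == 0:
            return w                         # first return to zero = free distance
        if best.get(s, float("inf")) <= w:
            continue
        best[s] = w
        for bit in (0, 1):
            ns, dw = step(s, bit)
            heapq.heappush(pq, (w + dw, ns))

print(free_distance())                       # prints 5 for the (7, 5) code
```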

  11. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
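
    A minimal sketch of the quantization step (using the standard JPEG luminance table purely as a stand-in; a perceptually optimized matrix, as in the invention, would instead be derived from visual masking models):

```python
import numpy as np

N = 8
# Orthonormal DCT-II matrix: forward 2-D DCT of a block B is C @ B @ C.T.
C = np.array([[np.sqrt((1 if k == 0 else 2) / N)
               * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
               for n in range(N)] for k in range(N)])

# Standard JPEG luminance quantization table (illustrative stand-in only).
Q = np.array([[16, 11, 10, 16, 24, 40, 51, 61],
              [12, 12, 14, 19, 26, 58, 60, 55],
              [14, 13, 16, 24, 40, 57, 69, 56],
              [14, 17, 22, 29, 51, 87, 80, 62],
              [18, 22, 37, 56, 68, 109, 103, 77],
              [24, 35, 55, 64, 81, 104, 113, 92],
              [49, 64, 78, 87, 103, 121, 120, 101],
              [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(N, N)).astype(float) - 128.0   # level-shifted pixels

coeffs = C @ block @ C.T                     # forward 2-D DCT
quantized = np.round(coeffs / Q)             # lossy step: coarser where Q is large
reconstructed = C.T @ (quantized * Q) @ C    # dequantize and inverse DCT

rmse = np.sqrt(np.mean((block - reconstructed) ** 2))
print(f"RMS reconstruction error: {rmse:.2f} grey levels")
```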

  12. Children acceptance of laser dental treatment

    NASA Astrophysics Data System (ADS)

    Lazea, Andreea; Todea, Carmen

    2016-03-01

    Objectives: To evaluate the dental anxiety level and the degree of acceptance of laser-assisted pedodontic treatments on the children's part. Also, we want to underline the advantages of laser use in pediatric dentistry, to make this technology widely used in treating the dental problems of our child patients. Methods: Thirty pediatric dental patients presented in the Department of Pedodontics, University of Medicine and Pharmacy "Victor Babeş", Timişoara were evaluated using the Wong-Baker pain rating scale, which was administered postoperatively to all patients to assess their level of laser therapy acceptance. Results: The Wong-Baker faces pain rating scale (WBFPS) has good validity and high specificity; generally it is easy for children to use, easy to compare, and has good feasibility. Laser treatment has been accepted and tolerated by pediatric patients for its ability to reduce or eliminate pain. Around 70% of the total sample showed an excellent acceptance of laser dental treatment. Conclusions: Laser technology is useful and effective in many clinical situations encountered in pediatric dentistry, and a good level of patient acceptance is reported during all laser procedures on hard and soft tissues.

  13. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
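
    A toy illustration of the three approaches, for the standard error of g(μ) = exp(μ) where μ is estimated by a sample mean (this is not the article's Stata/LIMDEP code):

```python
import numpy as np

# Standard error of a function of an estimated parameter by three routes:
# delta method, Krinsky-Robb simulation, and the nonparametric bootstrap.
rng = np.random.default_rng(42)
y = rng.normal(loc=0.5, scale=1.0, size=200)

mu_hat = y.mean()
se_mu = y.std(ddof=1) / np.sqrt(len(y))

# 1. Delta method: SE[g(mu_hat)] ~= |g'(mu_hat)| * SE[mu_hat], with g'(mu) = exp(mu).
se_delta = np.exp(mu_hat) * se_mu

# 2. Krinsky-Robb: draw parameters from their estimated sampling distribution
#    and take the standard deviation of g over the draws.
draws = rng.normal(mu_hat, se_mu, size=10000)
se_kr = np.exp(draws).std(ddof=1)

# 3. Nonparametric bootstrap: resample the data, re-estimate mu, re-apply g.
boot = [np.exp(rng.choice(y, size=len(y), replace=True).mean())
        for _ in range(2000)]
se_boot = np.std(boot, ddof=1)

print(f"delta {se_delta:.4f}  Krinsky-Robb {se_kr:.4f}  bootstrap {se_boot:.4f}")
```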

  14. The Nature of Error in Adolescent Student Writing

    ERIC Educational Resources Information Center

    Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang

    2014-01-01

    This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…

  15. Dialogues on prediction errors.

    PubMed

    Niv, Yael; Schoenbaum, Geoffrey

    2008-07-01

    The recognition that computational ideas from reinforcement learning are relevant to the study of neural circuits has taken the cognitive neuroscience community by storm. A central tenet of these models is that discrepancies between actual and expected outcomes can be used for learning. Neural correlates of such prediction-error signals have been observed now in midbrain dopaminergic neurons, striatum, amygdala and even prefrontal cortex, and models incorporating prediction errors have been invoked to explain complex phenomena such as the transition from goal-directed to habitual behavior. Yet, like any revolution, the fast-paced progress has left an uneven understanding in its wake. Here, we provide answers to ten simple questions about prediction errors, with the aim of exposing both the strengths and the limitations of this active area of neuroscience research.
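
    The core computational idea is compact enough to show in a few lines: a temporal-difference prediction error δ = r + γV(s′) − V(s) drives learning of value estimates. The chain task below is purely schematic, not a model of any particular neural system.

```python
# Toy temporal-difference learning driven by a reward prediction error.
n_states, gamma, alpha = 5, 0.9, 0.1
V = [0.0] * n_states                              # value estimates per state

for episode in range(500):
    s = 0
    while s < n_states - 1:                       # walk down the chain
        s_next = s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0    # reward only at the end
        delta = r + gamma * V[s_next] - V[s]          # prediction error
        V[s] += alpha * delta                         # learn from the error
        s = s_next

# Values approach [gamma**3, gamma**2, gamma, 1.0, 0.0] = [0.73, 0.81, 0.9, 1.0, 0.0]
print([round(v, 2) for v in V])
```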

  16. Experimental Quantum Error Detection

    PubMed Central

    Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi

    2012-01-01

    Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047

  17. Using Errors by Guard Honeybees (Apis mellifera) to Gain New Insights into Nestmate Recognition Signals.

    PubMed

    Pradella, Duccio; Martin, Stephen J; Dani, Francesca R

    2015-11-01

    Although the honeybee (Apis mellifera) is one of the world most studied insects, the chemical compounds used in nestmate recognition, remains an open question. By exploiting the error prone recognition system of the honeybee, coupled with genotyping, we studied the correlation between cuticular hydrocarbon (CHC) profile of returning foragers and acceptance or rejection behavior by guards. We revealed an average recognition error rate of 14% across 3 study colonies, that is, allowing a non-nestmate colony entry, or preventing a nestmate from entry, which is lower than reported in previous studies. By analyzing CHCs, we found that CHC profile of returning foragers correlates with acceptance or rejection by guarding bees. Although several CHC were identified as potential recognition cues, only a subset of 4 differed consistently for their relative amount between accepted and rejected individuals in the 3 studied colonies. These include a unique group of 2 positional alkene isomers (Z-8 and Z-10), which are almost exclusively produced by the bees Bombus and Apis spp, and may be candidate compounds for further study.

  18. The influence of the IMRT QA set-up error on the 2D and 3D gamma evaluation method as obtained by using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kim, Kyeong-Hyeon; Kim, Dong-Su; Kim, Tae-Ho; Kang, Seong-Hee; Cho, Min-Seok; Suh, Tae Suk

    2015-11-01

    The phantom-alignment error is one of the factors affecting delivery quality assurance (QA) accuracy in intensity-modulated radiation therapy (IMRT). Accordingly, a possibility of inadequate use of spatial information in gamma evaluation may exist for patient-specific IMRT QA. The influence of the phantom-alignment error on gamma evaluation can be demonstrated experimentally by using the gamma passing rate and the gamma value. However, such experimental methods have a limitation regarding the intrinsic verification of the influence of the phantom set-up error because experimentally measuring the phantom-alignment error accurately is impossible. To overcome this limitation, we aimed to verify the effect of the phantom set-up error within the gamma evaluation formula by using a Monte Carlo simulation. Artificial phantom set-up errors were simulated, and the concept of the true point (TP) was used to represent the actual coordinates of the measurement point for the mathematical modeling of these effects on the gamma. Using dose distributions acquired from the Monte Carlo simulation, we performed gamma evaluations in 2D and 3D. The results of the gamma evaluations and the dose difference at the TP were classified to verify the degrees of dose reflection at the TP. The 2D and the 3D gamma errors were defined by comparing gamma values between the case of the imposed phantom set-up error and the TP in order to investigate the effect of the set-up error on the gamma value. According to the results for gamma errors, the 3D gamma evaluation reflected the dose at the TP better than the 2D one. Moreover, the gamma passing rates were higher for 3D than for 2D, as is widely known. Thus, the 3D gamma evaluation can increase the precision of patient-specific IMRT QA by applying stringent acceptance criteria and setting a reasonable action level for the 3D gamma passing rate.
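
    For reference, the gamma metric itself is simple to state: for each measured point, gamma is the minimum, over points of the evaluated distribution, of the combined normalized dose-difference and distance-to-agreement terms, and the passing rate is the fraction of points with gamma ≤ 1. The sketch below is a brute-force 1-D version with assumed 3%/3 mm criteria, not the authors' 2-D/3-D implementation.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0, threshold=0.10):
    """Global 1-D gamma index and passing rate (dd: dose criterion, dta: mm)."""
    d_max = d_ref.max()
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        if dr < threshold * d_max:                    # skip the low-dose region
            continue
        dose_term = (d_eval - dr) / (dd * d_max)      # global dose-difference term
        dist_term = (x_eval - xr) / dta               # distance-to-agreement term
        gammas.append(np.sqrt(dose_term ** 2 + dist_term ** 2).min())
    gammas = np.array(gammas)
    return gammas, np.mean(gammas <= 1.0)

x = np.linspace(0.0, 100.0, 201)                      # positions in mm
reference = np.exp(-((x - 50.0) / 15.0) ** 2)         # idealized reference profile
evaluated = 1.02 * np.exp(-((x - 51.0) / 15.0) ** 2)  # slightly shifted/scaled "delivery"

gam, passing = gamma_1d(x, reference, x, evaluated)
print(f"gamma passing rate: {100 * passing:.1f}%  (max gamma {gam.max():.2f})")
```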

  19. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  20. Soft Error Vulnerability of Iterative Linear Algebra Methods

    SciTech Connect

    Bronevetsky, G; de Supinski, B

    2008-01-19

    Devices are increasingly vulnerable to soft errors as their feature sizes shrink. Previously, soft error rates were significant primarily in space and high-atmospheric computing. Modern architectures now use features so small at sufficiently low voltages that soft errors are becoming important even at terrestrial altitudes. Due to their large number of components, supercomputers are particularly susceptible to soft errors. Since many large-scale parallel scientific applications use iterative linear algebra methods, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. Many users consider these methods invulnerable to most soft errors since they converge from an imprecise solution to a precise one. However, we show in this paper that iterative methods are vulnerable to soft errors, exhibiting both silent data corruptions and poor ability to detect errors. Further, we evaluate a variety of soft error detection and tolerance techniques, including checkpointing, linear matrix encodings, and residual tracking techniques.
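
    One of the detection ideas mentioned, residual tracking, can be sketched compactly: the recursively updated residual inside conjugate gradient is periodically compared against the explicitly recomputed residual b − Ax, and a disagreement flags a silent corruption. The fault injection and thresholds below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def cg_with_check(A, b, n_iters=200, check_every=10, inject_at=50, tol_ratio=1e-6):
    """Conjugate gradient with periodic residual-tracking checks."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(n_iters):
        if k == inject_at:                       # simulate a silent corruption of x
            x[len(x) // 2] *= 1.5
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap                          # cheap, recursively updated residual
        rs_new = r @ r
        if k % check_every == 0:
            true_r = b - A @ x                   # explicit (trusted) residual
            drift = np.linalg.norm(true_r - r) / np.linalg.norm(b)
            if drift > tol_ratio:
                print(f"iteration {k}: residual drift {drift:.2e} -> corruption detected")
                return x
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(3)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100 * np.eye(100)                  # symmetric positive definite test matrix
b = rng.standard_normal(100)
cg_with_check(A, b)
```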

  1. Sonic boom acceptability studies

    NASA Astrophysics Data System (ADS)

    Shepherd, Kevin P.; Sullivan, Brenda M.; Leatherwood, Jack D.; McCurdy, David A.

    1992-04-01

    The determination of the magnitude of sonic boom exposure which would be acceptable to the general population requires, as a starting point, a method to assess and compare individual sonic booms. There is no consensus within the scientific and regulatory communities regarding an appropriate sonic boom assessment metric. Loudness, being a fundamental and well-understood attribute of human hearing was chosen as a means of comparing sonic booms of differing shapes and amplitudes. The figure illustrates the basic steps which yield a calculated value of loudness. Based upon the aircraft configuration and its operating conditions, the sonic boom pressure signature which reaches the ground is calculated. This pressure-time history is transformed to the frequency domain and converted into a one-third octave band spectrum. The essence of the loudness method is to account for the frequency response and integration characteristics of the auditory system. The result of the calculation procedure is a numerical description (perceived level, dB) which represents the loudness of the sonic boom waveform.

  2. Sonic boom acceptability studies

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P.; Sullivan, Brenda M.; Leatherwood, Jack D.; McCurdy, David A.

    1992-01-01

    The determination of the magnitude of sonic boom exposure which would be acceptable to the general population requires, as a starting point, a method to assess and compare individual sonic booms. There is no consensus within the scientific and regulatory communities regarding an appropriate sonic boom assessment metric. Loudness, being a fundamental and well-understood attribute of human hearing was chosen as a means of comparing sonic booms of differing shapes and amplitudes. The figure illustrates the basic steps which yield a calculated value of loudness. Based upon the aircraft configuration and its operating conditions, the sonic boom pressure signature which reaches the ground is calculated. This pressure-time history is transformed to the frequency domain and converted into a one-third octave band spectrum. The essence of the loudness method is to account for the frequency response and integration characteristics of the auditory system. The result of the calculation procedure is a numerical description (perceived level, dB) which represents the loudness of the sonic boom waveform.
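
    A simplified sketch of the intermediate steps described above (idealized N-wave parameters assumed; the full perceived-level calculation would additionally apply a loudness model, such as Stevens' Mark VII, to the band levels):

```python
import numpy as np

# From a sonic-boom pressure signature to one-third-octave band levels.
fs = 20000.0                                    # sample rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)               # 1-second analysis window
duration, peak = 0.3, 50.0                      # assumed N-wave length (s) and overpressure (Pa)
p = np.where(t < duration, peak * (1.0 - 2.0 * t / duration), 0.0)   # idealized N-wave

spectrum = np.fft.rfft(p) / len(p)
freqs = np.fft.rfftfreq(len(p), 1.0 / fs)
power = 2.0 * np.abs(spectrum) ** 2             # one-sided mean-square contributions

p_ref = 20e-6                                   # reference pressure, Pa
centers = 1000.0 * 2.0 ** (np.arange(-24, 13) / 3.0)   # ~3.9 Hz to 16 kHz
for fc in centers:
    lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)       # band edges
    band = power[(freqs >= lo) & (freqs < hi)].sum()
    if band > 0:
        spl = 10.0 * np.log10(band / p_ref ** 2)
        print(f"{fc:8.1f} Hz band: {spl:6.1f} dB")
```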

  3. Relationship between Recent Flight Experience and Pilot Error General Aviation Accidents

    NASA Astrophysics Data System (ADS)

    Nilsson, Sarah J.

    Aviation insurance agents and fixed-base operation (FBO) owners use recent flight experience, as implied by the 90-day rule, to measure pilot proficiency in physical airplane skills, and to assess the likelihood of a pilot error accident. The generally accepted premise is that more experience in a recent timeframe predicts less of a propensity for an accident, all other factors excluded. Some of these aviation industry stakeholders measure pilot proficiency solely by using time flown within the past 90, 60, or even 30 days, not accounting for extensive research showing aeronautical decision-making and situational awareness training decrease the likelihood of a pilot error accident. In an effort to reduce the pilot error accident rate, the Federal Aviation Administration (FAA) has seen the need to shift pilot training emphasis from proficiency in physical airplane skills to aeronautical decision-making and situational awareness skills. However, current pilot training standards still focus more on the former than on the latter. The relationship between pilot error accidents and recent flight experience implied by the FAA's 90-day rule has not been rigorously assessed using empirical data. The intent of this research was to relate recent flight experience, in terms of time flown in the past 90 days, to pilot error accidents. A quantitative ex post facto approach, focusing on private pilots of single-engine general aviation (GA) fixed-wing aircraft, was used to analyze National Transportation Safety Board (NTSB) accident investigation archival data. The data were analyzed using t-tests and binary logistic regression. T-tests between the mean number of hours of recent flight experience of tricycle gear pilots involved in pilot error accidents (TPE) and non-pilot error accidents (TNPE), t(202) = -.200, p = .842, and conventional gear pilots involved in pilot error accidents (CPE) and non-pilot error accidents (CNPE), t(111) = -.271, p = .787, indicate there is no

  4. Automatically generated acceptance test: A software reliability experiment

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.

    1988-01-01

    This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.

  5. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define System; Identify human-machine; List human actions; Identify potential errors; Identify factors that effect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.

  6. [Medical device use errors].

    PubMed

    Friesdorf, Wolfgang; Marsolek, Ingo

    2008-01-01

    Medical devices define our everyday patient treatment processes. But despite their beneficial effect, every use can also lead to harm. Use errors are thus often explained by human failure. But human errors can never be completely eliminated, especially in such complex work processes as those in medicine, which often involve time pressure. Therefore we need error-tolerant work systems in which potential problems are identified and solved as early as possible. In this context human engineering uses the TOP principle: technological before organisational and then person-related solutions. But especially in everyday medical work we realise that error-prone usability concepts can often only be counterbalanced by organisational or person-related measures. Thus human failure is pre-programmed. In addition, many medical work places represent a somewhat chaotic accumulation of individual devices with totally different user interaction concepts. There is not only a lack of holistic work place concepts, but of holistic process and system concepts as well. However, this can only be achieved through the co-operation of producers, healthcare providers and clinical users, by systematically analyzing and iteratively optimizing the underlying treatment processes from both a technological and organizational perspective. What we need is a joint platform like medilab V of the TU Berlin, in which the entire medical treatment chain can be simulated in order to discuss, experiment and model--a key to a safe and efficient healthcare system of the future. PMID:19213452

  7. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  8. Help prevent hospital errors

    MedlinePlus


  9. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  10. Inborn Errors of Metabolism.

    PubMed

    Ezgu, Fatih

    2016-01-01

    Inborn errors of metabolism are single-gene disorders resulting from defects in the biochemical pathways of the body. Although these disorders are individually rare, collectively they account for a significant portion of childhood disability and deaths. Most of the disorders are inherited as autosomal recessive traits, although autosomal dominant and X-linked disorders also occur. The clinical signs and symptoms arise from the accumulation of the toxic substrate, deficiency of the product, or both. Depending on the residual activity of the deficient enzyme, the onset of the clinical picture may vary from the newborn period up until adulthood. Hundreds of disorders have been described to date, and there is considerable clinical overlap between certain inborn errors. As a result, the definitive diagnosis of inborn errors depends on enzyme assays or genetic tests. In recent years in particular, significant advances have been made in the biochemical and genetic diagnosis of inborn errors. Techniques such as tandem mass spectrometry and gas chromatography for biochemical diagnosis, and microarrays and next-generation sequencing for genetic diagnosis, have enabled rapid and accurate diagnosis. These diagnostic advances have also enabled newborn screening and prenatal diagnosis. In parallel with the development of diagnostic methods, significant progress has also been made in treatment. Treatment approaches such as special diets, enzyme replacement therapy, substrate inhibition, and organ transplantation have been widely used. It is clear that, with the help of the preclinical and clinical research carried out on inborn errors, better diagnostic methods and better treatment approaches will very likely become available.

  11. Spin glasses and error-correcting codes

    NASA Technical Reports Server (NTRS)

    Belongie, M. L.

    1994-01-01

    In this article, we study a model for error-correcting codes that comes from spin glass theory and leads to both new codes and a new decoding technique. Using the theory of spin glasses, it has been proven that a simple construction yields a family of binary codes whose performance asymptotically approaches the Shannon bound for the Gaussian channel. The limit is approached as the number of information bits per codeword approaches infinity while the rate of the code approaches zero. Thus, the codes rapidly become impractical. We present simulation results that show the performance of a few manageable examples of these codes. In the correspondence that exists between spin glasses and error-correcting codes, the concept of a thermal average leads to a method of decoding that differs from the standard method of finding the most likely information sequence for a given received codeword. Whereas the standard method corresponds to calculating the thermal average at temperature zero, calculating the thermal average at a certain optimum temperature results instead in the sequence of most likely information bits. Since linear block codes and convolutional codes can be viewed as examples of spin glasses, this new decoding method can be used to decode these codes in a way that minimizes the bit error rate instead of the codeword error rate. We present simulation results that show a small improvement in bit error rate by using the thermal average technique.
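
    The distinction between the two decoding rules can be illustrated by brute-force enumeration on a small code (here a (7,4) Hamming code over a binary symmetric channel, chosen purely for convenience): the zero-temperature rule picks the single most likely codeword, while the "thermal average" rule decides each information bit from posterior probabilities summed over all codewords.

```python
from itertools import product

# (7,4) Hamming code, systematic generator matrix; BSC crossover probability p.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
p = 0.1

def encode(info):
    return [sum(i * g for i, g in zip(info, col)) % 2 for col in zip(*G)]

codebook = [(info, encode(info)) for info in product((0, 1), repeat=4)]

def decode(received):
    # Posterior weight of each codeword given the received word.
    weights = []
    for _, cw in codebook:
        d = sum(r != c for r, c in zip(received, cw))
        weights.append((p ** d) * ((1 - p) ** (len(cw) - d)))
    # Codeword-MAP (most likely sequence, "zero temperature"):
    ml_info = codebook[max(range(len(codebook)), key=lambda i: weights[i])][0]
    # Bitwise-MAP ("thermal average"): marginalize each bit over all codewords.
    bitwise = []
    for j in range(4):
        w1 = sum(w for (info, _), w in zip(codebook, weights) if info[j] == 1)
        w0 = sum(w for (info, _), w in zip(codebook, weights) if info[j] == 0)
        bitwise.append(1 if w1 > w0 else 0)
    return ml_info, bitwise

received = [1, 0, 1, 1, 0, 1, 0]            # an arbitrary received word
print(decode(received))
```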

  12. Tropical errors and convection

    NASA Astrophysics Data System (ADS)

    Bechtold, P.; Bauer, P.; Engelen, R. J.

    2012-12-01

    Tropical convection is analysed in the ECMWF Integrated Forecast System (IFS) through tropical errors and their evolution during the last decade as a function of model resolution and model changes. As the characterization of these errors is particularly difficult over tropical oceans due to sparse in situ upper-air data, more weight compared to the middle latitudes is given in the analysis to the underlying forecast model. Therefore, special attention is paid to available near-surface observations and to comparison with analysis from other Centers. There is a systematic lack of low-level wind convergence in the Inner Tropical Convergence Zone (ITCZ) in the IFS, leading to a spindown of the Hadley cell. Critical areas with strong cross-equatorial flow and large wind errors are the Indian Ocean with large interannual variations in forecast errors, and the East Pacific with persistent systematic errors that have evolved little during the last decade. The analysis quality in the East Pacific is affected by observation errors inherent to the atmospheric motion vector wind product. The model's tropical climate and its variability and teleconnections are also evaluated, with a particular focus on the Madden-Julian Oscillation (MJO) during the Year of Tropical Convection (YOTC). The model is shown to reproduce the observed tropical large-scale wave spectra and teleconnections, but overestimates the precipitation during the South-East Asian summer monsoon. The recent improvements in tropical precipitation, convectively coupled wave and MJO predictability are shown to be strongly related to improvements in the convection parameterization that realistically represents the convection sensitivity to environmental moisture, and the large-scale forcing due to the use of strong entrainment and a variable adjustment time-scale. There is however a remaining slight moistening tendency and low-level wind imbalance in the model that is responsible for the Asian Monsoon bias and for too

  13. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  14. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
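
    A minimal sketch of the interval approach (the formula and uncertainties are arbitrary examples, not those of the article): each measured input carries its uncertainty as an interval, and every operation returns an interval guaranteed to enclose the true result.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __truediv__(self, other):
        assert other.lo > 0 or other.hi < 0, "divisor interval must exclude zero"
        quotients = [self.lo / other.lo, self.lo / other.hi,
                     self.hi / other.lo, self.hi / other.hi]
        return Interval(min(quotients), max(quotients))

def measured(value, err):
    """Measurement with symmetric uncertainty, represented as an interval."""
    return Interval(value - err, value + err)

# Example: resistance from Ohm's law, R = V / I, with measurement uncertainties.
V = measured(12.0, 0.1)       # volts
I = measured(2.0, 0.05)       # amperes
R = V / I
print(f"R lies in [{R.lo:.3f}, {R.hi:.3f}] ohms "
      f"(half-width {0.5 * (R.hi - R.lo):.3f})")
```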

  15. Neural Correlates of Reach Errors

    PubMed Central

    Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza

    2005-01-01

    Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showed strong trial-by-trial adaptation in response to random execution errors but not in response to random target errors. We used fMRI and a compatible robot to study brain regions involved in processing each kind of error. Both kinematic and dynamic execution errors activated regions along the central and the post-central sulci and in lobules V, VI, and VIII of the cerebellum, making these areas possible sites of plastic changes in internal models for reaching. Only activity related to kinematic errors extended into parietal area 5. These results are inconsistent with the idea that kinematics and dynamics of reaching are computed in separate neural entities. In contrast, only target errors caused increased activity in the striatum and the posterior superior parietal lobule. The cerebellum and motor cortex were as strongly activated as with execution errors. These findings indicate a neural and behavioral dissociation between errors that lead to switching of behavioral goals, and errors that lead to adaptation of internal models of limb dynamics and kinematics. PMID:16251440

  16. The Insufficiency of Error Analysis

    ERIC Educational Resources Information Center

    Hammarberg, B.

    1974-01-01

    The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…

  17. Foliated Quantum Error-Correcting Codes.

    PubMed

    Bolt, A; Duclos-Cianci, G; Poulin, D; Stace, T M

    2016-08-12

    We show how to construct a large class of quantum error-correcting codes, known as Calderbank-Steane-Shor codes, from highly entangled cluster states. This becomes a primitive in a protocol that foliates a series of such cluster states into a much larger cluster state, implementing foliated quantum error correction. We exemplify this construction with several familiar quantum error-correction codes and propose a generic method for decoding foliated codes. We numerically evaluate the error-correction performance of a family of finite-rate Calderbank-Steane-Shor codes known as turbo codes, finding that they perform well over moderate depth foliations. Foliated codes have applications for quantum repeaters and fault-tolerant measurement-based quantum computation. PMID:27563942

  18. Foliated Quantum Error-Correcting Codes

    NASA Astrophysics Data System (ADS)

    Bolt, A.; Duclos-Cianci, G.; Poulin, D.; Stace, T. M.

    2016-08-01

    We show how to construct a large class of quantum error-correcting codes, known as Calderbank-Shor-Steane codes, from highly entangled cluster states. This becomes a primitive in a protocol that foliates a series of such cluster states into a much larger cluster state, implementing foliated quantum error correction. We exemplify this construction with several familiar quantum error-correction codes and propose a generic method for decoding foliated codes. We numerically evaluate the error-correction performance of a family of finite-rate Calderbank-Shor-Steane codes known as turbo codes, finding that they perform well over moderate depth foliations. Foliated codes have applications for quantum repeaters and fault-tolerant measurement-based quantum computation.
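
    As a small, concrete illustration of the Calderbank-Shor-Steane construction named in the two records above (not of the foliation protocol itself), the following sketch, assuming Python with NumPy, checks the CSS condition for the seven-qubit Steane code and computes one error syndrome:

      import numpy as np

      # The [[7,1,3]] Steane code is a CSS code built from the classical [7,4]
      # Hamming code: the X-type and Z-type stabilizer checks both use the same
      # parity-check matrix H.  The CSS condition is Hx @ Hz.T = 0 over GF(2).
      H = np.array([[0, 0, 0, 1, 1, 1, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [1, 0, 1, 0, 1, 0, 1]])
      Hx, Hz = H, H
      assert not ((Hx @ Hz.T) % 2).any(), "X and Z stabilizers would not commute"

      # A single Z error on qubit 3 (0-indexed) is diagnosed by the X-type checks:
      error = np.zeros(7, dtype=int)
      error[3] = 1
      print("syndrome:", (Hx @ error) % 2)    # nonzero pattern locates the error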

  19. A Simple Approach to Experimental Errors

    ERIC Educational Resources Information Center

    Phillips, M. D.

    1972-01-01

    Classifies experimental error into two main groups: systematic errors (instrument, personal, inherent, and variational errors) and random errors (reading and setting errors), and presents mathematical treatments for the determination of random errors. (PR)
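
    The mathematical treatments are not reproduced in the abstract; one conventional treatment of random (reading and setting) errors is the standard error of the mean over repeated readings, sketched below in Python with made-up data:

      import math

      # Repeated readings of one quantity; the scatter is attributed to random
      # reading/setting errors only (systematic errors are not visible here).
      readings = [9.81, 9.79, 9.84, 9.80, 9.83, 9.78, 9.82]

      n = len(readings)
      mean = sum(readings) / n
      s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))   # sample SD
      sem = s / math.sqrt(n)                      # standard error of the mean

      print(f"best estimate = {mean:.3f} +/- {sem:.3f} (random error only)")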

  20. Manson's triple error.

    PubMed

    Delaporte, F.

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  1. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suite for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  2. Encoding of Sensory Prediction Errors in the Human Cerebellum

    PubMed Central

    Schlerf, John; Ivry, Richard B.; Diedrichsen, Jörn

    2015-01-01

    A central tenet of motor neuroscience is that the cerebellum learns from sensory prediction errors. Surprisingly, neuroimaging studies have not revealed definitive signatures of error processing in the cerebellum. Furthermore, neurophysiologic studies suggest an asymmetry, such that the cerebellum may encode errors arising from unexpected sensory events, but not errors reflecting the omission of expected stimuli. We conducted an imaging study to compare the cerebellar response to these two types of errors. Participants made fast out-and-back reaching movements, aiming either for an object that delivered a force pulse if intersected or for a gap between two objects, either of which delivered a force pulse if intersected. Errors (missing the target) could therefore be signaled either through the presence or absence of a force pulse. In an initial analysis, the cerebellar BOLD response was smaller on trials with errors compared with trials without errors. However, we also observed an error-related decrease in heart rate. After correcting for variation in heart rate, increased activation during error trials was observed in the hand area of lobules V and VI. This effect was similar for the two error types. The results provide evidence for the encoding of errors resulting from either the unexpected presence or unexpected absence of sensory stimulation in the human cerebellum. PMID:22492047

  3. Medication errors in primary care in Riyadh City, Saudi Arabia.

    PubMed

    Khoja, T; Neyaz, Y; Qureshi, N A; Magzoub, M A; Haycox, A; Walley, T

    2011-02-01

    Medication errors can cause a variety of adverse drug events but are potentially preventable. This cross-sectional study analysed all medication prescriptions from 5 public and 5 private primary health care clinics in Riyadh city, collected by simple random sampling during 1 working day. Prescriptions for 2463 and 2836 drugs from public and private clinics respectively were examined for errors, which were analysed using Neville et al.'s classification of prescription errors. Prescribing errors were found on 990/5299 (18.7%) prescriptions. Both type B and type C errors (major and minor nuisance) were more often associated with prescriptions from public than private clinics. Type D errors (trivial) were significantly more likely to occur with private health sector prescriptions. Type A errors (potentially serious) were rare (8/5299 drugs; 0.15%) and the rate did not differ significantly between the 2 health sectors. The development of preventive strategies for avoiding prescription errors is crucial. PMID:21735951

  4. Medical error and human factors engineering: where are we now?

    PubMed

    Gawron, Valerie J; Drury, Colin G; Fairbanks, Rollin J; Berger, Roseanne C

    2006-01-01

    The goal of human factors engineering is to optimize the relationship between humans and systems by studying human behavior, abilities, and limitations and using this knowledge to design systems for safe and effective human use. With the assumption that the human component of any system will inevitably produce errors, human factors engineers design systems and human/machine interfaces that are robust enough to reduce error rates and the effect of the inevitable error within the system. In this article, we review the extent and nature of medical error and then discuss human factors engineering tools that have potential applicability. These tools include taxonomies of human and system error and error data collection and analysis methods. Finally, we describe studies that have examined medical error, and on the basis of these studies, present conclusions about how human factors engineering can significantly reduce medical errors and their effects.

  5. [The notion and classification of expert errors].

    PubMed

    Klevno, V A

    2012-01-01

    The author presents the analysis of the legal and forensic medical literature concerning currently accepted concepts and classification of expert malpractice. He proposes a new easy-to-remember definition of the expert error and considers the classification of such mistakes. The analysis of the cases of erroneous application of the medical criteria for estimation of the harm to health made it possible to reveal and systematize the causes accounting for the cases of expert malpractice committed by forensic medical experts and health providers when determining the degree of harm to human health. PMID:22686055

  6. Type I error control for tree classification.

    PubMed

    Jung, Sin-Ho; Chen, Yong; Ahn, Hongshik

    2014-01-01

    Binary tree classification has been useful for classifying a whole population based on the levels of an outcome variable associated with chosen predictors. Often we start a classification with a large number of candidate predictors, and each predictor takes a number of different cutoff values. Because of these types of multiplicity, the binary tree classification method is subject to a severely inflated type I error probability. Nonetheless, there have not been many publications addressing this issue. In this paper, we propose a binary tree classification method that controls the probability of accepting a predictor below a certain level, say 5%.

  7. 5 CFR 531.409 - Acceptable level of competence determinations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... REGULATIONS PAY UNDER THE GENERAL SCHEDULE Within-Grade Increases § 531.409 Acceptable level of competence... competence in his or her current position, and the employee has not been given a performance rating in any... acceptable level of competence, the within-grade increase will be granted retroactively to the beginning...

  8. Risk comparisons, conflict, and risk acceptability claims.

    PubMed

    Johnson, Branden B

    2004-02-01

    Despite many claims for and against the use of risk comparisons in risk communication, few empirical studies have explored their effect. Even fewer have examined the public's relative preferences among different kinds of risk comparisons. Two studies, published in this journal in 1990 and 2003, used seven measures of "acceptability" to examine public reaction to 14 examples of risk comparisons, as used by a hypothetical factory manager to explain risks of his ethylene oxide plant. This study examined the effect on preferences of scenarios involving low or high conflict between the factory manager and residents of the hypothetical town (as had the 2003 study), and inclusion of a claim that the comparison demonstrated the risks' acceptability. It also tested the Finucane et al. (2000) affect hypothesis that information emphasizing low risks-as in these risk comparisons-would raise benefits estimates without changing risk estimates. Using similar but revised scenarios, risk comparison examples (10 instead of 14), and evaluation measures, an opportunity sample of 303 New Jersey residents rated the comparisons, and the risks and benefits of the factory. On average, all comparisons received positive ratings on all evaluation measures in all conditions. Direct and indirect measures showed that the conflict manipulation worked; overall, No-Conflict and Conflict scenarios evoked scores that were not significantly different. The attachment to each risk comparison of a risk acceptability claim ("So our factory's risks should be acceptable to you.") did not worsen ratings relative to conditions lacking this claim. Readers who did or did not see this claim were equally likely to infer an attempt to persuade them to accept the risk from the comparison. As in the 2003 article, there was great individual variability in inferred rankings of the risk comparisons. However, exposure to the risk comparisons did not reduce risk estimates significantly (while raising benefit estimates

  9. Cone penetrometer acceptance test report

    SciTech Connect

    Boechler, G.N.

    1996-09-19

    This Acceptance Test Report (ATR) documents the results of acceptance test procedure WHC-SD-WM-ATR-151. Included in this report are a summary of the tests, the results and issues, the signature and sign-off ATP pages, and a summarized table of the specification vs. ATP section that satisfied the specification.

  10. Dissolution test acceptance sampling plans.

    PubMed

    Tsong, Y; Hammerstrom, T; Lin, K; Ong, T E

    1995-07-01

    The U.S. Pharmacopeia (USP) general monograph provides a standard for dissolution compliance with the requirements as stated in the individual USP monograph for a tablet or capsule dosage form. The acceptance rules recommended by USP have important roles in the quality control process. The USP rules and their modifications are often used as an industrial lot release sampling plan, where a lot is accepted when the tablets or capsules sampled are accepted as proof of compliance with the requirement. In this paper, the operating characteristics of the USP acceptance rules are reviewed and compared to a selected modification. The operating characteristics curves show that the USP acceptance rules are sensitive to the true mean dissolution and do not reject a lot or batch that has a large percentage of tablets that dissolve with less than the dissolution specification.
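
    The staged USP criteria themselves are not reproduced in the abstract; the sketch below, assuming Python with SciPy, shows only the generic idea of an operating characteristic curve for a simplified single-stage attributes plan (accept the lot if at most c of n sampled units fail), which is a stand-in, not the USP rule:

      from scipy.stats import binom

      n, c = 24, 2     # sample 24 units, accept the lot if at most 2 fail the spec

      print(" true proportion failing   P(accept lot)")
      for p in (0.01, 0.02, 0.05, 0.10, 0.15, 0.20, 0.30):
          # P(X <= c) with X ~ Binomial(n, p): one point on the OC curve.
          print(f"          {p:4.2f}               {binom.cdf(c, n, p):5.3f}")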

  11. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled

  12. Human decision error (HUMDEE) trees

    SciTech Connect

    Ostrom, L.T.

    1993-08-01

    Graphical presentations of human actions in incident and accident sequences have been used for many years. However, for the most part, human decision making has been underrepresented in these trees. This paper presents a method of incorporating the human decision process into graphical presentations of incident/accident sequences. This presentation is in the form of logic trees. These trees are called Human Decision Error Trees or HUMDEE for short. The primary benefit of HUMDEE trees is that they graphically illustrate what else the individuals involved in the event could have done to prevent either the initiation or continuation of the event. HUMDEE trees also present the alternate paths available at the operator decision points in the incident/accident sequence. This is different from the Technique for Human Error Rate Prediction (THERP) event trees. There are many uses of these trees. They can be used for incident/accident investigations to show what other courses of action were available and for training operators. The trees also have a consequence component so that not only the decision but also the consequence of that decision can be explored.

  13. Inertial and Magnetic Sensor Data Compression Considering the Estimation Error

    PubMed Central

    Suh, Young Soo

    2009-01-01

    This paper presents a compression method for inertial and magnetic sensor data, where the compressed data are used to estimate some states. When sensor data are bounded, the proposed compression method guarantees that the compression error is smaller than a prescribed bound. The manner in which this error bound affects the bit rate and the estimation error is investigated. Through the simulation, it is shown that the estimation error is improved by 18.81% over a test set of 12 cases compared with a filter that does not use the compression error bound. PMID:22454564

  14. Data Analysis & Statistical Methods for Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors, the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these variables. We also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
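
    The paper's actual model and dataset are not given here; the sketch below, assuming Python with statsmodels and made-up numbers, illustrates one of the named techniques (regression of error counts on a workload covariate, with files radiated as exposure so the fit describes an error rate):

      import numpy as np
      import statsmodels.api as sm

      # Hypothetical per-period data: command files radiated, a subjective
      # workload score, and the number of command file errors observed.
      files    = np.array([12, 30, 25, 40, 18, 55, 33, 47, 22, 60])
      workload = np.array([ 2,  3,  3,  4,  2,  5,  3,  4,  2,  5])
      errors   = np.array([ 0,  1,  1,  2,  0,  4,  1,  3,  0,  5])

      # Poisson regression with the number of files as exposure: models errors per file.
      X = sm.add_constant(workload)
      fit = sm.GLM(errors, X, family=sm.families.Poisson(), exposure=files).fit()
      print(fit.summary())

      b0, b1 = fit.params                      # [intercept, workload coefficient]
      print("expected errors for 35 files at workload 4:", 35 * np.exp(b0 + b1 * 4))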

  15. Error catastrophe in populations under similarity-essential recombination.

    PubMed

    de Aguiar, Marcus A M; Schneider, David M; do Carmo, Eduardo; Campos, Paulo R A; Martins, Ayana B

    2015-06-01

    Organisms are often more likely to exchange genetic information with others that are similar to themselves. One of the most widely accepted mechanisms of RNA virus recombination requires substantial sequence similarity between the parental RNAs and is termed similarity-essential recombination. This mechanism may be considered analogous to assortative mating, an important form of non-random mating that can be found in animals and plants. Here we study the dynamics of haplotype frequencies in populations evolving under similarity-essential recombination. Haplotypes are represented by a genome of B biallelic loci and the Hamming distance between individuals is used as a criterion for recombination. We derive the evolution equations for the haplotype frequencies assuming that recombination does not occur if the genetic distance is larger than a critical value G and that mutation occurs at a rate μ per locus. Additionally, uniform crossover is considered. Although no fitness is directly associated with the haplotypes, we show that frequency-dependent selection emerges dynamically and governs the haplotype distribution. A critical mutation rate μc can be identified as the error threshold transition, beyond which this selective information cannot be stored. For μ<μc the distribution consists of a dominant sequence surrounded by a cloud of closely related sequences, characterizing a quasispecies. For μ>μc the distribution becomes uniform, with all haplotypes having the same frequency. In the case of extreme assortativeness, where individuals only recombine with others identical to themselves (G=0), the error threshold is μc=1/4, independently of the genome size. For weak assortativity (G=B-1), μc=2^{-(B+1)}, and for the case of no assortativity (G=B), μc=0. We compute the mutation threshold for 0…

  16. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    EPA Science Inventory

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  17. The Location of Error: Reflections on a Research Project

    ERIC Educational Resources Information Center

    Cook, Devan

    2010-01-01

    Andrea Lunsford and Karen Lunsford conclude "Mistakes Are a Fact of Life: A National Comparative Study," a discussion of their research project exploring patterns of formal grammar and usage error in first-year writing, with an invitation to "conduct a local version of this study." The author was eager to accept their invitation; learning and…

  18. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, valid type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid the serious variance underestimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
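
    A minimal sketch of the cluster-bootstrap step only, assuming Python with pandas and the lifelines package for the Cox fit; the column names and resampling count are illustrative, and the two-step variant is omitted:

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      def cluster_bootstrap_se(df, cluster_col, n_boot=200, seed=0):
          """Resample whole clusters with replacement, refit Cox's model each
          time, and report the SD of the coefficients across refits."""
          rng = np.random.default_rng(seed)
          clusters = df[cluster_col].unique()
          coefs = []
          for _ in range(n_boot):
              picked = rng.choice(clusters, size=len(clusters), replace=True)
              boot = pd.concat([df[df[cluster_col] == c] for c in picked],
                               ignore_index=True)
              cph = CoxPHFitter()
              cph.fit(boot.drop(columns=[cluster_col]),
                      duration_col="time", event_col="event")
              coefs.append(cph.params_.values)
          return np.std(coefs, axis=0, ddof=1)

      # Usage, given a data frame with columns time, event, x, cluster:
      #   se = cluster_bootstrap_se(df, cluster_col="cluster")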

  19. Horizon sensor errors calculated by computer models compared with errors measured in orbit

    NASA Technical Reports Server (NTRS)

    Ward, K. A.; Hogan, R.; Andary, J.

    1982-01-01

    Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.

  20. Entanglement-assisted zero-error codes

    NASA Astrophysics Data System (ADS)

    Matthews, William; Mancinska, Laura; Leung, Debbie; Ozols, Maris; Roy, Aidan

    2011-03-01

    Zero-error information theory studies the transmission of data over noisy communication channels with strictly zero error probability. For classical channels and data, much of the theory can be studied in terms of combinatorial graph properties and is a source of hard open problems in that domain. In recent work, we investigated how entanglement between sender and receiver can be used in this task. We found that entanglement-assisted zero-error codes (which are still naturally studied in terms of graphs) sometimes offer an increased bit rate of zero-error communication even in the large block length limit. The assisted codes that we have constructed are closely related to Kochen-Specker proofs of non-contextuality as studied in the context of foundational physics, and our results on asymptotic rates of assisted zero-error communication yield non-contextuality proofs which are particularly 'strong' in a certain quantitative sense. I will also describe formal connections to the multi-prover games known as pseudo-telepathy games.

  1. Extending the Technology Acceptance Model: Policy Acceptance Model (PAM)

    NASA Astrophysics Data System (ADS)

    Pierce, Tamra

    There has been extensive research on how new ideas and technologies are accepted in society. This has resulted in the creation of many models that are used to discover and assess the contributing factors. The Technology Acceptance Model (TAM) is one that is a widely accepted model. This model examines people's acceptance of new technologies based on variables that directly correlate to how the end user views the product. This paper introduces the Policy Acceptance Model (PAM), an expansion of TAM, which is designed for the analysis and evaluation of acceptance of new policy implementation. PAM includes the traditional constructs of TAM and adds the variables of age, ethnicity, and family. The model is demonstrated using a survey of people's attitude toward the upcoming healthcare reform in the United States (US) from 72 survey respondents. The aim is that the theory behind this model can be used as a framework that will be applicable to studies looking at the introduction of any new or modified policies.

  2. Online Error Reporting for Managing Quality Control Within Radiology.

    PubMed

    Golnari, Pedram; Forsberg, Daniel; Rosipko, Beverly; Sunshine, Jeffrey L

    2016-06-01

    Information technology systems within health care, such as picture archiving and communication system (PACS) in radiology, can have a positive impact on production but can also risk compromising quality. The widespread use of PACS has removed the previous feedback loop between radiologists and technologists. Instead of direct communication of quality discrepancies found for an examination, the radiologist submitted a paper-based quality-control report. A web-based issue-reporting tool can help restore some of the feedback loop and also provide possibilities for more detailed analysis of submitted errors. The purpose of this study was to evaluate the hypothesis that data from use of an online error reporting software for quality control can focus our efforts within our department. For the 372,258 radiologic examinations conducted during the 6-month period study, 930 errors (390 exam protocol, 390 exam validation, and 150 exam technique) were submitted, corresponding to an error rate of 0.25 %. Within the category exam protocol, technologist documentation had the highest number of submitted errors in ultrasonography (77 errors [44 %]), while imaging protocol errors were the highest subtype error for computed tomography modality (35 errors [18 %]). Positioning and incorrect accession had the highest errors in the exam technique and exam validation error category, respectively, for nearly all of the modalities. An error rate less than 1 % could signify a system with a very high quality; however, a more likely explanation is that not all errors were detected or reported. Furthermore, staff reception of the error reporting system could also affect the reporting rate. PMID:26510753

  3. Do Errors on Classroom Reading Tasks Slow Growth in Reading? Technical Report No. 404.

    ERIC Educational Resources Information Center

    Anderson, Richard C.; And Others

    A pervasive finding from research on teaching and classroom learning is that a low rate of error on classroom tasks is associated with large year to year gains in achievement, particularly for reading in the primary grades. The finding of a negative relationship between error rate, especially rate of oral reading errors, and gains in reading…

  4. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  5. [Dealing with errors in medicine].

    PubMed

    Schoenenberger, R A; Perruchoud, A P

    1998-12-24

    Iatrogenic disease is probably more commonly than assumed the consequence of errors and mistakes committed by physicians and other medical personnel. Traditionally, strategies to prevent errors in medicine focus on inspection and rely on the professional ethos of health care personnel. The increasingly complex nature of medical practice and the multitude of interventions that each patient receives increase the likelihood of error. More efficient approaches to deal with errors have been developed. The methods include routine identification of errors (critical incidence report), systematic monitoring of multiple-step processes in medical practice, system analysis, and system redesign. A search for underlying causes of errors (rather than distal causes) will enable organizations to collectively learn without denying the inevitable occurrence of human error. Errors and mistakes may become precious chances to increase the quality of medical care.

  6. Final Report for Dynamic Models for Causal Analysis of Panel Data. The Impact of Measurement Error in the Analysis of Log-Linear Rate Models: Monte Carlo Findings. Part III, Chapter 4.

    ERIC Educational Resources Information Center

    Carroll, Glenn R.; And Others

    This document is part of a series of chapters described in SO 011 759. The chapter advocates the analysis of event-histories (data giving the number, timing, and sequence of changes in a categorical dependent variable) with maximum likelihood estimators (MLE) applied to log-linear rate models. Results from a Monte Carlo investigation of the impact…

  7. Preventing medication errors in cancer chemotherapy.

    PubMed

    Cohen, M R; Anderson, R W; Attilio, R M; Green, L; Muller, R J; Pruemer, J M

    1996-04-01

    Recommendations for preventing medication errors in cancer chemotherapy are made. Before a health care provider is granted privileges to prescribe, dispense, or administer antineoplastic agents, he or she should undergo a tailored educational program and possibly testing or certification. Appropriate reference materials should be developed. Each institution should develop a dose-verification process with as many independent checks as possible. A detailed checklist covering prescribing, transcribing, dispensing, and administration should be used. Oral orders are not acceptable. All doses should be calculated independently by the physician, the pharmacist, and the nurse. Dosage limits should be established and a review process set up for doses that exceed the limits. These limits should be entered into pharmacy computer systems, listed on preprinted order forms, stated on the product packaging, placed in strategic locations in the institution, and communicated to employees. The prescribing vocabulary must be standardized. Acronyms, abbreviations, and brand names must be avoided and steps taken to avoid other sources of confusion in the written orders, such as trailing zeros. Preprinted antineoplastic drug order forms containing checklists can help avoid errors. Manufacturers should be encouraged to avoid or eliminate ambiguities in drug names and dosing information. Patients must be educated about all aspects of their cancer chemotherapy, as patients represent a last line of defense against errors. An interdisciplinary team at each practice site should review every medication error reported. Pharmacists should be involved at all sites where antineoplastic agents are dispensed. Although it may not be possible to eliminate all medication errors in cancer chemotherapy, the risk can be minimized through specific steps. Because of their training and experience, pharmacists should take the lead in this effort. PMID:8697025

  8. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  9. At least some errors are randomly generated (Freud was wrong)

    NASA Technical Reports Server (NTRS)

    Sellen, A. J.; Senders, J. W.

    1986-01-01

    An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three digital numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness of fit tests for a Poisson distribution for the number of errors per 50 trial interval and for an exponential distribution of the length of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task driven factors producing exogenous errors. Some errors, at least, are the result of constant probability generating mechanisms with error rate idiosyncratically determined for each subject.
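
    The subjects' data are not available here; the sketch below, assuming Python with NumPy/SciPy and a simulated constant-probability error process, shows how the two quoted goodness-of-fit checks might be run:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      errors = rng.random(1000) < 0.03        # 1000 trials, constant error probability

      # (1) Poisson check on the number of errors per 50-trial block (coarse:
      #     with only 20 blocks, small expected cells are not pooled here).
      counts = errors.reshape(20, 50).sum(axis=1)
      lam = counts.mean()
      k = np.arange(counts.max() + 1)
      expected = len(counts) * stats.poisson.pmf(k, lam)
      observed = np.array([(counts == v).sum() for v in k])
      chi2 = ((observed - expected) ** 2 / expected).sum()
      print("Poisson chi-square statistic:", round(float(chi2), 2))

      # (2) Exponential check on the intervals (in trials) between errors.
      gaps = np.diff(np.flatnonzero(errors))
      d, p = stats.kstest(gaps, "expon", args=(0, gaps.mean()))
      print(f"KS test vs exponential: D = {d:.3f}, p = {p:.3f}")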

  10. Information systems and human error in the lab.

    PubMed

    Bissell, Michael G

    2004-01-01

    Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that introduction of these systems results in operators having less practice in dealing with unexpected events or becoming deskilled in problem-solving, however, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.

  11. Reducing nurse medicine administration errors.

    PubMed

    Ofosu, Rose; Jarrett, Patricia

    Errors in administering medicines are common and can compromise the safety of patients. This review discusses the causes of drug administration error in hospitals by student and registered nurses, and the practical measures educators and hospitals can take to improve nurses' knowledge and skills in medicines management, and reduce drug errors.

  12. Error Bounds for Interpolative Approximations.

    ERIC Educational Resources Information Center

    Gal-Ezer, J.; Zwas, G.

    1990-01-01

    Elementary error estimation in the approximation of functions by polynomials as a computational assignment, error-bounding functions and error bounds, and the choice of interpolation points are discussed. Precalculus and computer instruction are used on some of the calculations. (KR)
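
    The article itself is only summarized above; the bound presumably at issue is the standard textbook result for interpolating f at nodes x_0, ..., x_n on [a, b], stated here for reference:

      \[
        f(x) - p_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\prod_{i=0}^{n}(x - x_i)
        \quad\Longrightarrow\quad
        \bigl|f(x) - p_n(x)\bigr| \le \frac{M_{n+1}}{(n+1)!}\,
        \max_{x\in[a,b]}\prod_{i=0}^{n}\lvert x - x_i\rvert ,
      \]
      where \(M_{n+1}\) bounds \(\lvert f^{(n+1)}\rvert\) on \([a,b]\); the choice of
      interpolation points enters only through the nodal product, which Chebyshev
      nodes minimize.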

  13. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  14. L-286 Acceptance Test Record

    SciTech Connect

    HARMON, B.C.

    2000-01-14

    This document provides a detailed account of how the acceptance testing was conducted for Project L-286, ''200E Area Sanitary Water Plant Effluent Stream Reduction''. The testing of the L-286 instrumentation system was conducted under the direct supervision

  15. Accepted scientific research works (abstracts).

    PubMed

    2014-01-01

    These are the 39 accepted abstracts for IAYT's Symposium on Yoga Research (SYR) September 24-24, 2014 at the Kripalu Center for Yoga & Health and published in the Final Program Guide and Abstracts. PMID:25645134

  16. Continuation rates, bleeding profile acceptability, and satisfaction of women using an oral contraceptive pill containing estradiol valerate and dienogest versus a progestogen-only pill after switching from an ethinylestradiol-containing pill in a real-life setting: results of the CONTENT study

    PubMed Central

    Briggs, Paula; Serrani, Marco; Vogtländer, Kai; Parke, Susanne

    2016-01-01

    Background Oral contraceptives are still associated with high discontinuation rates, despite their efficacy. There is a wide choice of oral contraceptives available, and the aim of this study was to assess continuation rates, bleeding profile acceptability, and the satisfaction of women in the first year of using a contraceptive pill containing estradiol valerate and dienogest (E2V/DNG) versus a progestogen-only pill (POP) in a real-life setting after discontinuing an ethinylestradiol-containing pill. Methods and results In this prospective, noninterventional, observational study, 3,152 patients were included for the efficacy analyses (n=2,558 women in the E2V/DNG group and n=592 in the POP group; two patients fulfilled the criteria of the efficacy population, but the product used was not known). Women had been taking an ethinylestradiol-containing pill ≥3 months before deciding to switch to the E2V/DNG pill or a POP. Overall, 19.8% (n=506) of E2V/DNG users and 25.8% (n=153) of POP users discontinued their prescribed pill. The median time to discontinuation was 157.0 days and 127.5 days, respectively. Time to discontinuation due to bleeding (P<0.0001) or other reasons (P=0.022) was significantly longer in the E2V/DNG group versus the POP group. The E2V/DNG pill was also associated with shorter (48.7% vs 44.1%), lighter (54% vs 46.1%), and less painful bleeding (91.1% vs 73.7%) and greater user satisfaction (80.7% vs 64.6%) than POP use, within 3–5 months after switch. Conclusion The E2V/DNG pill was associated with higher rates of continuation, bleeding profile acceptability, and user satisfaction than POP use and may be an alternative option for women who are dissatisfied with their current pill. PMID:27695365

  17. Continuation rates, bleeding profile acceptability, and satisfaction of women using an oral contraceptive pill containing estradiol valerate and dienogest versus a progestogen-only pill after switching from an ethinylestradiol-containing pill in a real-life setting: results of the CONTENT study

    PubMed Central

    Briggs, Paula; Serrani, Marco; Vogtländer, Kai; Parke, Susanne

    2016-01-01

    Background Oral contraceptives are still associated with high discontinuation rates, despite their efficacy. There is a wide choice of oral contraceptives available, and the aim of this study was to assess continuation rates, bleeding profile acceptability, and the satisfaction of women in the first year of using a contraceptive pill containing estradiol valerate and dienogest (E2V/DNG) versus a progestogen-only pill (POP) in a real-life setting after discontinuing an ethinylestradiol-containing pill. Methods and results In this prospective, noninterventional, observational study, 3,152 patients were included for the efficacy analyses (n=2,558 women in the E2V/DNG group and n=592 in the POP group; two patients fulfilled the criteria of the efficacy population, but the product used was not known). Women had been taking an ethinylestradiol-containing pill ≥3 months before deciding to switch to the E2V/DNG pill or a POP. Overall, 19.8% (n=506) of E2V/DNG users and 25.8% (n=153) of POP users discontinued their prescribed pill. The median time to discontinuation was 157.0 days and 127.5 days, respectively. Time to discontinuation due to bleeding (P<0.0001) or other reasons (P=0.022) was significantly longer in the E2V/DNG group versus the POP group. The E2V/DNG pill was also associated with shorter (48.7% vs 44.1%), lighter (54% vs 46.1%), and less painful bleeding (91.1% vs 73.7%) and greater user satisfaction (80.7% vs 64.6%) than POP use, within 3–5 months after switch. Conclusion The E2V/DNG pill was associated with higher rates of continuation, bleeding profile acceptability, and user satisfaction than POP use and may be an alternative option for women who are dissatisfied with their current pill.

  18. Practical scheme for error control using feedback

    SciTech Connect

    Sarovar, Mohan; Milburn, Gerard J.; Ahn, Charlene; Jacobs, Kurt

    2004-05-01

    We describe a scheme for quantum-error correction that employs feedback and weak measurement rather than the standard tools of projective measurement and fast controlled unitary gates. The advantage of this scheme over previous protocols [for example, Ahn et al. Phys. Rev. A 65, 042301 (2001)], is that it requires little side processing while remaining robust to measurement inefficiency, and is therefore considerably more practical. We evaluate the performance of our scheme by simulating the correction of bit flips. We also consider implementation in a solid-state quantum-computation architecture and estimate the maximal error rate that could be corrected with current technology.

  19. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous wave, focused mode, is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  20. Errors inducing radiation overdoses.

    PubMed

    Grammaticos, Philip C

    2013-01-01

    There is no doubt that equipment that delivers radiation for therapeutic purposes should be checked often for the possibility of administering radiation overdoses to patients. Technologists, radiation safety officers, radiologists, medical physicists, healthcare providers and administration should take proper care on this issue. "We must be beneficial and not harmful to the patients", according to the Hippocratic doctrine. Cases of radiation overdose are often reported. A series of cases of radiation overdoses have recently been reported. Doctors who were responsible received heavy punishments. It is much better to prevent than to treat an error or a disease. A Personal Smart Card or Score Card has been suggested for every patient undergoing therapeutic and/or diagnostic procedures by the use of radiation. Taxonomy may also help. PMID:24251304

  1. Non-acceptance of Technology Education by Teachers in the Field.

    ERIC Educational Resources Information Center

    Rogers, George E.; Mahler, Marty

    1994-01-01

    The Stages of Concern Questionnaire was completed by 45 Nebraska and 35 Idaho industrial technology teachers. Most Nebraska teachers failed to accept technology education. Although Idaho teachers had a higher acceptance rate, nearly 69% had not adopted it. (SK)

  2. Assessing the impact of differential genotyping errors on rare variant tests of association.

    PubMed

    Mayer-Jochimsen, Morgan; Fast, Shannon; Tintle, Nathan L

    2013-01-01

    Genotyping errors are well-known to impact the power and type I error rate in single marker tests of association. Genotyping errors that happen according to the same process in cases and controls are known as non-differential genotyping errors, whereas genotyping errors that occur with different processes in the cases and controls are known as differential genotype errors. For single marker tests, non-differential genotyping errors reduce power, while differential genotyping errors increase the type I error rate. However, little is known about the behavior of the new generation of rare variant tests of association in the presence of genotyping errors. In this manuscript we use a comprehensive simulation study to explore the effects of numerous factors on the type I error rate of rare variant tests of association in the presence of differential genotyping error. We find that increased sample size, decreased minor allele frequency, and an increased number of single nucleotide variants (SNVs) included in the test all increase the type I error rate in the presence of differential genotyping errors. We also find that the greater the relative difference in case-control genotyping error rates the larger the type I error rate. Lastly, as is the case for single marker tests, genotyping errors classifying the common homozygote as the heterozygote inflate the type I error rate significantly more than errors classifying the heterozygote as the common homozygote. In general, our findings are in line with results from single marker tests. To ensure that type I error inflation does not occur when analyzing next-generation sequencing data careful consideration of study design (e.g. use of randomization), caution in meta-analysis and using publicly available controls, and the use of standard quality control metrics is critical.
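
    A toy Monte Carlo in the spirit of, but far simpler than, the simulation study described above, assuming Python with NumPy/SciPy: no true association, heterozygotes miscalled as common homozygotes at different rates in cases and controls, and the type I error of a simple carrier/burden-style test tallied. All rates and the test itself are illustrative assumptions:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      n_cases = n_controls = 1000
      n_snvs, maf = 20, 0.005
      err_case, err_control = 0.00, 0.02      # differential het -> hom miscall rates

      def carriers(n, err):
          geno = rng.binomial(2, maf, size=(n, n_snvs))    # null: no association
          het = geno == 1
          geno[het & (rng.random((n, n_snvs)) < err)] = 0  # miscall het as common hom
          return int((geno.sum(axis=1) > 0).sum())         # burden: any rare allele

      alpha, n_sim, rejections = 0.05, 1000, 0
      for _ in range(n_sim):
          a, b = carriers(n_cases, err_case), carriers(n_controls, err_control)
          _, p, _, _ = stats.chi2_contingency([[a, n_cases - a], [b, n_controls - b]])
          rejections += p < alpha

      print(f"empirical type I error: {rejections / n_sim:.3f} (nominal {alpha})")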

  3. Workload and environmental factors in hospital medication errors.

    PubMed

    Roseman, C; Booker, J M

    1995-01-01

    Nine hospital workload factors and seasonal changes in daylight and darkness were examined over a 5-year period in relation to nurse medication errors at a medical center in Anchorage, Alaska. Three workload factors, along with darkness, were found to be significant predictors of the risk of medication error. Errors increased with the number of patient days per month (OR/250 patient days = 1.61) and the number of shifts worked by temporary nursing staff (OR/10 shifts = 1.15); errors decreased with more overtime worked by permanent nursing staff members (OR/10 shifts = .85). Medication errors were 95% more likely in midwinter than in the fall, but the effect of increasing darkness was strongest; a 2-month delay was found between the level of darkness and the rate of errors. More than half of all medication errors occurred during the first 3 months of the year. PMID:7624233

  4. Transcription Errors Induce Proteotoxic Stress and Shorten Cellular Lifespan

    PubMed Central

    Vermulst, Marc; Denney, Ashley S.; Lang, Michael J.; Hung, Chao-Wei; Moore, Stephanie; Mosely, M. Arthur; Thompson, J. Will; Madden, Victoria; Gauer, Jacob; Wolfe, Katie J.; Summers, Daniel W.; Schleit, Jennifer; Sutphin, George L.; Haroon, Suraiya; Holczbauer, Agnes; Caine, Joanne; Jorgenson, James; Cyr, Douglas; Kaeberlein, Matt; Strathern, Jeffrey N.; Duncan, Mara C.; Erie, Dorothy A.

    2015-01-01

    Transcription errors occur in all living cells; however, it is unknown how these errors affect cellular health. To answer this question, we monitored yeast cells that were genetically engineered to display error-prone transcription. We discovered that these cells suffer from a profound loss in proteostasis, which sensitizes them to the expression of genes that are associated with protein-folding diseases in humans; thus, transcription errors represent a new molecular mechanism by which cells can acquire disease. We further found that the error rate of transcription increases as cells age, suggesting that transcription errors affect proteostasis particularly in aging cells. Accordingly, transcription errors accelerate the aggregation of a peptide that is implicated in Alzheimer’s disease, and shorten the lifespan of cells. These experiments reveal a novel, basic biological process that directly affects cellular health and aging. PMID:26304740

  5. Further characterization of the influence of crowding on medication errors

    PubMed Central

    Watts, Hannah; Nasim, Muhammad Umer; Sweis, Rolla; Sikka, Rishi; Kulstad, Erik

    2013-01-01

    Study Objectives: Our prior analysis suggested that error frequency increases disproportionately with Emergency department (ED) crowding. To further characterize, we measured this association while controlling for the number of charts reviewed and the presence of ambulance diversion status. We hypothesized that errors would occur significantly more frequently as crowding increased, even after controlling for higher patient volumes. Materials and Methods: We performed a prospective, observational study in a large, community hospital ED from May to October of 2009. Our ED has full-time pharmacists who review orders of patients to help identify errors prior to their causing harm. Research volunteers shadowed our ED pharmacists over discrete 4- hour time periods during their reviews of orders on patients in the ED. The total numbers of charts reviewed and errors identified were documented along with details for each error type, severity, and category. We then measured the correlation between error rate (number of errors divided by total number of charts reviewed) and ED occupancy rate while controlling for diversion status during the observational period. We estimated a sample size requirement of at least 45 errors identified to allow detection of an effect size of 0.6 based on our historical data. Results: During 324 hours of surveillance, 1171 charts were reviewed and 87 errors were identified. Median error rate per 4-hour block was 5.8% of charts reviewed (IQR 0-13). No significant change was seen with ED occupancy rate (Spearman's rho = –.08, P = .49). Median error rate during times on ambulance diversion was almost twice as large (11%, IQR 0-17), but this rate did not reach statistical significance in univariate or multivariate analysis. Conclusions: Error frequency appears to remain relatively constant across the range of crowding in our ED when controlling for patient volume via the quantity of orders reviewed. Error quantity therefore increases with crowding

  6. Register file soft error recovery

    SciTech Connect

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  7. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
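
    The paper's kinematic error model is not spelled out in the abstract; the sketch below, assuming Python with NumPy and made-up length-measurement residuals, illustrates step (3) with a deliberately simple linear volumetric model fitted by least squares:

      import numpy as np

      # Mid-points (x, y, z, in mm) of measured artifact lines and the observed
      # length errors (measured minus nominal, in micrometres).
      pts = np.array([[100.,  50.,  20.], [300., 200.,  80.], [500., 350., 150.],
                      [200., 400.,  60.], [450., 120., 100.], [ 50., 300.,  30.]])
      err_um = np.array([1.8, 5.2, 9.1, 4.6, 6.9, 2.4])

      # Simple model e(x, y, z) = c0 + c1*x + c2*y + c3*z; a real machine map
      # would use a kinematic model with many more terms.
      A = np.column_stack([np.ones(len(pts)), pts])
      coeffs, *_ = np.linalg.lstsq(A, err_um, rcond=None)
      print("fitted coefficients:", np.round(coeffs, 4))

      # Predicted (hence compensable) error at a new point in the work volume:
      x_new = np.array([1.0, 250., 250., 70.])
      print("predicted error at (250, 250, 70):", round(float(x_new @ coeffs), 2), "um")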

  8. Superdense coding interleaved with forward error correction

    DOE PAGES

    Humble, Travis S.; Sadlier, Ronald J.

    2016-05-12

    Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
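
    A minimal sketch of the row/column block-interleaving idea referred to above, in Python; this is not the authors' actual FEC pipeline, and the block dimensions and burst position are arbitrary:

      def interleave(bits, n_rows, n_cols):
          """Write row by row into an n_rows x n_cols block, read column by column."""
          return [bits[r * n_cols + c] for c in range(n_cols) for r in range(n_rows)]

      def deinterleave(bits, n_rows, n_cols):
          out = [None] * (n_rows * n_cols)
          i = 0
          for c in range(n_cols):
              for r in range(n_rows):
                  out[r * n_cols + c] = bits[i]
                  i += 1
          return out

      # Four consecutive channel errors (a burst) hit the interleaved stream ...
      sent = interleave([0] * 24, 4, 6)
      received = sent[:8] + [b ^ 1 for b in sent[8:12]] + sent[12:]
      rows = [deinterleave(received, 4, 6)[i * 6:(i + 1) * 6] for i in range(4)]
      # ... but after de-interleaving each length-6 block carries at most one
      # error, which a single-error-correcting FEC code can repair.
      print([sum(row) for row in rows])        # -> [1, 1, 1, 1]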

  9. Experimental quantum error correction with high fidelity

    SciTech Connect

    Zhang Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-09-15

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ≈ε². In the current work we reproduce a similar experiment using control techniques that have since been developed, such as the pulses generated by the gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.

  10. Scaling prediction errors to reward variability benefits error-driven learning in humans

    PubMed Central

    Schultz, Wolfram

    2015-01-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease “adapters'” accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. PMID:26180123
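
    A minimal Python sketch of the scaling idea, using a plain delta-rule learner whose prediction error is divided by the reward standard deviation before each update; the distributions, learning rate, and the assumption that the learner already knows sigma are illustrative, not the fitted models from the study.

      import random

      def learn(rewards, sigma, alpha=0.2, scale_by_sigma=True):
          estimate = 0.0
          for r in rewards:
              delta = r - estimate          # reward prediction error
              if scale_by_sigma:
                  delta /= sigma            # rescale the error to the reward variability
              estimate += alpha * delta
          return estimate

      random.seed(0)
      for sigma in (1.0, 5.0):
          rewards = [random.gauss(10.0, sigma) for _ in range(200)]
          # with scaling, accuracy in estimating the mean stays broadly similar across sigma
          print(f"sigma = {sigma}: estimated mean = {learn(rewards, sigma):.2f}")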

  11. Analysis of the "naming game" with learning errors in communications.

    PubMed

    Lou, Yang; Chen, Guanrong

    2015-07-16

    Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in naming game as well as in human language development from a network science perspective.
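
    A minimal Python sketch of one pairwise naming-game interaction with learning errors: with some probability the hearer stores a corrupted variant of the transmitted word. The fully mixed population, error model, and parameters below are illustrative and far simpler than the networked simulations reported in the paper.

      import random

      def interact(speaker, hearer, err_rate):
          word = random.choice(speaker) if speaker else f"w{random.randint(0, 999)}"
          if random.random() < err_rate:
              word = word + "'"             # learning error: a distorted variant is stored
          if word in hearer:
              speaker[:] = [word]           # success: both inventories collapse to one word
              hearer[:] = [word]
          else:
              hearer.append(word)

      random.seed(1)
      agents = [[] for _ in range(50)]
      for _ in range(20000):
          s, h = random.sample(range(50), 2)
          interact(agents[s], agents[h], err_rate=0.05)

      print("distinct words still held by the population:", len({w for a in agents for w in a}))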

  12. Analysis of the "naming game" with learning errors in communications.

    PubMed

    Lou, Yang; Chen, Guanrong

    2015-01-01

    Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in naming game as well as in human language development from a network science perspective. PMID:26178457

  13. Advancing the research agenda for diagnostic error reduction.

    PubMed

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  14. Errors Affect Hypothetical Intertemporal Food Choice in Women

    PubMed Central

    Sellitto, Manuela; di Pellegrino, Giuseppe

    2014-01-01

    Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534

  15. How perioperative nurses define, attribute causes of, and react to intraoperative nursing errors.

    PubMed

    Chard, Robin

    2010-01-01

    Errors in nursing practice pose a continuing threat to patient safety. A descriptive, correlational study was conducted to examine the definitions, circumstances, and perceived causes of intraoperative nursing errors; reactions of perioperative nurses to intraoperative nursing errors; and the relationships among coping with intraoperative nursing errors, emotional distress, and changes in practice made as a result of error. The results indicate that strategies of accepting responsibility and using self-control are significant predictors of emotional distress. Seeking social support and planful problem solving emerged as significant predictors of constructive changes in practice. Most predictive of defensive changes was the strategy of escape/avoidance.

  16. Error and attack tolerance of complex networks

    NASA Astrophysics Data System (ADS)

    Albert, Réka; Jeong, Hawoong; Barabási, Albert-László

    2000-07-01

    Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network. Complex communication networks display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web, the Internet, social networks and cells. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.
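
    A small Python sketch of the error-versus-attack comparison, assuming the networkx package is available: on a Barabási-Albert (scale-free) graph, the size of the largest connected component is compared after removing 5% of the nodes either at random ("errors") or in order of highest degree ("attack").

      import random
      import networkx as nx

      random.seed(0)
      G = nx.barabasi_albert_graph(n=1000, m=2, seed=0)
      n_remove = 50   # 5% of the nodes

      def giant_component_size(graph, nodes_to_remove):
          g = graph.copy()
          g.remove_nodes_from(nodes_to_remove)
          return len(max(nx.connected_components(g), key=len))

      random_nodes = random.sample(list(G.nodes()), n_remove)          # random failures
      hubs = sorted(G.nodes(), key=G.degree, reverse=True)[:n_remove]  # targeted attack

      print("giant component after random failures:", giant_component_size(G, random_nodes))
      print("giant component after targeted attack:", giant_component_size(G, hubs))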

  17. The error performance analysis over cyclic redundancy check codes

    NASA Astrophysics Data System (ADS)

    Yoon, Hee B.

    1991-06-01

    Burst errors are generated in digital communication networks by various unpredictable conditions; they occur at high error rates, last for short durations, and can impact services. To completely describe a burst error one has to know the bit pattern, which is impossible in practice on working systems. Therefore, under the memoryless binary symmetric channel (MBSC) assumption, performance evaluation or estimation schemes for digital signal 1 (DS1) transmission systems carrying live traffic are an interesting and important problem. This study presents some analytical methods leading to efficient algorithms for detecting burst errors using cyclic redundancy check (CRC) codes. The definition of a burst error is introduced using three different models; among these, the mathematical model is used in this study. A probability density function f(b) for burst errors of length b is proposed. The performance of CRC-n codes is evaluated and analyzed using f(b) through a computer simulation model of burst errors within a CRC block. The simulation results show that the mean block burst error tends to approach the pattern of burst errors generated by random bit errors.
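
    A minimal Python sketch of CRC-based detection, with the standard-library binascii.crc32 standing in for the CRC-n codes analyzed above: a short burst of flipped bits changes the checksum, so the receiving end can flag the block as errored.

      import binascii

      payload = bytearray(b"DS1 frame payload: live traffic sample")
      crc_tx = binascii.crc32(payload)

      for i in range(10, 13):     # simulate a short burst: corrupt three consecutive bytes
          payload[i] ^= 0x0F

      crc_rx = binascii.crc32(payload)
      print("block errored:", crc_rx != crc_tx)   # True: the burst is detected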

  18. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
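
    A small Python sketch of the binarization step: gridded wind direction in degrees is mapped to 0 (offshore) or 1 (onshore). The coastline orientation assumed below (sea to the east, so directions between 0° and 180° count as onshore) is purely illustrative; the actual convention depends on the local geometry used in CEM.

      import numpy as np

      # one 2x2 grid of wind directions (degrees) at a single 5-minute time step
      wind_dir_deg = np.array([[350.0,  95.0],
                               [170.0, 260.0]])

      # onshore if the wind comes from the assumed seaward half-plane (0-180 degrees here)
      D = ((wind_dir_deg > 0.0) & (wind_dir_deg < 180.0)).astype(int)
      print(D)    # [[0 1]
                  #  [1 0]]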

  19. Optimal input design for aircraft instrumentation systematic error estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1991-01-01

    A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that the optimal input design improved error parameter estimates and their accuracies for a fixed time input design. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.

  20. Refractive Error, Axial Length, and Relative Peripheral Refractive Error before and after the Onset of Myopia

    PubMed Central

    Mutti, Donald O.; Hayes, John R.; Mitchell, G. Lynn; Jones, Lisa A.; Moeschberger, Melvin L.; Cotter, Susan A.; Kleinstein, Robert N.; Manny, Ruth E.; Twelker, J. Daniel; Zadnik, Karla

    2009-01-01

    Purpose To evaluate refractive error, axial length, and relative peripheral refractive error before, during the year of, and after the onset of myopia in children who became myopic compared with emmetropes. Methods Subjects were 605 children 6 to 14 years of age who became myopic (at least −0.75 D in each meridian) and 374 emmetropic (between −0.25 D and + 1.00 D in each meridian at all visits) children participating between 1995 and 2003 in the Collaborative Longitudinal Evaluation of Ethnicity and Refractive Error (CLEERE) Study. Axial length was measured annually by A-scan ultrasonography. Relative peripheral refractive error (the difference between the spherical equivalent cycloplegic autorefraction 30° in the nasal visual field and in primary gaze) was measured using either of two autorefractors (R-1; Canon, Lake Success, NY [no longer manufactured] or WR 5100-K; Grand Seiko, Hiroshima, Japan). Refractive error was measured with the same autorefractor with the subjects under cycloplegia. Each variable in children who became myopic was compared to age-, gender-, and ethnicity-matched model estimates of emmetrope values for each annual visit from 5 years before through 5 years after the onset of myopia. Results In the sample as a whole, children who became myopic had less hyperopia and longer axial lengths than did emmetropes before and after the onset of myopia (4 years before through 5 years after for refractive error and 3 years before through 5 years after for axial length; P < 0.0001 for each year). Children who became myopic had more hyperopic relative peripheral refractive errors than did emmetropes from 2 years before onset through 5 years after onset of myopia (P < 0.002 for each year). The fastest rate of change in refractive error, axial length, and relative peripheral refractive error occurred during the year before onset rather than in any year after onset. Relative peripheral refractive error remained at a consistent level of hyperopia each

  1. From requirements to acceptance tests

    NASA Technical Reports Server (NTRS)

    Baize, Lionel; Pasquier, Helene

    1993-01-01

    From user requirements definition to the accepted software system, software project management wants to be sure that the system will meet the requirements. For the development of a telecommunications satellite Control Centre, C.N.E.S. has used new rules to make the use of the tracing matrix easier. From Requirements to Acceptance Tests, each item of a document must have an identifier. A unique matrix traces the system and allows the consequences of a change in the requirements to be tracked. A tool has been developed to import documents into a relational database. Each record of the database corresponds to an item of a document, and the access key is the item identifier. The tracing matrix is also processed, automatically providing links between the different documents and enabling traced items to be read on the same screen. For example, one can read simultaneously the User Requirements items, the corresponding Software Requirements items, and the Acceptance Tests.
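
    A small Python/sqlite3 sketch of the idea (the table layout and item identifiers are hypothetical): every document item becomes a record keyed by its identifier, and a trace table links User Requirements items to Software Requirements items and Acceptance Tests, so a traced chain can be read back with a single query.

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE item (id TEXT PRIMARY KEY, document TEXT, text TEXT)")
      db.execute("CREATE TABLE trace (from_id TEXT, to_id TEXT)")

      db.executemany("INSERT INTO item VALUES (?, ?, ?)", [
          ("UR-12", "User Requirements",     "Operator can display telemetry history"),
          ("SR-45", "Software Requirements", "Archive telemetry for 30 days"),
          ("AT-07", "Acceptance Tests",      "Replay archived telemetry on the console"),
      ])
      db.executemany("INSERT INTO trace VALUES (?, ?)", [("UR-12", "SR-45"), ("SR-45", "AT-07")])

      # read one traced chain: user requirement -> software requirement -> acceptance test
      rows = db.execute("""
          SELECT ur.id, sr.id, tst.id
          FROM item ur
          JOIN trace t1 ON t1.from_id = ur.id
          JOIN item sr  ON sr.id = t1.to_id
          JOIN trace t2 ON t2.from_id = sr.id
          JOIN item tst ON tst.id = t2.to_id
          WHERE ur.id = 'UR-12'
      """).fetchall()
      print(rows)   # [('UR-12', 'SR-45', 'AT-07')]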

  2. 7 CFR 6.35 - Correction of errors.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 1 2010-01-01 2010-01-01 false Correction of errors. 6.35 Section 6.35 Agriculture Office of the Secretary of Agriculture IMPORT QUOTAS AND FEES Dairy Tariff-Rate Import Quota Licensing § 6.35 Correction of errors. (a) If a person demonstrates, to the satisfaction of the...

  3. 7 CFR 6.35 - Correction of errors.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 1 2013-01-01 2013-01-01 false Correction of errors. 6.35 Section 6.35 Agriculture Office of the Secretary of Agriculture IMPORT QUOTAS AND FEES Dairy Tariff-Rate Import Quota Licensing § 6.35 Correction of errors. (a) If a person demonstrates, to the satisfaction of the...

  4. Les Erreurs en Traduction (Errors in Translation). Melanges Pedagogiques, 1970.

    ERIC Educational Resources Information Center

    Billant, J.

    An experiment was carried out to investigate errors in translation exercises done by French students studying English as a second language. A code was devised to rate errors as being: (1) lexical or grammatical, and (2) related to the signifier or the signified, with further subdivisions within these groups. While this method has the advantage…

  5. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate, completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.

  6. Understanding the acceptance factors of an Hospital Information System: evidence from a French University Hospital

    PubMed Central

    Ologeanu-Taddei, R.; Morquin, D.; Domingo, H.; Bourret, R.

    2015-01-01

    The goal of this study was to examine the perceived usefulness, the perceived ease of use, and the perceived behavioral control of a Hospital Information System (HIS) for the care staff. We administered a questionnaire composed of open-ended and closed questions based on the main concepts of the Technology Acceptance Model. The perceived usefulness, ease of use, and behavioral control (self-efficacy and organizational support) were correlated with medical occupation. For example, we found that half of the medical secretaries consider the HIS easy to use, in contrast to the anesthesiologists, surgeons, and physicians. Medical secretaries also reported the highest rate of PBC and a high rate of PU. Pharmacists reported the highest rate of PU but a low rate of PBC, similar to the rate of the surgeons and physicians. Content analysis of the open questions highlighted factors influencing these constructs: ergonomics, errors in the documenting process, and insufficient compatibility with the medical department or the occupational group. Consequently, we suggest that the gap between the perceptions of the different occupational groups may be explained by the use of different modules and by the interdependence of the care staff. PMID:26958237

  7. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.

  8. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  9. Error, signal, and the placement of Ctenophora sister to all other animals.

    PubMed

    Whelan, Nathan V; Kocot, Kevin M; Moroz, Leonid L; Halanych, Kenneth M

    2015-05-01

    Elucidating relationships among early animal lineages has been difficult, and recent phylogenomic analyses place Ctenophora sister to all other extant animals, contrary to the traditional view of Porifera as the earliest-branching animal lineage. To date, phylogenetic support for either ctenophores or sponges as sister to other animals has been limited and inconsistent among studies. Lack of agreement among phylogenomic analyses using different data and methods obscures how complex traits, such as epithelia, neurons, and muscles evolved. A consensus view of animal evolution will not be accepted until datasets and methods converge on a single hypothesis of early metazoan relationships and putative sources of systematic error (e.g., long-branch attraction, compositional bias, poor model choice) are assessed. Here, we investigate possible causes of systematic error by expanding taxon sampling with eight novel transcriptomes, strictly enforcing orthology inference criteria, and progressively examining potential causes of systematic error while using both maximum-likelihood with robust data partitioning and Bayesian inference with a site-heterogeneous model. We identified ribosomal protein genes as possessing a conflicting signal compared with other genes, which caused some past studies to infer ctenophores and cnidarians as sister. Importantly, biases resulting from elevated compositional heterogeneity or elevated substitution rates are ruled out. Placement of ctenophores as sister to all other animals, and sponge monophyly, are strongly supported under multiple analyses, herein. PMID:25902535

  10. Error, signal, and the placement of Ctenophora sister to all other animals.

    PubMed

    Whelan, Nathan V; Kocot, Kevin M; Moroz, Leonid L; Halanych, Kenneth M

    2015-05-01

    Elucidating relationships among early animal lineages has been difficult, and recent phylogenomic analyses place Ctenophora sister to all other extant animals, contrary to the traditional view of Porifera as the earliest-branching animal lineage. To date, phylogenetic support for either ctenophores or sponges as sister to other animals has been limited and inconsistent among studies. Lack of agreement among phylogenomic analyses using different data and methods obscures how complex traits, such as epithelia, neurons, and muscles evolved. A consensus view of animal evolution will not be accepted until datasets and methods converge on a single hypothesis of early metazoan relationships and putative sources of systematic error (e.g., long-branch attraction, compositional bias, poor model choice) are assessed. Here, we investigate possible causes of systematic error by expanding taxon sampling with eight novel transcriptomes, strictly enforcing orthology inference criteria, and progressively examining potential causes of systematic error while using both maximum-likelihood with robust data partitioning and Bayesian inference with a site-heterogeneous model. We identified ribosomal protein genes as possessing a conflicting signal compared with other genes, which caused some past studies to infer ctenophores and cnidarians as sister. Importantly, biases resulting from elevated compositional heterogeneity or elevated substitution rates are ruled out. Placement of ctenophores as sister to all other animals, and sponge monophyly, are strongly supported under multiple analyses, herein.

  11. Error, signal, and the placement of Ctenophora sister to all other animals

    PubMed Central

    Whelan, Nathan V.; Kocot, Kevin M.; Moroz, Leonid L.

    2015-01-01

    Elucidating relationships among early animal lineages has been difficult, and recent phylogenomic analyses place Ctenophora sister to all other extant animals, contrary to the traditional view of Porifera as the earliest-branching animal lineage. To date, phylogenetic support for either ctenophores or sponges as sister to other animals has been limited and inconsistent among studies. Lack of agreement among phylogenomic analyses using different data and methods obscures how complex traits, such as epithelia, neurons, and muscles evolved. A consensus view of animal evolution will not be accepted until datasets and methods converge on a single hypothesis of early metazoan relationships and putative sources of systematic error (e.g., long-branch attraction, compositional bias, poor model choice) are assessed. Here, we investigate possible causes of systematic error by expanding taxon sampling with eight novel transcriptomes, strictly enforcing orthology inference criteria, and progressively examining potential causes of systematic error while using both maximum-likelihood with robust data partitioning and Bayesian inference with a site-heterogeneous model. We identified ribosomal protein genes as possessing a conflicting signal compared with other genes, which caused some past studies to infer ctenophores and cnidarians as sister. Importantly, biases resulting from elevated compositional heterogeneity or elevated substitution rates are ruled out. Placement of ctenophores as sister to all other animals, and sponge monophyly, are strongly supported under multiple analyses, herein. PMID:25902535

  12. Acceptability of contraception for men: a review.

    PubMed

    Glasier, Anna

    2010-11-01

    Methods of contraception for use by men include condoms, withdrawal and vasectomy. Prevalence of use of a method and continuation rates are indirect measures of acceptability. Worldwide, none of these "male methods" accounts for more than 7% of contraceptive use although uptake varies considerably between countries. Acceptability can be assessed directly by asking about intended (hypothetical) use and assessing satisfaction during/after use. Since they have been around for a very long time, there are very few data of this nature on condoms (as contraceptives rather than for prevention of infection), withdrawal or vasectomy. There are direct data on the acceptability of hormonal methods for men but from relatively small clinical trials which undoubtedly do not represent the real world. Surveys undertaken among the male general public demonstrate that, whatever the setting, at least 25% of men - and in most countries substantially more - would consider using hormonal contraception. Although probably an overestimate of the number of potential users when such a method becomes available, it would appear that hormonal contraceptives for men may have an important place on the contraceptive menu. Despite commonly expressed views to the contrary, most women would trust their male partner to use a hormonal method.

  13. Error robustness evaluation of H.264/MPEG-4 AVC

    NASA Astrophysics Data System (ADS)

    Halbach, Till; Olsen, Steffen

    2004-01-01

    The robustness of the recently ratified video compression standard H.264/MPEG-4 AVC against channel errors is evaluated with the focus on rate distortion matters. After a brief introduction of the standard and an explanation of its error-resistant features, it is investigated how the error resilience tools of H.264 can be deployed best for packet-wise transmission as in ATM, H.323, and IP-based services. Further, the performances of two error concealment strategies for use in an H.264-conform decoder are compared to each other.

  14. Medication Errors in Outpatient Pediatrics.

    PubMed

    Berrier, Kyla

    2016-01-01

    Medication errors may occur during parental administration of prescription and over-the-counter medications in the outpatient pediatric setting. Misinterpretation of medication labels and dosing errors are two types of errors in medication administration. Health literacy may play an important role in parents' ability to safely manage their child's medication regimen. There are several proposed strategies for decreasing these medication administration errors, including using standardized dosing instruments, using strictly metric units for medication dosing, and providing parents and caregivers with picture-based dosing instructions. Pediatric healthcare providers should be aware of these strategies and seek to implement many of them into their practices. PMID:27537086

  15. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  16. Knowledge of healthcare professionals about medication errors in hospitals

    PubMed Central

    Abdel-Latif, Mohamed M. M.

    2016-01-01

    Context: Medication errors are the most common type of medical error in hospitals and a leading cause of morbidity and mortality among patients. Aims: The aim of the present study was to assess the knowledge of healthcare professionals about medication errors in hospitals. Settings and Design: A self-administered questionnaire was distributed to randomly selected healthcare professionals in eight hospitals in Madinah, Saudi Arabia. Subjects and Methods: An 18-item survey was designed and comprised questions on demographic data, knowledge of medication errors, availability of reporting systems in hospitals, attitudes toward error reporting, and causes of medication errors. Statistical Analysis Used: Data were analyzed with Statistical Package for the Social Sciences software, Version 17. Results: A total of 323 healthcare professionals completed the questionnaire (a 64.6% response rate): 138 (42.72%) physicians, 34 (10.53%) pharmacists, and 151 (46.75%) nurses. A majority of the participants had good knowledge of the medication error concept and its dangers to patients, but only 68.7% were aware of reporting systems in hospitals. Healthcare professionals revealed that there was no clear mechanism available for reporting errors in most hospitals. Prescribing (46.5%) and administration (29%) errors were the main causes of errors. The medications most frequently involved in errors were antihypertensives, antidiabetics, antibiotics, digoxin, and insulin. Conclusions: This study revealed differences in awareness of medication errors among healthcare professionals in hospitals. The poor knowledge about medication errors emphasizes the urgent need to adopt appropriate measures to raise awareness of medication errors in Saudi hospitals. PMID:27330261

  17. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  18. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  19. False Positives in Multiple Regression: Unanticipated Consequences of Measurement Error in the Predictor Variables

    ERIC Educational Resources Information Center

    Shear, Benjamin R.; Zumbo, Bruno D.

    2013-01-01

    Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
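
    The inflation mechanism can be illustrated with a short Python simulation (parameters invented for illustration): the outcome depends only on X1, X2 is merely correlated with X1, and X1 is observed with measurement error. Because the noisy X1 cannot fully adjust for the true X1, the test of X2's coefficient rejects far more often than the nominal 5%.

      import numpy as np

      rng = np.random.default_rng(0)
      n, n_sims = 200, 2000
      false_positives = 0

      for _ in range(n_sims):
          x1 = rng.normal(size=n)
          x2 = 0.7 * x1 + rng.normal(scale=0.7, size=n)   # correlated with x1, no true effect on y
          y = 1.0 * x1 + rng.normal(size=n)               # y depends on x1 only
          x1_obs = x1 + rng.normal(scale=1.0, size=n)     # x1 measured with error

          X = np.column_stack([np.ones(n), x1_obs, x2])
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          sigma2 = resid @ resid / (n - X.shape[1])
          se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
          false_positives += abs(beta[2] / se[2]) > 1.96  # naive cutoff for alpha = .05

      print("empirical Type I error rate for x2:", false_positives / n_sims)   # well above .05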

  20. Mimicking Aphasic Semantic Errors in Normal Speech Production: Evidence from a Novel Experimental Paradigm

    ERIC Educational Resources Information Center

    Hodgson, Catherine; Lambon Ralph, Matthew A.

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…

  1. Critical evidence for the prediction error theory in associative learning.

    PubMed

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-01-01

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning. PMID:25754125
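
    The blocking logic at the heart of the prediction error account can be sketched with a simple Rescorla-Wagner-style update in Python (parameters illustrative, not fitted to the cricket data): once cue A alone fully predicts the reward, the compound AB generates almost no prediction error, so cue B acquires almost no associative strength.

      alpha, reward = 0.3, 1.0
      V = {"A": 0.0, "B": 0.0}

      # Phase 1: cue A alone is paired with reward
      for _ in range(30):
          V["A"] += alpha * (reward - V["A"])

      # Phase 2: compound AB is paired with the same reward
      for _ in range(30):
          delta = reward - (V["A"] + V["B"])   # error on the summed prediction
          V["A"] += alpha * delta
          V["B"] += alpha * delta

      print(round(V["A"], 3), round(V["B"], 3))   # B stays near zero: learning is "blocked"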

  2. Critical evidence for the prediction error theory in associative learning

    PubMed Central

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-01-01

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an “auto-blocking”, which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning. PMID:25754125

  3. Critical evidence for the prediction error theory in associative learning.

    PubMed

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-03-10

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning.

  4. The locative alternation: distinguishing linguistic processing cost from error signals in Broca's region.

    PubMed

    Christensen, Ken Ramshøj; Wallentin, Mikkel

    2011-06-01

    The left inferior frontal gyrus (LIFG) is known to be involved in the processing of syntactic complexity, such as word order variation. It is also known to be involved in semantic interpretation in studies of various types of semantic and pragmatic anomalies. Across neuroimaging studies of language processing, two main approaches can be found: one that contrasts anomalous and well-formed words or sentences in order to yield an error response, and one that contrasts two well-formed syntactic structures differing in complexity, investigating effects of increased integration costs. The present fMRI study aimed at disentangling the error signal from the processing cost signal in LIFG. To do so, we examined the so-called Locative Alternation, which involves the contrast between the Content-Locative construction, e.g. He sprays paint on the wall, and the Container-Locative construction, e.g. He sprays the wall with paint, which have been argued to differ in processing. By including asymmetric verbs, e.g. He blocks the road with rocks vs. *He blocks rocks on the road, we were able to study the contrast between well-formed and anomalous constructions. Participants performed an acceptability judgment task during fMRI. The results showed that increased syntactic integration costs yielded both increased response time and increased LIFG activation. Anomalous sentences yielded low acceptability ratings but no increase in response time, yet they also evoked increased LIFG activation. Thus, the processing cost and the error signal were found to be functionally independent but spatially overlapping in the brain.

  5. Imaginary Companions and Peer Acceptance

    ERIC Educational Resources Information Center

    Gleason, Tracy R.

    2004-01-01

    Early research on imaginary companions suggests that children who create them do so to compensate for poor social relationships. Consequently, the peer acceptance of children with imaginary companions was compared to that of their peers. Sociometrics were conducted on 88 preschool-aged children; 11 had invisible companions, 16 had personified…

  6. Acceptance of Others (Number Form).

    ERIC Educational Resources Information Center

    Masters, James R.; Laverty, Grace E.

    As part of the instrumentation to assess the effectiveness of the Schools Without Failure (SWF) program in 10 elementary schools in the New Castle, Pa. School District, the Acceptance of Others (Number Form) was prepared to determine pupil's attitudes toward classmates. Given a list of all class members, pupils are asked to circle a number from 1…

  7. W-025, acceptance test report

    SciTech Connect

    Roscha, V.

    1994-10-04

    This acceptance test report (ATR) has been prepared to establish the results of the field testing conducted on W-025 to demonstrate that the electrical/instrumentation systems functioned as intended by design. This is part of the RMW Land Disposal Facility.

  8. Euthanasia Acceptance: An Attitudinal Inquiry.

    ERIC Educational Resources Information Center

    Klopfer, Fredrick J.; Price, William F.

    The study presented was conducted to examine potential relationships between attitudes regarding the dying process, including acceptance of euthanasia, and other attitudinal or demographic attributes. The data of the survey was comprised of responses given by 331 respondents to a door-to-door interview. Results are discussed in terms of preferred…

  9. Helping Our Children Accept Themselves.

    ERIC Educational Resources Information Center

    Gamble, Mae

    1984-01-01

    Parents of a child with muscular dystrophy recount their reactions to learning of the diagnosis, their gradual acceptance, and their son's resistance, which was gradually lessened when he was provided with more information and treated more normally as a member of the family. (CL)

  10. Acceptance and Commitment Therapy: Introduction

    ERIC Educational Resources Information Center

    Twohig, Michael P.

    2012-01-01

    This is the introductory article to a special series in Cognitive and Behavioral Practice on Acceptance and Commitment Therapy (ACT). Instead of each article herein reviewing the basics of ACT, this article contains that review. This article provides a description of where ACT fits within the larger category of cognitive behavior therapy (CBT):…

  11. Improving medication administration error reporting systems. Why do errors occur?

    PubMed

    Wakefield, B J; Wakefield, D S; Uden-Holman, T

    2000-01-01

    Monitoring medication administration errors (MAE) is often included as part of the hospital's risk management program. While observation of actual medication administration is the most accurate way to identify errors, hospitals typically rely on voluntary incident reporting processes. Although incident reporting systems are more economical than other methods of error detection, incident reporting can also be a time-consuming process depending on the complexity or "user-friendliness" of the reporting system. Accurate incident reporting systems are also dependent on the ability of the practitioner to: 1) recognize an error has actually occurred; 2) believe the error is significant enough to warrant reporting; and 3) overcome the embarrassment of having committed a MAE and the fear of punishment for reporting a mistake (either one's own or another's mistake).

  12. Predictive error analysis for a water resource management model

    NASA Astrophysics Data System (ADS)

    Gallagher, Mark; Doherty, John

    2007-02-01

    Summary: In calibrating a model, a set of parameters is assigned to the model which will be employed for the making of all future predictions. If these parameters are estimated through solution of an inverse problem, formulated to be properly posed through either pre-calibration or mathematical regularisation, then solution of this inverse problem will, of necessity, lead to a simplified parameter set that omits the details of reality, while still fitting historical data acceptably well. Furthermore, estimates of parameters so obtained will be contaminated by measurement noise. Both of these phenomena will lead to errors in predictions made by the model, with the potential for error increasing with the hydraulic property detail on which the prediction depends. Integrity of model usage demands that model predictions be accompanied by some estimate of the possible errors associated with them. The present paper applies theory developed in a previous work to the analysis of predictive error associated with a real world, water resource management model. The analysis offers many challenges, including the fact that the model is a complex one that was partly calibrated by hand. Nevertheless, it is typical of models which are commonly employed as the basis for the making of important decisions, and for which such an analysis must be made. The potential errors associated with point-based and averaged water level and creek inflow predictions are examined, together with the dependence of these errors on the amount of averaging involved. Error variances associated with predictions made by the existing model are compared with "optimized error variances" that could have been obtained had calibration been undertaken in such a way as to minimize predictive error variance. The contributions by different parameter types to the overall error variance of selected predictions are also examined.

  13. Acceptance test procedure for High Pressure Water Jet System

    SciTech Connect

    Crystal, J.B.

    1995-05-30

    The overall objective of the acceptance test is to demonstrate a combined system. This includes associated tools and equipment necessary to perform cleaning in the 105 K East Basin (KE) for achieving optimum reduction in the level of contamination/dose rate on canisters prior to removal from the KE Basin and subsequent packaging for disposal. Acceptance tests shall include necessary hardware to achieve acceptance of the cleaning phase of canisters. This acceptance test procedure will define the acceptance testing criteria of the high pressure water jet cleaning fixture. The focus of this procedure will be to provide guidelines and instructions to control, evaluate and document the acceptance testing for cleaning effectiveness and method(s) of removing the contaminated surface layer from the canister presently identified in KE Basin. Additionally, the desired result of the acceptance test will be to deliver to K Basins a thoroughly tested and proven system for underwater decontamination and dose reduction. This report discusses the acceptance test procedure for the High Pressure Water Jet.

  14. College students' acceptance of potential treatments for ADHD.

    PubMed

    Carter, Stacy L

    2005-08-01

    The purpose of the current study was to investigate the influence that the professional occupation of a consultant making a treatment recommendation may have on college students' (82 women and 52 men) acceptance of a proposed treatment for a child displaying characteristics of Attention Deficit/Hyperactivity Disorder. Consultants were special education teachers, school psychologists, or physicians. The study also examined college students' ratings of treatment acceptability associated with three frequently implemented interventions of either nonspecific medication, token economy with response cost, or time-out for children with characteristics of Attention Deficit/Hyperactivity Disorder. Analysis indicated college students found a token economy intervention was the least acceptable recommendation by a physician.

  15. Frequency analysis of nonlinear oscillations via the global error minimization

    NASA Astrophysics Data System (ADS)

    Kalami Yazdi, M.; Hosseini Tehrani, P.

    2016-06-01

    The capacity and effectiveness of a modified variational approach, namely global error minimization (GEM), is illustrated in this study. For this purpose, the free oscillations of a rod rocking on a cylindrical surface and the Duffing-harmonic oscillator are treated. In order to validate and exhibit the merit of the method, the obtained results are compared with both the exact frequency and the outcomes of other well-known analytical methods. The comparison reveals that the first-order approximation leads to an acceptable relative error, especially for large initial conditions. The procedure can be promisingly applied to conservative nonlinear problems.
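
    A numerical Python sketch of the GEM idea for the Duffing-harmonic oscillator x'' + x³/(1 + x²) = 0, assuming scipy is available: take a one-term trial solution x(t) = A·cos(ωt), integrate the squared residual of the equation of motion over one period, and minimize that global error with respect to ω. This is a crude numerical stand-in for the paper's analytical first-order treatment.

      import numpy as np
      from scipy.integrate import quad
      from scipy.optimize import minimize_scalar

      A = 1.0   # oscillation amplitude (illustrative)

      def global_error(w):
          def residual_sq(t):
              x = A * np.cos(w * t)
              xdd = -A * w**2 * np.cos(w * t)
              return (xdd + x**3 / (1.0 + x**2))**2
          value, _ = quad(residual_sq, 0.0, 2.0 * np.pi / w)
          return value

      res = minimize_scalar(global_error, bounds=(0.1, 2.0), method="bounded")
      print("approximate frequency:", round(res.x, 4))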

  16. Analysis of Medication Errors in Simulated Pediatric Resuscitation by Residents

    PubMed Central

    Porter, Evelyn; Barcega, Besh; Kim, Tommy Y.

    2014-01-01

    Introduction: The objective of our study was to estimate the incidence of prescribing medication errors specifically made by a trainee and identify factors associated with these errors during the simulated resuscitation of a critically ill child. Methods: The results of the simulated resuscitation are described. We analyzed data from the simulated resuscitation for the occurrence of a prescribing medication error. We compared univariate analysis of each variable to medication error rate and performed a separate multiple logistic regression analysis on the significant univariate variables to assess the association between the selected variables. Results: We reviewed 49 simulated resuscitations. The final medication error rate for the simulation was 26.5% (95% CI 13.7% – 39.3%). On univariate analysis, statistically significant findings for decreased prescribing medication error rates included senior residents in charge, presence of a pharmacist, sleeping greater than 8 hours prior to the simulation, and a visual analog scale score showing more confidence in caring for critically ill children. Multiple logistic regression analysis using the above significant variables showed only the presence of a pharmacist to remain significantly associated with decreased medication error, odds ratio of 0.09 (95% CI 0.01 – 0.64). Conclusion: Our results indicate that the presence of a clinical pharmacist during the resuscitation of a critically ill child reduces the medication errors made by resident physician trainees. PMID:25035756

  17. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  18. Explaining Errors in Children's Questions

    ERIC Educational Resources Information Center

    Rowland, Caroline F.

    2007-01-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…

  19. Dual Processing and Diagnostic Errors

    ERIC Educational Resources Information Center

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  20. Quantifying error distributions in crowding.

    PubMed

    Hanus, Deborah; Vul, Edward

    2013-03-22

    When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogeneous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.

  1. Children's Scale Errors with Tools

    ERIC Educational Resources Information Center

    Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi

    2011-01-01

    Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…

  2. Outpatient Prescribing Errors and the Impact of Computerized Prescribing

    PubMed Central

    Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W

    2005-01-01

    Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752
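    As a quick arithmetic check (a minimal sketch, not part of the study), the 95% confidence interval quoted for the overall prescribing-error rate follows from the standard Wald interval for a proportion:

```python
import math

errors, prescriptions = 143, 1879
p_hat = errors / prescriptions                        # 0.076 -> 7.6%
se = math.sqrt(p_hat * (1 - p_hat) / prescriptions)   # standard error of the proportion
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"error rate = {p_hat:.1%}, 95% CI {lo:.1%} to {hi:.1%}")   # about 6.4% to 8.8%
```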

  3. 2013 SYR Accepted Poster Abstracts.

    PubMed

    2013-01-01

    Promote Health and Well-being Among Middle School Educators. 20. A Systematic Review of Yoga-based Interventions for Objective and Subjective Balance Measures. 21. Disparities in Yoga Use: A Multivariate Analysis of 2007 National Health Interview Survey Data. 22. Implementing Yoga Therapy Adapted for Older Veterans Who Are Cancer Survivors. 23. Randomized, Controlled Trial of Yoga for Women With Major Depressive Disorder: Decreased Ruminations as Potential Mechanism for Effects on Depression? 24. Yoga Beyond the Metropolis: A Yoga Telehealth Program for Veterans. 25. Yoga Practice Frequency, Relationship Maintenance Behaviors, and the Potential Mediating Role of Relationally Interdependent Cognition. 26. Effects of Medical Yoga in Quality of Life, Blood Pressure, and Heart Rate in Patients With Paroxysmal Atrial Fibrillation. 27. Yoga During School May Promote Emotion Regulation Capacity in Adolescents: A Group Randomized, Controlled Study. 28. Integrated Yoga Therapy in a Single Session as a Stress Management Technique in Comparison With Other Techniques. 29. Effects of a Classroom-based Yoga Intervention on Stress and Attention in Second and Third Grade Students. 30. Improving Memory, Attention, and Executive Function in Older Adults with Yoga Therapy. 31. Reasons for Starting and Continuing Yoga. 32. Yoga and Stress Management May Buffer Against Sexual Risk-Taking Behavior Increases in College Freshmen. 33. Whole-systems Ayurveda and Yoga Therapy for Obesity: Outcomes of a Pilot Study. 34. Women's Phenomenological Experiences of Exercise, Breathing, and the Body During Yoga for Smoking Cessation Treatment. 35. Mindfulness as a Tool for Trauma Recovery: Examination of a Gender-responsive Trauma-informed Integrative Mindfulness Program for Female Inmates. 36. Yoga After Stroke Leads to Multiple Physical Improvements. 37. Tele-Yoga in Patients With Chronic Obstructive Pulmonary Disease and Heart Failure: A Mixed-methods Study of Feasibility, Acceptability, and Safety

  5. Challenge and error: critical events and attention-related errors.

    PubMed

    Cheyne, James Allan; Carriere, Jonathan S A; Solman, Grayden J F; Smilek, Daniel

    2011-12-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error↔attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention lapses: resource-depleting cognitions that interfere with attention to subsequent task challenges. Attention lapses lead to errors, and errors themselves are a potent consequence, often leading to further attention lapses and potentially initiating a spiral into more serious errors. We investigated this challenge-induced error↔attention-lapse model using the Sustained Attention to Response Task (SART), a GO-NOGO task requiring continuous attention and responses to a number series and the withholding of responses to a rare NOGO digit. We found response speed and increased commission errors following task challenges to be a function of temporal distance from, and prior performance on, previous NOGO trials. We conclude by comparing and contrasting the present theory and findings with those based on choice paradigms and argue that the present findings have implications for the generality of conflict monitoring and control models.
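    The key analysis described above bins commission errors on NOGO trials by how many trials have elapsed since the previous NOGO trial. The sketch below shows one way such an analysis could be organized; the trial log, column names, and probabilities are hypothetical stand-ins, not the authors' SART data.

```python
import numpy as np
import pandas as pd

# Hypothetical SART trial log: one row per trial.
rng = np.random.default_rng(1)
n_trials = 900
trials = pd.DataFrame({
    "is_nogo": rng.random(n_trials) < 0.11,       # rare NOGO digit
    "responded": rng.random(n_trials) < 0.95,     # key press on this trial
})

# Number of trials elapsed since the most recent NOGO trial (the "task challenge").
last_nogo = -np.inf
since_nogo = np.empty(n_trials)
for i, is_nogo_flag in enumerate(trials["is_nogo"]):
    since_nogo[i] = i - last_nogo
    if is_nogo_flag:
        last_nogo = i
trials["trials_since_nogo"] = since_nogo

# Commission errors are responses on NOGO trials; bin their rate by distance
# from the previous NOGO trial.
nogo = trials[trials["is_nogo"]].copy()
nogo["commission_error"] = nogo["responded"]
bins = pd.cut(nogo["trials_since_nogo"], [0, 4, 9, 19, np.inf],
              labels=["1-4", "5-9", "10-19", "20+"])
print(nogo.groupby(bins, observed=False)["commission_error"].mean())
```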

  6. Human error in recreational boating.

    PubMed

    McKnight, A James; Becker, Wayne W; Pettit, Anthony J; McKnight, A Scott

    2007-03-01

    Each year over 600 people die and more than 4000 are reported injured in recreational boating accidents. As with most other accidents, human error is the major contributor. U.S. Coast Guard reports of 3358 accidents were analyzed to identify errors in each of the boat types by which statistics are compiled: auxiliary (motor) sailboats, cabin motorboats, canoes and kayaks, house boats, personal watercraft, open motorboats, pontoon boats, row boats, sail-only boats. The individual errors were grouped into categories on the basis of similarities in the behavior involved. Those presented here are the categories accounting for at least 5% of all errors when summed across boat types. The most revealing and significant finding is the extent to which the errors vary across types. Since boating is carried out with one or two types of boats for long periods of time, effective accident prevention measures, including safety instruction, need to be geared to individual boat types.

  7. Angle interferometer cross axis errors

    SciTech Connect

    Bryan, J.B.; Carter, D.L.; Thompson, S.L.

    1994-01-01

    Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milliradians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them by remachining the reference surfaces.

  8. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
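    The abstract does not list the individual budget terms, but independent 1-sigma error sources in such a budget are conventionally combined as a root-sum-square. The sketch below illustrates that combination with purely hypothetical source values; the names and magnitudes are assumptions, not the STS-1 budget.

```python
import math

# Hypothetical per-axis 1-sigma alignment error sources, in arcseconds.
star_tracker_budget = {
    "star tracker measurement noise": 40.0,
    "navigation base mounting":       35.0,
    "IMU gyro drift over alignment":  30.0,
    "star catalog / pointing bias":   25.0,
}

# Independent 1-sigma terms combine as a root-sum-square (RSS).
rss = math.sqrt(sum(v**2 for v in star_tracker_budget.values()))
print(f"combined 1-sigma alignment error: {rss:.0f} arcsec per axis")
```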

  9. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  10. Error diffusion with a more symmetric error distribution

    NASA Astrophysics Data System (ADS)

    Fan, Zhigang

    1994-05-01

    In this paper a new error diffusion algorithm is presented that effectively eliminates the `worm' artifacts appearing in the standard methods. The new algorithm processes each scanline of the image in two passes, a forward pass followed by a backward one. This enables the error made at one pixel to be propagated to all the `future' pixels. A much more symmetric error distribution is achieved than that of the standard methods. The frequency response of the noise shaping filter associated with the new algorithm is mirror-symmetric in magnitude.
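    The abstract describes the structure of the algorithm (two passes per scanline, forward then backward) but not its filter coefficients. The sketch below is therefore only a simplified stand-in that follows the same forward-then-backward idea with assumed weights; it is not Fan's published filter.

```python
import numpy as np

def two_pass_error_diffusion(img):
    """Binarize a grayscale image (values in [0, 1]) with a simplified
    two-pass, per-scanline error diffusion.

    Illustrative sketch only: the 0.5/0.5 weights and the way the backward
    pass redistributes the held-back error are assumptions, not the
    coefficients of Fan (1994).
    """
    work = img.astype(float).copy()
    out = np.zeros_like(work)
    rows, cols = work.shape
    for y in range(rows):
        held = np.zeros(cols)              # error held back for the backward pass
        # Forward pass: quantize left to right, push half of each error right.
        for x in range(cols):
            out[y, x] = 1.0 if work[y, x] >= 0.5 else 0.0
            err = work[y, x] - out[y, x]
            if x + 1 < cols:
                work[y, x + 1] += 0.5 * err
            held[x] = 0.5 * err
        # Backward pass: sweep right to left, splitting the held-back error
        # between the left neighbour and the next scanline.
        for x in range(cols - 1, -1, -1):
            if x - 1 >= 0:
                held[x - 1] += 0.5 * held[x]
            if y + 1 < rows:
                work[y + 1, x] += 0.5 * held[x]
    return out

# Example: dither a smooth horizontal gradient.
gradient = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
halftone = two_pass_error_diffusion(gradient)
```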

  11. Errors as allies: error management training in health professions education.

    PubMed

    King, Aimee; Holder, Michael G; Ahmed, Rami A

    2013-06-01

    This paper adopts methods from the organisational team training literature to outline how health professions education can improve patient safety. We argue that health educators can improve training quality by intentionally encouraging errors during simulation-based team training. Preventable medical errors are inevitable, but encouraging errors in low-risk settings like simulation can allow teams to have better emotional control and foresight to manage the situation if it occurs again with live patients. Our paper outlines an innovative approach for delivering team training.

  12. Predicting the acceptance of advanced rider assistance systems.

    PubMed

    Huth, Véronique; Gelau, Christhard

    2013-01-01

    The strong prevalence of human error as a crash causation factor in motorcycle accidents calls for countermeasures that help tackle this issue. Advanced rider assistance systems pursue this goal, providing riders with support and thus contributing to the prevention of crashes. However, the systems can only enhance riding safety if the riders use them. For this reason, acceptance is a decisive aspect to be considered in the development process of such systems. In order to improve behavioural acceptance, the factors that influence the intention to use the system need to be identified. This paper examines the particularities of motorcycle riding and the characteristics of this user group that should be considered when predicting the acceptance of advanced rider assistance systems. Founded on theories predicting behavioural intention, the acceptance of technologies and the acceptance of driver support systems, a model of the acceptance of advanced rider assistance systems is proposed, including the perceived safety when riding without support, the interface design and the social norm as determinants of the usage intention. Since actual usage cannot be measured in the development stage of the systems, the willingness to have the system installed on one's own motorcycle and the willingness to pay for the system are analyzed, constituting relevant conditions that allow for actual usage at a later stage. Validation of the model with results from user tests of four advanced rider assistance systems confirms the social norm and the interface design as powerful predictors of the acceptance of ARAS, while the extent of perceived safety when riding without support had no predictive value in the present study.

  13. Accepting the T3D

    SciTech Connect

    Rich, D.O.; Pope, S.C.; DeLapp, J.G.

    1994-10-01

    In April, a 128 PE Cray T3D was installed at Los Alamos National Laboratory's Advanced Computing Laboratory as part of the DOE's High-Performance Parallel Processor Program (H4P). In conjunction with CRI, the authors implemented a 30 day acceptance test. The test was constructed in part to help them understand the strengths and weaknesses of the T3D. In this paper, they briefly describe the H4P and its goals. They discuss the design and implementation of the T3D acceptance test and detail issues that arose during the test. They conclude with a set of system requirements that must be addressed as the T3D system evolves.

  14. Acceptability of reactors in space

    SciTech Connect

    Buden, D.

    1981-01-01

    Reactors are the key to our future expansion into space. However, there has been some confusion in the public as to whether they are a safe and acceptable technology for use in space. The answer to these questions is explored. The US position is that, when reactors are the preferred technical choice, they can be used safely. In fact, it does not appear that reactors add measurably to the risk associated with the Space Transportation System.

  15. Acceptability of reactors in space

    SciTech Connect

    Buden, D.

    1981-04-01

    Reactors are the key to our future expansion into space. However, there has been some confusion in the public as to whether they are a safe and acceptable technology for use in space. The answer to these questions is explored. The US position is that, when reactors are the preferred technical choice, they can be used safely. In fact, it does not appear that reactors add measurably to the risk associated with the Space Transportation System.

  16. Acceptance of colonoscopy requires more than test tolerance

    PubMed Central

    Condon, Amanda; Graff, Lesley; Elliot, Lawrence; Ilnyckyj, Alexandra

    2008-01-01

    BACKGROUND: Colon cancer screening, including colonoscopy, lags behind other forms of cancer screening for participation rates. The intrinsic nature of the endoscopic procedure may be an important barrier that limits patients from finding this test acceptable and affects willingness to undergo screening. With colon cancer screening programs emerging in Canada, test characteristics and their impact on acceptance warrant consideration. OBJECTIVES: To measure the acceptability of colonoscopy and define factors that contribute to procedural acceptability, in relation to another invasive gastrointestinal scope procedure, gastroscopy. PATIENTS AND METHODS: Consecutive patients undergoing a colonoscopy (n=55) or a gastroscopy (n=33) were recruited. Their procedural experience was evaluated and compared pre-endoscopy, immediately before testing and postendoscopy. Questionnaires were used to capture multiple domains of the endoscopy experience and patient characteristics. RESULTS: Patient scope groups did not differ preprocedurally for general or procedure-specific anxiety. However, the colonoscopy group did anticipate more pain. Those who had a gastroscopy demonstrated higher preprocedural acceptance than those who had a colonoscopy. The colonoscopy group had a significant decrease in scope concerns and anxiety postprocedurally. As well, they reported less pain than they anticipated. Regardless, postprocedurally, the colonoscopy group’s acceptance did not increase significantly, whereas the gastroscopy group was almost unanimous in their test acceptance. The best predictor of pretest acceptability of colonoscopy was anticipated pain. CONCLUSIONS: The findings indicate that concerns that relate specifically to colonoscopy, including anticipated pain, influence acceptability of the procedure. However, the experience of a colonoscopy does not lead to improved test acceptance, despite decreases in procedural anxiety and pain. Patients’ preprocedural views of the test are

  17. Designing to Control Flight Crew Errors

    NASA Technical Reports Server (NTRS)

    Schutte, Paul C.; Willshire, Kelli F.

    1997-01-01

    It is widely accepted that human error is a major contributing factor in aircraft accidents. There has been a significant amount of research into why these errors occur, and many reports state that the design of the flight deck can actually predispose humans to err. This research has led to calls for changes in design according to human factors and human-centered principles. The National Aeronautics and Space Administration's (NASA) Langley Research Center has initiated an effort to design a human-centered flight deck from a clean slate (i.e., without the constraints of existing designs). The effort will be based on recent research in human-centered design philosophy and mission management categories. This design will match the human's model of the mission and function of the aircraft to reduce unnatural or non-intuitive interfaces. The product of this effort will be a flight deck design description, including training and procedures, a cross reference or paper trail back to design hypotheses, and an evaluation of the design. The present paper will discuss the philosophy, process, and status of this design effort.

  18. Rectifying calibration error of Goldmann applanation tonometer is easy!

    PubMed

    Choudhari, Nikhil S; Moorthy, Krishna P; Tungikar, Vinod B; Kumar, Mohan; George, Ronnie; Rao, Harsha L; Senthil, Sirisha; Vijaya, Lingam; Garudadri, Chandra Sekhar

    2014-11-01

    Purpose: The Goldmann applanation tonometer (GAT) is the current gold standard tonometer. However, calibration error is common and can go unnoticed in clinics, and repair by the manufacturer has limitations. The purpose of this report is to describe a self-taught technique for rectifying the calibration error of GAT. Materials and Methods: Twenty-nine slit-lamp-mounted Haag-Streit Goldmann tonometers (Model AT 900 C/M; Haag-Streit, Switzerland) were included in this cross-sectional interventional pilot study. The technique for rectifying the calibration error of the tonometer involved cleaning and lubrication of the instrument, followed by alignment of the weights when lubrication alone did not suffice. We followed the South East Asia Glaucoma Interest Group's definition of calibration error tolerance (acceptable GAT calibration error within ±2, ±3 and ±4 mm Hg at the 0, 20 and 60-mm Hg testing levels, respectively). Results: Twelve out of 29 (41.3%) GATs were out of calibration. The range of positive and negative calibration error at the clinically most important 20-mm Hg testing level was 0.5 to 20 mm Hg and -0.5 to -18 mm Hg, respectively. Cleaning and lubrication alone sufficed to rectify the calibration error of 11 (91.6%) faulty instruments. Only one (8.3%) faulty GAT required alignment of the counter-weight. Conclusions: Rectification of calibration error of GAT is possible in-house. Cleaning and lubrication of GAT can be carried out even by eye care professionals and may suffice to rectify calibration error in the majority of faulty instruments. Such an exercise may drastically reduce the downtime of the gold standard tonometer.
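    The tolerance limits quoted from the South East Asia Glaucoma Interest Group translate directly into a simple acceptance check. The sketch below encodes those quoted limits; the function name and example readings are illustrative.

```python
# Acceptable calibration error (mm Hg) at each testing level, as quoted above.
TOLERANCE_MMHG = {0: 2.0, 20: 3.0, 60: 4.0}

def within_calibration_tolerance(test_level_mmhg, measured_error_mmhg):
    """Return True if a GAT calibration error at the given testing level
    falls within the quoted acceptable range."""
    return abs(measured_error_mmhg) <= TOLERANCE_MMHG[test_level_mmhg]

# Example: the worst instrument in the report read 20 mm Hg high at the
# clinically most important 20 mm Hg testing level.
print(within_calibration_tolerance(20, 20.0))   # False -> out of calibration
print(within_calibration_tolerance(20, 2.5))    # True  -> acceptable
```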

  19. 48 CFR 12.402 - Acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Acceptance. 12.402 Section... Acceptance. (a) The acceptance paragraph in 52.212-4 is based upon the assumption that the Government will rely on the contractor's assurances that the commercial item tendered for acceptance conforms to...

  20. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests.